This presentation introduces data structures and algorithms. Key points include:
- A data structure is a way of organizing data that accounts for the relationships between elements and allows for effective processing. Common data structures include arrays, stacks, queues, linked lists, trees, and graphs.
- An algorithm is a set of steps to solve a problem. Time and space complexity analyze its resource usage.
- Common operations on data structures include insertion, deletion, traversal, searching, and sorting of elements.
DSA chapter 1
DATA STRUCTURE AND ALGORITHM
CHAPTER 1: INTRODUCTION
Presented By:
Er. Ganesh Ram Suwal
BE | ME Computer Engineering
Email: suwalganesh@gmail.com
Lecturer
January 2021
INTRODUCTION
Fact: A fact is a universal truth (either TRUE or FALSE). It is an occurrence in the world. A fact is raw data; it is collected from questionnaires, surveys, or some other data collection process.
Data: Data is simply a collection of facts. Data may be a single value or a set of values. The organization of data leads to the structuring of data.
Information: When we process data we gain certain knowledge, and that knowledge is known as information. Data is the raw material of information but is not information itself.
Figure 1.1: Data and Information
DATA STRUCTURE
A data structure is the representation of the logical relationships existing between individual elements of data. A data structure is a way of organizing all the data items that considers not only the elements but also their relationships to each other.
Data structures are therefore the building blocks of a program. The selection of a data structure must focus on two things:
1. The data structure must reflect the relationships existing between the data.
2. The data structure must be simple enough to process the data effectively whenever required.
DATA STRUCTURE
A data structure specifies four things:
1. Organization of data
2. Accessing methods
3. Degree of associativity
4. Processing for information
A data structure can be defined as a triple {D, F, A}, where:
D is the data,
F is the set of functions, and
A is the set of axioms.
Permitted Data Values + Operations = Data Type
Organized Data + Allowed Operations = Data Structure
DATA STRUCTURE - USES
Data structures are extensively used in:
Database Management Systems (DBMS)
Data Mining and Warehousing
Compiler Design
Network Analysis
Numerical Analysis
Artificial Intelligence
Operating Systems
Image Processing
Computer Graphics, etc.
DATA STRUCTURE - CLASSIFICATION
Primitive and Non-Primitive
  Primitive Data Structure
  Non-Primitive Data Structure
According to the Nature of Size
  Static Data Structure
  Dynamic Data Structure
According to its Occurrence
  Linear Data Structure
  Non-Linear Data Structure
Homogeneous and Non-Homogeneous
  Homogeneous Data Structure
  Non-Homogeneous (Heterogeneous) Data Structure
1. Primitive and Non-Primitive DATA STRUCTURE
Primitive data structure
Primitive data structures are the basic data structures, and they are directly operated upon by machine instructions.
The representation of a primitive data structure differs from computer to computer.
Examples: character, integer, floating point numbers, strings
Non-primitive data structure
Non-primitive data structures are complex data structures that are derived from the basic data structures.
Non-primitive data structures emphasize structuring homogeneous and heterogeneous data elements.
Examples: arrays, lists, files
Primitive and Non-Primitive DATA STRUCTURE
Figure 1.2: Primitive and Non-Primitive Data Structure Classification
2. According to the Nature of Size DATA STRUCTURE
Static Data Structure
A data structure which has a fixed size is called a static data structure. The size cannot vary at run time.
An array is used in a static data structure.
char name[100] – the array size is 100, so the array variable 'name' can store at most 100 characters.
Dynamic Data Structure
A data structure whose size can vary, rather than being fixed, is called a dynamic data structure. It uses the concept of pointers.
DMA (Dynamic Memory Allocation) is used in dynamic data structures. Memory is allocated whenever data is required and de-allocated whenever the data is no longer required, so the size of the data structure can grow and shrink.
int *ptr;
The DMA functions malloc(), calloc(), realloc() and free() are used, as sketched below.
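As a concrete illustration of these DMA functions, here is a minimal C sketch (an example, not from the slides) that grows an integer array at run time with malloc(), realloc() and free():

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int n = 5;
        int *ptr = malloc(n * sizeof *ptr);      /* allocate space for 5 ints */
        if (ptr == NULL) return 1;               /* allocation can fail */
        for (int i = 0; i < n; i++) ptr[i] = i;

        n = 10;                                  /* data grew: resize the block */
        int *tmp = realloc(ptr, n * sizeof *tmp);
        if (tmp == NULL) { free(ptr); return 1; }
        ptr = tmp;
        for (int i = 5; i < n; i++) ptr[i] = i;

        for (int i = 0; i < n; i++) printf("%d ", ptr[i]);
        printf("\n");
        free(ptr);                               /* de-allocate when no longer required */
        return 0;
    }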
3. According to its Occurrence DATA STRUCTURE
Linear Data Structure
In a linear data structure the data elements are stored in consecutive memory locations, i.e., the data are stored in a sequential manner.
For example: array, stack, queue, etc.
Non-Linear Data Structure
In a non-linear data structure the data elements are not stored in consecutive memory locations, i.e., they are stored in a non-sequential manner.
These data structures are used to represent hierarchical relationships between data elements.
For example: tree, graph, etc.
According to its Occurrence DATA STRUCTURE
Figure 1.3: Linear and Non-Linear Data Structure Classification
4. Homogeneous and Non-Homogeneous DATA STRUCTURE
Homogeneous Data Structure
A data structure which stores data elements of the same type is called a homogeneous data structure.
For example, an array:
int a[150]; – the array variable 'a' can store 150 data elements, and all the elements must be of integer type only.
Non-Homogeneous Data Structure
A data structure which stores data elements of different types is called a non-homogeneous data structure.
For example, a structure:
    struct student {
        int roll_number;
        char name[100];
        float marks;
    } s1;
Note: the student structure can store integer, character and float data elements.
DATA STRUCTURE - OPERATIONS
1. CREATION: The process of creating a new data structure.
2. INSERTION: The process of inserting a data element into an existing data structure.
3. DELETION: The process of deleting a data element from a data structure.
4. TRAVERSING: The process of visiting each and every element of a data structure.
5. CONCATENATION: The process of concatenating one data structure onto another.
6. SEARCHING: The process of finding a desired data element among the data elements of a data structure.
7. SORTING: The process of arranging a set of data elements in some specific order, for example ascending or descending order.
8. DISPLAY: The process of showing all the elements of a data structure.
9. DESTROYING: The process of destroying the whole data structure.
DATA STRUCTURE - EXAMPLES
1. ARRAY
2. STACK
3. QUEUE
4. LINKED LIST
5. TREE
6. GRAPH
7. SORTING
8. SEARCHING
DATA STRUCTURE - ARRAY
An array is a data structure that is used to store a set of homogeneous data elements as (index, value) pairs in consecutive memory locations under the same name.
char name[7] = {'A', 'B', 'C', 'D', 'E'};
name[0] = 'A'
name[1] = 'B'
name[2] = 'C'
name[3] = 'D'
name[4] = 'E'
Figure 1.4: Showing an Array
16. XP
16
DATA STRUCTURE - STACK
A stack is a non-primitive linear data structure in which insertion and deletion of data elements are done from only one end, called the Top of the Stack (TOS).
The TOP of a STACK is initialized to -1 (TOP = -1).
Figure 1.5: Stack
A stack works on the principle of LIFO (Last In First Out).
The process of inserting a data element into a stack is called the PUSH operation, and on every PUSH operation the variable TOP is incremented by 1. If TOP = MAX-1, the STACK is in the overflow condition.
The process of deleting a data element from a stack is called the POP operation, and on every POP operation the variable TOP is decremented by 1. If TOP = -1, the STACK is in the underflow condition.
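An array-based sketch of PUSH and POP (added for illustration; MAX = 5 and the function names are my own):

#include <stdio.h>
#define MAX 5

int stack[MAX];
int top = -1;                  /* TOP is initialized to -1 */

void push(int x) {
    if (top == MAX - 1) {      /* TOP = MAX-1: overflow condition */
        printf("Stack overflow\n");
        return;
    }
    stack[++top] = x;          /* increment TOP by 1, then store */
}

int pop(void) {
    if (top == -1) {           /* TOP = -1: underflow condition */
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];       /* return element, then decrement TOP by 1 */
}

int main(void) {
    push(10);
    push(20);
    printf("%d\n", pop());     /* prints 20: last in, first out */
    return 0;
}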
DATA STRUCTURE - QUEUE
A QUEUE is a non-primitive linear data structure in which data elements are inserted at one end, called the rear of the queue, and deleted from the other end, called the front of the queue.
The REAR and FRONT of a QUEUE are initialized to -1 (REAR = -1, FRONT = -1).
Figure 1.6: Queue
A queue works on the principle of FIFO (First In First Out).
The process of inserting a data element into a queue is called the ENQUEUE operation, and on every ENQUEUE operation the variable REAR is incremented by 1.
The process of deleting a data element from a queue is called the DEQUEUE operation, and on every DEQUEUE operation the variable FRONT is incremented by 1.
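An array-based sketch of ENQUEUE and DEQUEUE (added for illustration; MAX = 5 and the function names are my own; FRONT == REAR is taken as the empty condition):

#include <stdio.h>
#define MAX 5

int queue[MAX];
int front = -1, rear = -1;     /* REAR and FRONT initialized to -1 */

void enqueue(int x) {
    if (rear == MAX - 1) {     /* no room at the rear */
        printf("Queue overflow\n");
        return;
    }
    queue[++rear] = x;         /* increment REAR by 1, then store */
}

int dequeue(void) {
    if (front == rear) {       /* FRONT has caught up to REAR: empty */
        printf("Queue underflow\n");
        return -1;
    }
    return queue[++front];     /* increment FRONT by 1, then return */
}

int main(void) {
    enqueue(1);
    enqueue(2);
    printf("%d\n", dequeue()); /* prints 1: first in, first out */
    return 0;
}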
DATA STRUCTURE – LINKED LIST
Lists are the most commonly used non-primitive linear data structures.
A list can be defined as a collection of a variable number of data elements.
An element of a list is called a node, and each node contains at least two fields: one for storing the actual information and another for storing the address of the next element (using a pointer).
Types Of Linked List
Singly Linked List
Circular Linked List
Doubly Linked List
Circular Doubly Linked List
Figure 1.7: Singly Linked List
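A minimal sketch (added for illustration) of a singly linked list node and its traversal in C:

#include <stdio.h>
#include <stdlib.h>

/* each node stores the actual information and the address of the next node */
struct node {
    int info;
    struct node *next;
};

int main(void) {
    /* build a two-node singly linked list: 1 -> 2 -> NULL */
    struct node *head = malloc(sizeof(struct node));
    head->info = 1;
    head->next = malloc(sizeof(struct node));
    head->next->info = 2;
    head->next->next = NULL;

    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->info);   /* follow the next pointers */
    printf("NULL\n");

    free(head->next);
    free(head);
    return 0;
}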
DATA STRUCTURE - TREE
A tree is a non-primitive non-linear data structure in which data elements are arranged in a hierarchical order.
A tree is a finite set of data elements (nodes). The data elements in a tree represent the hierarchical relationship between various elements.
Figure 1.8: Tree
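For illustration (the slide does not fix a representation), a binary-tree node is commonly declared in C as:

/* a binary-tree node: data plus pointers to its left and right children */
struct tree_node {
    int data;
    struct tree_node *left;
    struct tree_node *right;
};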
DATA STRUCTURE - GRAPH
A graph is a non-linear data structure used to represent many kinds of physical structures.
Its applications are in computer networks, engineering science, mathematics, chemistry etc.
Thus a graph G is a collection of two sets V and E, where V is the set of vertices v0, v1, …, vn and E is the set of edges e1, e2, …, en. This can be represented as
G = (V, E), where
V(G) = {v0, v1, …, vn} is the set of vertices
E(G) = {e1, e2, …, en} is the set of edges
Figure 1.9: Graph
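A minimal sketch (added for illustration) representing an undirected graph G = (V, E) with an adjacency matrix; the vertex count and the edges chosen are my own:

#include <stdio.h>
#define V 4   /* vertices v0..v3 */

int main(void) {
    int adj[V][V] = {0};           /* adjacency matrix, 0 = no edge */

    adj[0][1] = adj[1][0] = 1;     /* undirected edge (v0, v1) */
    adj[1][2] = adj[2][1] = 1;     /* undirected edge (v1, v2) */
    adj[2][3] = adj[3][2] = 1;     /* undirected edge (v2, v3) */

    for (int i = 0; i < V; i++) {  /* print the matrix row by row */
        for (int j = 0; j < V; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
    return 0;
}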
DATA STRUCTURE - SORTING
Sorting is the process of arranging data elements in some particular order. The order may be ascending, descending, or some priority-based order.
Types of Sorting
Internal Sorting
External Sorting
Types of Internal Sorting
Bubble sort
Selection sort
Insertion sort
Quick sort
Merge sort
Shell sort
Radix sort
Heap sort
Figure 1.10: Sorting
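As one concrete instance of internal sorting, a minimal bubble sort sketch in C (added for illustration; the input array is my own):

#include <stdio.h>

/* bubble sort: repeatedly swap adjacent out-of-order elements */
void bubble_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}

int main(void) {
    int a[] = {5, 1, 4, 2, 3};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);   /* prints 1 2 3 4 5 (ascending order) */
    printf("\n");
    return 0;
}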
DATA STRUCTURE - SEARCHING
Searching is the process of identifying a data element from a set of data with the help of a key, where the key may be either an internal key or an external key.
Types of Searching
Internal Searching
External Searching
Types of Internal Searching
Sequential search
Binary search
Interpolation search
Searching Cases
Best Case: The search succeeds with the minimum time or number of comparisons.
Worst Case: The search takes the maximum time or number of comparisons, or the search fails.
Average Case: The search takes an average amount of time.
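A minimal binary search sketch (added for illustration; the sorted array and key are my own), which also shows the best case (key found at the first probe) and worst case (search fails):

#include <stdio.h>

/* binary search on a sorted array; returns the index of key, or -1 */
int binary_search(const int a[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == key)
            return mid;        /* best case: found at the first probe */
        else if (a[mid] < key)
            low = mid + 1;     /* key must lie in the right half */
        else
            high = mid - 1;    /* key must lie in the left half */
    }
    return -1;                 /* worst case: search failed */
}

int main(void) {
    int a[] = {2, 4, 6, 8, 10};
    printf("%d\n", binary_search(a, 5, 8));  /* prints 3 */
    return 0;
}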
DATA STRUCTURE - IMPLEMENTATION
1. Static Implementation of Data Structure
An array is used in the static implementation of a data structure.
The size of the data structure cannot vary during execution time.
It is useful if the size of the data structure is fixed.
int a[150] – the array variable ‘a’ can store 150 data elements.
Advantages:
Searching is faster, as the elements are in contiguous memory locations.
The compiler manages the memory.
Disadvantages:
The size is fixed. It must be known at design time.
Insertion and deletion involve unnecessary shifting of the rest of the elements.
2. Dynamic Implementation of Data Structure
Pointers are used in the dynamic implementation of a data structure.
DATA STRUCTURE – DYNAMIC IMPLEMENTATION (contd…)
Dynamic memory allocation (allocating memory whenever it is required and de-allocating it whenever it is no longer needed) makes the data structure dynamic.
The size of the data structure may grow and shrink during execution time.
Various DMA functions are used: there are four library functions, malloc(), calloc(), realloc() and free(), for memory management.
These functions are declared in the header file stdlib.h (alloc.h in some older compilers).
For example:
int *ptr;
Syntax: ptr = (data_type *)malloc(sizeof(data_type));
For example: ptr = (int *)malloc(sizeof(int));
Syntax: ptr = (data_type *)calloc(no_of_blocks, size_of_each_block);
Syntax: ptr = realloc(ptr, new_size);
Syntax: free(ptr);
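A runnable sketch (added for illustration; the block sizes are my own) combining these calls, showing the data structure growing via realloc():

#include <stdio.h>
#include <stdlib.h>   /* malloc, calloc, realloc, free */

int main(void) {
    /* allocate space for 3 integers */
    int *ptr = (int *)malloc(3 * sizeof(int));
    if (ptr == NULL) return 1;     /* allocation can fail */

    for (int i = 0; i < 3; i++)
        ptr[i] = i + 1;

    /* grow the block to hold 5 integers */
    int *tmp = (int *)realloc(ptr, 5 * sizeof(int));
    if (tmp == NULL) { free(ptr); return 1; }
    ptr = tmp;
    ptr[3] = 4;
    ptr[4] = 5;

    for (int i = 0; i < 5; i++)
        printf("%d ", ptr[i]);     /* prints 1 2 3 4 5 */
    printf("\n");

    free(ptr);                     /* de-allocate when no longer needed */
    return 0;
}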
DATA STRUCTURE – ABSTRACT DATA TYPE (ADT)
An Abstract Data Type is a mathematical model on which various operations are defined.
An abstract data type is a special kind of data type whose behavior is defined by a set of values and a set of operations.
The keyword “Abstract” is used because we can use these data types and perform different operations on them, but how those operations work is totally hidden from the user.
When we use abstract data types, our programs are divided into two parts:
1. The application: the part that uses the abstract data type.
2. The implementation: the part that implements the abstract data type.
Figure 1.11: Abstract Data Type
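A minimal C sketch of this application/implementation split (added for illustration; the stack ADT and the file names stack.h/stack.c are my own choice of example, not fixed by the slide):

/* stack.h – the interface the application sees (hypothetical file) */
typedef struct stack Stack;   /* opaque type: values and operations visible, representation hidden */
Stack *stack_create(void);
void   stack_push(Stack *s, int x);
int    stack_pop(Stack *s);

/* stack.c – the implementation, hidden from the user (hypothetical file) */
#include <stdlib.h>
struct stack { int data[100]; int top; };
Stack *stack_create(void) {
    Stack *s = malloc(sizeof *s);
    if (s != NULL) s->top = -1;
    return s;
}
void stack_push(Stack *s, int x) { if (s->top < 99) s->data[++s->top] = x; }
int  stack_pop(Stack *s)         { return (s->top >= 0) ? s->data[s->top--] : -1; }

The application only includes stack.h, so the implementation can change (say, to a linked list) without touching application code.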
ALGORITHM
An algorithm is a step-by-step procedure to solve a problem. An algorithm is a sequence of instructions designed in such a way that if the instructions are executed in the specified sequence, the desired result will be obtained.
Characteristics of Algorithm
Each and every instruction should be precise and unambiguous. Instructions should be concise and efficient.
Each instruction should be such that it can be performed in a finite time.
Instructions should not be repeated infinitely. This ensures that the algorithm will ultimately terminate.
After performing the instructions, that is, after the algorithm terminates, the desired result must be obtained.
ALGORITHM - EXAMPLE
Example 1:
Write an algorithm for checking whether a number is odd or even. The number is entered by the user.
Step 1: Read a number from the user
Step 2: Divide the entered number by 2 and check the remainder
If the remainder is 0,
print that the given number is even
else
print that the given number is odd
Step 3: Stop
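A minimal C translation of this algorithm (added for illustration):

#include <stdio.h>

int main(void) {
    int n;
    printf("Enter a number: ");          /* Step 1: read a number from the user */
    if (scanf("%d", &n) != 1) return 1;

    if (n % 2 == 0)                      /* Step 2: check the remainder on division by 2 */
        printf("The given number is even\n");
    else
        printf("The given number is odd\n");
    return 0;                            /* Step 3: stop */
}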
ALGORITHM - Complexity Analysis
Time Complexity
Time complexity is defined as the amount of time required by an algorithm to solve a particular problem.
Time complexity depends on the implementation of the algorithm, the programming language, the optimizing capabilities of the compiler, the CPU speed, other hardware characteristics, etc.
The time complexity also depends on the amount of data input to the algorithm.
Space Complexity
Space complexity is defined as the amount of space required by an algorithm to solve a particular problem.
The space needed consists of the following components:
1. Instruction space
2. Data space
3. Environment stack space
ALGORITHM - Design Approach
Greedy algorithms
Divide and conquer algorithms
Backtracking
Randomized algorithms
There are typically three scenarios in complexity analysis:
1. Best case: the least amount of time this code/algorithm needs to execute.
2. Worst case: the maximum amount of time this code/algorithm needs to execute.
3. Average case: as the name suggests, the average amount of time required to execute this code.
ALGORITHM - Asymptotic Analysis
Asymptotic notation is the simplest and easiest way of describing the running time of an algorithm.
It represents the efficiency and performance of an algorithm in a systematic and meaningful manner.
Asymptotic analysis is the theoretical analysis of an algorithm.
The following notations are used for asymptotic analysis:
1. Big-Oh (O) notation
2. Omega (Ω) notation
3. Theta (θ) notation
1. Big-Oh (O) Notation
It is used to express the upper bound of the running time of an algorithm.
It is denoted by ‘O’. Using this notation, we can compute the maximum possible amount of time that an algorithm will take to complete.
Definition: Let f(n) and g(n) be two positive functions of n, where n is the size of the input data. Then f(n) is big-Oh of g(n) if and only if there exist a positive constant C and an integer n0 such that f(n) ≤ C g(n) for all n > n0.
Here, f(n) = O(g(n))
Figure 1.12: Big-Oh Notation
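A short worked example (added for illustration) of applying this definition: let f(n) = 3n + 2 and g(n) = n. Choose C = 4 and n0 = 2. For every n > n0 we have 2 ≤ n, so f(n) = 3n + 2 ≤ 3n + n = 4n = C g(n). Hence f(n) = O(g(n)) = O(n).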
Big-Oh (O) Notation – Contd…
1. O(1): constant time, “order 1”
2. O(N): linear time, “order N”
3. O(N²): quadratic time, “order N squared”
4. O(log N): logarithmic time, “order log N”
5. O(N log N): log-linear time, “order N log N”
6. O(N!): factorial time, “order N factorial”
2. Big Omega (Ω) Notation
This notation gives a lower bound for a function to within a constant factor.
We write f(n) = Ω(g(n)) if there are a positive constant C and an integer n0 such that, to the right of n0, the value of f(n) always lies on or above C g(n).
Definition: Let f(n) and g(n) be two positive functions of n, where n is the size of the input data. Then f(n) is omega of g(n) if and only if there exist a positive constant C and an integer n0 such that f(n) ≥ C g(n) for all n > n0.
Here, f(n) = Ω(g(n))
Figure 1.13: Big-Omega Notation
3. Big Theta (θ) Notation
The theta notation is used to express the running time of an algorithm between its lower and upper bounds.
Theta notation is denoted by θ. Using this notation, we can compute a tight (average) bound on the time that an algorithm will take to complete.
Definition: Let f(n) and g(n) be two positive functions of n, where n is the size of the input data. Then f(n) is theta of g(n) if and only if there exist two positive constants C1 and C2 and an integer n0 such that C1 g(n) ≤ f(n) ≤ C2 g(n) for all n > n0.
Here, f(n) = θ(g(n))
Figure 1.14: Big-Theta Notation