This document proposes an approach to improve the efficiency of the Apriori algorithm for association rule mining. Apriori is inefficient because it requires multiple scans of the transaction database to find frequent itemsets. The proposed approach reduces this inefficiency in two ways: 1) it shrinks the transaction database by removing transactions whose size is smaller than the candidate itemset size, and 2) it scans only the relevant transactions when counting candidate itemsets, rather than the full database, by using the transaction IDs of minimum-support items recorded during the algorithm's first pass. An example demonstrates how the approach reduces the database and the number of transactions scanned, generating frequent itemsets more efficiently than the standard Apriori algorithm.
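The two reductions described above can be sketched in code. The following is a minimal Python sketch under my own assumptions (the function name, data layout, and the choice to scan the least frequent item's TID list are illustrative, not taken from the document):

```python
def frequent_k_itemsets(transactions, candidates, tid_lists, k, min_support):
    """Hypothetical sketch of the two pruning ideas:
    (1) drop transactions shorter than the candidate size k, and
    (2) count each candidate by scanning only the transactions in the
    TID list of its least frequent item (built in the first pass)."""
    # step 1: reduce the database for this pass
    reduced = {tid: items for tid, items in transactions.items() if len(items) >= k}
    frequent = {}
    for cand in candidates:
        # step 2: scan only the relevant transactions, not the full database
        rarest = min(cand, key=lambda item: len(tid_lists[item]))
        support = sum(1 for tid in tid_lists[rarest]
                      if tid in reduced and set(cand) <= reduced[tid])
        if support >= min_support:
            frequent[cand] = support
    return frequent
```

With short transactions removed and only TID-listed transactions visited, each counting pass touches far fewer records than a full database scan.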
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Association relationships are an important component of data mining. In real-world applications, the knowledge used to aid decision-making is time-varying; however, most existing data mining approaches rely on the assumption that discovered knowledge is valid indefinitely. To support better decision-making, it is desirable to identify the temporal features associated with interesting patterns or rules. This paper presents a novel approach for mining Efficient Temporal Association Rules (ETAR). The basic idea of ETAR is to first partition the database into time periods and then progressively accumulate the occurrence count of each itemset based on the intrinsic partitioning characteristics. The execution time of ETAR is orders of magnitude smaller than that of schemes directly extended from existing methods, because it scans the database only once.
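The partition-and-accumulate idea can be illustrated with a small sketch (a hypothetical reading of the approach; the partition layout and function name are my assumptions, not ETAR's actual procedure):

```python
from collections import Counter

def partitioned_counts(partitions):
    """Sketch of progressive count accumulation: the database is split into
    time-period partitions, each partition is read exactly once, and the
    running counts after each period are recorded; the whole database is
    therefore scanned only once."""
    cumulative = Counter()
    per_period = {}
    for period, transactions in partitions:
        for transaction in transactions:
            for item in transaction:
                cumulative[item] += 1
        per_period[period] = dict(cumulative)  # counts accumulated up to this period
    return per_period
```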
Scalable frequent itemset mining using heterogeneous computing par apriori a... (ijdpsjournal)
Association rule mining is one of the dominant tasks of data mining; it is concerned with finding frequent itemsets in large volumes of data in order to produce summarized models of mined rules. These models are extended to generate association rules in applications such as e-commerce, bioinformatics, associations between image content and non-image features, and analysis of sales effectiveness in the retail industry. In rapidly growing databases, the major challenge is mining frequent itemsets in a very short period of time; as data grows, the time taken to process it should remain almost constant. Because high-performance computing offers many processors and many cores, consistent runtime performance for association rule mining on very large databases can be achieved; we must therefore rely on high-performance parallel and/or distributed computing. In our literature survey, we studied sequential Apriori algorithms and identified the fundamental problems in both sequential and parallel environments. We propose ParApriori, a parallel algorithm for GPGPUs, and analyze the results of this GPU-parallel algorithm. We find that the proposed algorithm improves computing time and delivers consistent performance under increasing load. The empirical analysis also verifies the algorithm's efficiency and scalability over a series of datasets on a many-core GPU platform.
An Improved Frequent Itemset Generation Algorithm Based On Correspondence (cscpconf)
Association rules play a vital role in the present-day market, especially the efficient generation of maximal frequent itemsets. The efficiency of association rule mining is determined by the number of database scans required to generate the frequent itemsets, which is in turn proportional to the running time; reducing it leads to faster computation of the frequent itemsets. This paper presents a single-scan algorithm that uses a mapping of item numbers and array indexing to generate the frequent itemsets dynamically and quickly. The proposed algorithm is incremental in that it generates frequent itemsets as and when data is entered into the database.
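The item-number-to-array-index idea can be sketched as follows (a hypothetical illustration; the class name and interface are my assumptions, not the paper's):

```python
class IncrementalItemCounter:
    """Sketch of the single-scan, array-indexed idea: items are mapped to
    array indices, so each incoming transaction updates support counts in
    O(|transaction|) time without rescanning the database."""

    def __init__(self, num_items, min_support):
        self.counts = [0] * num_items   # index i holds the count of item i
        self.min_support = min_support

    def add_transaction(self, items):
        # incremental update: counts change as data is entered
        for item in items:
            self.counts[item] += 1

    def frequent_items(self):
        # items meeting minimum support, available at any point in the stream
        return [i for i, c in enumerate(self.counts) if c >= self.min_support]
```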
Result analysis of mining fast frequent itemset using compacted data (ijistjournal)
Data mining and knowledge discovery in databases is attracting a wide array of non-trivial research, supporting industrial decision-support systems and continuing to expand into promising fields such as Artificial Intelligence while facing real-world challenges. Association rules form an important paradigm in data mining for various databases: transactional, time-series, spatial, object-oriented, and so on. The burgeoning amount of data in multiple heterogeneous sources, combined with the difficulty of building and maintaining central repositories, compels the need for effective distributed mining techniques. The majority of previous studies rely on an Apriori-like candidate generation-and-test approach. For these applications, such techniques prove expensive and slow, and they perform poorly when long patterns exist.
Top Down Approach to find Maximal Frequent Item Sets using Subset Creation (cscpconf)
Association rule mining has been an area of active research in the field of knowledge discovery. Data mining researchers have improved the quality of association rule mining for business development by incorporating influential factors such as value (utility), quantity of items sold (weight), and more into the mining of association patterns. In this paper, we propose an efficient approach that finds maximal frequent itemsets first. Most algorithms in the literature find minimal frequent itemsets first and then derive the maximal frequent itemsets from them; these methods consume more time. To overcome this problem, we propose a novel approach that finds maximal frequent itemsets directly using the concept of subsets. The proposed method is found to be efficient in finding maximal frequent itemsets.
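One plausible reading of the top-down idea is sketched below (assumptions: a breadth-first descent from the full itemset and a naive support count; the authors' exact subset-creation procedure may differ):

```python
from itertools import combinations

def maximal_frequent(transactions, items, min_support):
    """Top-down sketch: test the largest itemset first and descend to
    (k-1)-subsets only when a set is infrequent, so frequent sets are
    reported as maximal directly, without building up from single items."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    maximal = []
    level = [frozenset(items)]
    while level:
        next_level = set()
        for cand in level:
            if support(cand) >= min_support:
                # frequent: maximal unless a frequent superset was already found
                if not any(cand < m for m in maximal):
                    maximal.append(cand)
            elif len(cand) > 1:
                # infrequent: descend to its (k-1)-subsets
                next_level.update(frozenset(s) for s in combinations(cand, len(cand) - 1))
        level = list(next_level)
    return maximal
```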
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A PREFIXED-ITEMSET-BASED IMPROVEMENT FOR APRIORI ALGORITHM (csandit)
Association rules are a very important part of data mining, used to find interesting patterns in transaction databases. The Apriori algorithm is one of the most classical association rule algorithms, but it has an efficiency bottleneck. In this article, we propose a prefixed-itemset-based data structure for candidate itemset generation; with the help of this structure we improve the efficiency of the classical Apriori algorithm.
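A prefix-grouped candidate join of the kind described might look like this (a sketch under my assumptions; the paper's prefixed-itemset structure may differ in detail):

```python
from collections import defaultdict

def prefix_join(frequent_k):
    """Sketch of prefix-based candidate generation: sorted k-itemsets are
    grouped by their common (k-1)-prefix, so the Apriori join only compares
    itemsets within the same group instead of all pairs."""
    groups = defaultdict(list)
    for itemset in frequent_k:            # itemsets assumed to be sorted tuples
        groups[itemset[:-1]].append(itemset[-1])
    candidates = []
    for prefix, tails in groups.items():
        tails.sort()
        # every pair of tails sharing this prefix yields a (k+1)-candidate
        for i in range(len(tails)):
            for j in range(i + 1, len(tails)):
                candidates.append(prefix + (tails[i], tails[j]))
    return candidates
```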
Modified Bit-Apriori Algorithm for Frequent Item-Sets in Data Mining (idescitation)
Mining frequent item-sets is one of the most important concepts in data mining, and a fundamental, initial task of it. Apriori [3] is the most popular and frequently used algorithm for finding frequent item-sets; other algorithms, viz. Eclat [4] and FP-growth [5], are also used for this purpose. To improve the time efficiency of the Apriori algorithm, Jiemin Zheng introduced the Bit-Apriori [1] algorithm with the following modifications with respect to Apriori [3]:
1) Support counting is implemented by performing a bitwise "AND" operation on binary strings.
2) Special equal-support pruning.
In this paper, to further improve the time efficiency of the Bit-Apriori [1] algorithm, a novel algorithm that deletes infrequent items at trie level 2 and subsequent trie levels is proposed and demonstrated with an example.
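Modification (1), the bitwise support count, can be illustrated directly (a minimal sketch; the bitmap layout, one bit per transaction, is my assumption):

```python
def support_by_bitwise_and(bitmaps, itemset, num_transactions):
    """Support of an itemset computed as the popcount of the bitwise AND of
    its items' binary strings, where bit i of an item's string is set when
    transaction i contains that item."""
    combined = (1 << num_transactions) - 1   # start with every transaction set
    for item in itemset:
        combined &= bitmaps[item]            # keep transactions containing item
    return bin(combined).count("1")          # popcount = support count
```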
The classical association rule mining algorithm discovers frequent itemsets from transactional databases by considering only the appearance of an itemset, not other utilities such as the profit of an item or the quantity in which items are bought. In transactional databases, however, a large purchased quantity of an item may yield very high profit even if the item appears in few transactions. The quantity of an item is therefore one of the most important components, and ignoring it may lead to loss of information. Here, binary attributes of an item are considered for calculating item weight using a link-based model. This paper provides a novel framework, the Quantity Based Association Rule Mining (QBARM) algorithm, which considers both quantity and item weight.
Frequent pattern mining techniques are helpful for finding interesting trends or patterns in massive data, and prior domain knowledge helps in deciding an appropriate minimum support threshold. This review article surveys frequent pattern mining techniques based on Apriori, FP-tree, or user-defined methods, under different computing environments such as parallel or distributed settings and available data mining tools, that help determine interesting frequent patterns/itemsets with or without prior domain knowledge. The review aims to help develop efficient and scalable frequent pattern mining techniques.
Comparative analysis of association rule generation algorithms in data streams (IJCI JOURNAL)
Data mining technology is concerned with extracting useful and previously unknown information from huge databases. Generally, data mining methods target static databases; for dynamic databases, currently available techniques are not appropriate and have a number of limitations. A data stream manages dynamic data sets and has become one of the essential research domains in data mining. Fundamentally, a data stream is an arrival of continuous and unlimited data that may not be stored in full because it would require too much storage capacity. To analyze such data, many new data mining techniques are required. Data analysis is carried out using clustering, classification, frequent itemset mining, and association rule generation. Association rule mining is one of the significant research problems for data streams; it helps find relationships between data items in transactional databases. This work concentrates on how traditional algorithms are used for generating association rules in data streams. The algorithms used in this work are Assoc Outliers, Frequent Item sets, and Supervised Association Rule; the number of rules generated by an algorithm and the execution time are considered the performance factors. Experimental results show that the Frequent Item set algorithm is more efficient than the Assoc Outliers and Supervised Association Rule algorithms. The implementation is executed in the Tanagra data mining tool.
Insertion:
Insert at the beginning: Add a new node at the beginning of the linked list.
Insert at the end: Add a new node at the end of the linked list.
Insert at a specified position: Add a new node at a specific position in the linked list.
Deletion:
Delete from the beginning: Remove the first node from the linked list.
Delete from the end: Remove the last node from the linked list.
Delete a specific node: Remove a node with a specific value or position from the linked list.
Traversal:
Print the linked list: Display all the elements in the linked list.
Search for a specific element: Find a particular element in the linked list.
Other operations:
Get length of the linked list: Calculate the number of nodes in the linked list.
Reverse the linked list: Reverse the order of elements in the linked list.
Slides 8-49: Detailed Explanation with Examples
Each slide covers one specific topic or operation.
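The operations listed above can be collected into a minimal sketch (illustrative only; the class and method names are my own, not taken from the slides):

```python
class Node:
    def __init__(self, value):
        self.value, self.next = value, None

class LinkedList:
    """Minimal singly linked list covering a few of the listed operations."""
    def __init__(self):
        self.head = None

    def insert_at_beginning(self, value):
        node = Node(value)
        node.next, self.head = self.head, node

    def insert_at_end(self, value):
        node = Node(value)
        if not self.head:
            self.head = node
            return
        cur = self.head
        while cur.next:
            cur = cur.next
        cur.next = node

    def delete_from_beginning(self):
        if self.head:
            self.head = self.head.next

    def length(self):
        n, cur = 0, self.head
        while cur:
            n, cur = n + 1, cur.next
        return n

    def reverse(self):
        # reverse pointers in one pass
        prev, cur = None, self.head
        while cur:
            cur.next, prev, cur = prev, cur, cur.next
        self.head = prev

    def to_list(self):
        # traversal: collect all element values in order
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out
```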
An improved apriori algorithm for association rules (ijnlc)
There are several mining algorithms for association rules. One of the most popular is Apriori, which is used to extract frequent itemsets from a large database and derive the association rules for discovering knowledge. This paper identifies a limitation of the original Apriori algorithm, namely the time wasted scanning the whole database when searching for frequent itemsets, and presents an improvement to Apriori that reduces the wasted time by scanning only some transactions. Experimental results with several groups of transactions and several minimum-support values, applied to both the original Apriori and our improved implementation, show that the improved Apriori reduces the time consumed by 67.38% in comparison with the original, making the algorithm more efficient and less time-consuming.
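The first pass on which such an improvement relies, recording which transactions contain each sufficiently frequent item, might be sketched as follows (hypothetical; names and data layout are my assumptions):

```python
from collections import defaultdict

def first_pass(transactions, min_support):
    """One scan of the database records, for every item that meets minimum
    support, the IDs of the transactions containing it; later passes can
    then visit only those transactions instead of the whole database."""
    tid_lists = defaultdict(set)
    for tid, items in transactions.items():
        for item in items:
            tid_lists[item].add(tid)
    # keep only items meeting minimum support
    return {item: tids for item, tids in tid_lists.items()
            if len(tids) >= min_support}
```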
Data mining plays an important role in extracting patterns and other information from data. The Apriori algorithm has been among the most popular techniques for finding frequent patterns. However, Apriori scans the database many times, leading to large I/O. This paper proposes to overcome the limitations of the Apriori algorithm while improving the overall speed of execution for all variations in minimum support; it aims to reduce the number of scans required to find frequent patterns.
At present, mining high-utility itemsets, especially from transactional databases, is a necessary task for processing many transactional operations quickly. Many of the methods presented for mining high-utility itemsets from transactional datasets are subject to serious limitations: their performance on low-memory systems when mining large transactional datasets still needs to be investigated, and they cannot overcome the screening and overhead of null transactions, so their performance eventually degrades. We analyze new approaches to overcome these limitations, such as a distributed programming model for mining business-oriented transactional datasets, which overcomes the limitations of main-memory-based computing while remaining highly scalable as database size increases. We have applied this approach with the existing UP-Growth and UP-Growth+ algorithms with the aim of further improving their performance.
With entry of time proportion of maturing and incessant infections going high and high. That is the reason people groups are for the most part stressed over their great wellbeing. Furthermore, they are energized by their longing for better wellbeing administration. Individuals intrigues, consideration move towards quiet focused rather than old customary and traditional hospitalized administrations. For this reason in later past thought of U-HEALTHCARE was embraced. U-HealthCare was such a framework made out of a shrewd headband and a wellbeing state screen program. U-Health Care is in charge of keeping under perceptions diverse conditions of wellbeing amid running, strolling, running. Its produce data about heart rate, client PGG (Photo Plet hy Smography) with help of savvy headband. A sticks of time clock proceeds recently investigate on telemedicine drive forward omnipresent social insurance (U-Health).researchers and designers have anticipating such a telemedicine framework which is arrangement of MOBILE, UBIQUITOUS and WIRELESS BODY AREA NETWORK, on the grounds that such a framework have more positive to appreciate next offspring of U-Health. With giving a great deal of productive results present photograph of U-Health framework is still a tiny bit unclear and dark because of inadequacies which make question mark on notice alternative of U-HealthCare System. So for this reason, we should need to take incorporation of most recent, very much modern equipment, correspondences, interconnections, a trademark figuring, advance steering and protection to upcoming offspring of U-Healthcare taking into account MOBILE, UBIQUITOUS and WIRELESS BODY AREA NETWORK. Our distinct fascination and consideration will be on change of ROUTING and SECURITY.
Multicarrier modulation can be implemented by using Orthogonal Frequency Division Multiplexing (OFDM) to achieve utmost bandwidth exploitation and soaring alleviation attributes profile besides multipath fading. To support delay sensitive and band bandwidth demanding multimedia applications and internet services, MIMO in addition with other techniques can be used to achieve high capacity and reliability. To obtain high spatial rate by transmitting data on several antennas by using MIMO with OFDM results in reducing error recovery features and the equalization complexities arise by sending data on varying frequency levels. Three parameters frequency OFDM, Spatial (MIMO) and time (STC) can be used to achieve diversity in MIMO-OFDM. This technique is dynamic and well-known for services of wireless broadband access. MIMO if used with OFDM is highly beneficial for each scheme and provides high throughput. There are several space time block codes to exploit MIMO OFDM; one of the techniques is called Alamouti Codes. The paper investigates adaptive Alamouti Codes and their application in IEEE 802.11n.
During acquisition and transmission of biomedical signals such as the electrocardiogram (ECG), different types of artifacts become embedded in the signal. Since the ECG is a low-amplitude signal, these artifacts greatly degrade signal quality and the signal becomes noisy. The sources of artifacts are power line interference (PLI), high-frequency electromyographic (EMG) interference, and baseline wander (BLW). Different digital filters are used to reduce these artifacts, but because the ECG is non-stationary it is difficult to design fixed filters for removing the interference. To overcome this problem, adaptive filters are used, as they are well suited to non-stationary environments. In this paper a new algorithm, "Modified Normalized Least Mean Square", is proposed. A comparison is made between the new algorithm and existing algorithms such as LMS, NLMS, Sign-Data LMS, and Log LMS in terms of SNR, convergence rate, and time complexity. The new algorithm is observed to be superior to the existing ones in SNR and convergence rate, although it is more complex than the other algorithms. MATLAB simulation results are presented and critically analyzed on the basis of convergence rate, signal-to-noise ratio (SNR), and computational time.
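As an illustration of the class of adaptive filters compared in the paper, here is a minimal NLMS sketch in Python (the baseline algorithm, not the proposed Modified NLMS, whose update rule is not reproduced here; filter order and step size are illustrative choices):

```python
def nlms_filter(d, x, order=8, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt weights w so the filtered reference x
    tracks d; returns the error signal e = d - y, which is the
    cleaned ECG when d is signal+interference and x is an
    interference reference (e.g. a power-line pickup)."""
    w = [0.0] * order
    buf = [0.0] * order          # most recent reference samples
    e = []
    for n in range(len(d)):
        buf = [x[n]] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        err = d[n] - y
        norm = eps + sum(xi * xi for xi in buf)   # input power
        w = [wi + (mu / norm) * err * xi for wi, xi in zip(w, buf)]
        e.append(err)
    return e
```

Dividing the step by the instantaneous input power is what makes NLMS robust to signal-level changes, the property that matters in the non-stationary ECG setting described above.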
As the IT industry flourishes and the computer-graphics race accelerates, AI takes games a step forward: the next generation of games will feature characters that behave realistically and that can learn and adapt. This paper explores the artificial intelligence techniques currently used by game developers, as well as techniques that are new to the industry: finite state machines, scripting, agents, flocking, and genetic algorithms. The paper introduces each of these techniques, explains how they can be applied to games, and shows how commercial games currently make use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
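A finite state machine, the first technique listed, can be illustrated with a minimal table-driven sketch (the guard states and events below are invented for illustration, not taken from the paper):

```python
class GuardFSM:
    """Minimal FSM for a game NPC: a transition table maps
    (current state, observed event) -> next state; unknown
    events leave the state unchanged."""
    TRANSITIONS = {
        ("patrol", "player_seen"):  "chase",
        ("chase",  "player_close"): "attack",
        ("chase",  "player_lost"):  "patrol",
        ("attack", "player_fled"):  "chase",
    }

    def __init__(self):
        self.state = "patrol"

    def handle(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

The appeal for game developers is exactly this explicitness: designer-readable behaviour in a lookup table, at constant cost per update.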
An e-voting system allows all citizens of a country to cast their votes online, with the aim of increasing the overall voting percentage across the country. In the current scenario, people have to visit a booth to cast their vote, and those who live away from their native place are unable to vote during elections, so the voting percentage across the country is very low. Through this software, which is online, people who live away from their home town will also be able to cast their votes. The main objectives of the software are to increase the overall voting percentage and to create and manage polling and election details (general user details, nominated candidates, and election and result records) efficiently.
GenBank is the NIH genetic sequence database, an annotated collection of all publicly available DNA sequences, maintained alongside the DNA DataBank of Japan (DDBJ) and the European Molecular Biology Laboratory (EMBL) as GenBank at NCBI [3]. The information in a GenBank file is complex and requires considerable time to analyze. In this work we convert the data present in a GenBank file into a more readable, graphical format by developing a computational tool, RASA-GD, using Perl code, Perl modules, and PHP. The tool accepts a file in GenBank format, extracts information such as coding regions, exons, introns, genes, mRNA, tRNA, and other data contained in the file, calculates the length of these elements, and generates a graphical output of lengths together with statistical values: mean, median, mode, standard deviation, variance, sample range, minimum value, maximum value, and unique gene number. The tool will be very useful for various kinds of analysis by both molecular biologists and computational biologists and will help in hypothesis generation. We have used the tool to visualize variation of the FOXP2 gene, which is associated with speech behaviour, across species including human, monkey, gorilla, house mouse, and Gallus gallus, in order to explore whether the lengths of its various genetic elements have evolutionary significance. The data generated has provided useful insight into the evolutionary history of this gene.
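The statistical part of the tool's output can be illustrated with a short sketch (the tool itself is written in Perl and PHP; the Python below and the example lengths are illustrative, not the tool's code):

```python
import statistics

def length_summary(lengths):
    """Summary statistics of the kind RASA-GD reports for a list of
    feature lengths (exons, introns, CDS, ...) in base pairs."""
    return {
        "mean":     statistics.mean(lengths),
        "median":   statistics.median(lengths),
        "mode":     statistics.mode(lengths),
        "stdev":    statistics.stdev(lengths),
        "variance": statistics.variance(lengths),
        "range":    max(lengths) - min(lengths),
        "min":      min(lengths),
        "max":      max(lengths),
    }
```

Given exon lengths extracted from a GenBank record, one call yields the whole statistical block that the tool plots.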
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images so that correct features can be extracted from them; it is also used to extract features directly, as in iris images, and sometimes for texture analysis. In fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the image by filling small gaps (a low-pass effect) along the direction of the ridges (black regions) and by increasing the discrimination between ridges and valleys (black and white regions) in the direction orthogonal to the ridges. The proposed method applies the Gabor filter to fingerprint images after translating them into binary images and applying some simple enhancement methods, to partially overcome the time-consumption problem of the Gabor filter.
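The even-symmetric Gabor filter described above has a standard closed form: a Gaussian envelope modulated by a cosine along the ridge orientation. A minimal sketch (the standard-deviation defaults are illustrative assumptions, not the proposed method's parameters):

```python
import math

def even_gabor(x, y, theta, freq, sx=4.0, sy=4.0):
    """Even-symmetric Gabor value at (x, y) for ridge orientation
    theta (radians) and ridge frequency freq (cycles/pixel)."""
    # rotate coordinates into the ridge frame
    xt = x * math.cos(theta) + y * math.sin(theta)
    yt = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-0.5 * (xt * xt / (sx * sx) + yt * yt / (sy * sy)))
    return envelope * math.cos(2 * math.pi * freq * xt)

def gabor_kernel(size, theta, freq):
    """Square (size x size) kernel, size odd, centered at the origin."""
    half = size // 2
    return [[even_gabor(x, y, theta, freq)
             for x in range(-half, half + 1)]
            for y in range(-half, half + 1)]
```

Convolving a fingerprint block with a kernel tuned to the local ridge orientation and frequency produces the gap-filling and ridge/valley contrast described above.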
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively; this not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing the system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV and pumped hydro energy supply systems for sustainable energy usage: examined across diverse climatic conditions in a developing country, such optimization not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, which is particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas.
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
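The kind of energy-balance reasoning behind such an optimization can be illustrated with a toy hourly dispatch sketch (the round-trip efficiencies and the greedy charge/discharge policy below are illustrative assumptions, not the study's model):

```python
def dispatch(pv, demand, capacity, eta_pump=0.8, eta_turbine=0.9):
    """Greedy hourly dispatch: surplus PV is pumped uphill (minus
    pumping losses); deficits are covered by the turbine until the
    reservoir is empty. All quantities in kWh per hourly step.
    Returns (unserved demand, final stored energy)."""
    stored = 0.0
    unserved = 0.0
    for p, d in zip(pv, demand):
        if p >= d:
            stored = min(capacity, stored + (p - d) * eta_pump)
        else:
            need = (d - p) / eta_turbine       # stored energy required
            draw = min(stored, need)
            stored -= draw
            unserved += (need - draw) * eta_turbine
    return unserved, stored
```

Sizing the PV array and the reservoir then amounts to searching over `capacity` and panel count for the cheapest configuration with acceptable unserved energy, which is the optimization the study formalizes mathematically.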
Student information management system project report II - Kamal Acharya
Our project concerns student management and explains the various actions related to student details. It makes adding, editing, and deleting student details easy, and provides a less time-consuming process for viewing, adding, editing, and deleting the marks of the students.
Hierarchical Digital Twin of a Naval Power System - Kerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems: the hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the small maximum deviations observed between the developed digital twin hierarchy and the hardware.
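The two-level structure described above can be sketched abstractly; the first-order block model, tracking gains, and load-shedding rule below are invented for illustration and are not the developed digital twin:

```python
class BlockTwin:
    """Lower-level twin of one physical subsystem: nudges its state
    toward the latest hardware measurement (first-order tracking)."""
    def __init__(self, name, state=0.0, gain=0.5):
        self.name, self.state, self.gain = name, state, gain

    def step(self, measurement):
        self.state += self.gain * (measurement - self.state)
        return self.state

class SystemTwin:
    """Higher-level twin: aggregates block states into a
    system-level response and proposes a control reconfiguration."""
    def __init__(self, blocks, limit):
        self.blocks, self.limit = blocks, limit

    def step(self, measurements):
        total = sum(b.step(m) for b, m in zip(self.blocks, measurements))
        # system-level decision fed back to hardware controls
        return {"total_load": total, "shed_load": total > self.limit}
```

The computational advantage claimed in the abstract comes from each block twin updating independently (and potentially in parallel), with only aggregated states crossing the hierarchy boundary.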
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL) - Md Tanvir Mahtab
This presentation describes the working procedures of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Cosmetic shop management system project report - Kamal Acharya
Buying new cosmetic products is difficult; it can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to carry out the tasks mentioned above.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system deals with the automation of the general workflow and administration processes of the shop. The main processes focus on customer requests: the system searches for the most appropriate products and delivers them to the customer. It helps employees quickly identify cosmetic products that have reached their minimum quantity, keeps track of the expiry date of each product, and helps employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
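The low-stock and expiry checks described above amount to two simple filters over the inventory; a hypothetical sketch (the field names and threshold are assumptions, not the project's schema):

```python
from datetime import date

def restock_and_expiry(products, today, min_qty=5):
    """Return (names at/below the minimum quantity, names past expiry).
    Each product is a dict with keys: name, qty, expires (date), rack."""
    low = [p["name"] for p in products if p["qty"] <= min_qty]
    expired = [p["name"] for p in products if p["expires"] < today]
    return low, expired
```

Run daily, these two lists give employees the restocking and removal worklists the abstract describes, with the rack number available on each record for locating the product.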
An overview of the fundamental roles in hydropower generation and the components involved in wider electrical engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist's survey of the valley before construction, through all the disciplines involved (fluid dynamics, structural engineering, generation, and mains frequency regulation), to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
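The generation stage covered by the paper rests on the standard hydropower equation P = η ρ g Q H. A small sketch with illustrative default values (the 90% efficiency is an assumption for the example, not a figure from the paper):

```python
def hydro_power_kw(q_m3s, head_m, efficiency=0.9,
                   rho=1000.0, g=9.81):
    """Electrical power (kW) from flow rate Q (m^3/s) and net head
    H (m): P = eta * rho * g * Q * H, converted from W to kW."""
    return efficiency * rho * g * q_m3s * head_m / 1000.0
```

For example, a turbine passing 20 m^3/s under a 50 m net head at 90% overall efficiency delivers roughly 8.8 MW, which is the back-of-envelope figure a hydrologist's survey of flow and head makes possible before detailed design begins.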
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of Digital Transformation in the Water Industry.