The use of conversational bots is growing exponentially. A conversational bot can be described as a platform that can chat with people using artificial intelligence. Recent advances have made AI capable of learning from data and producing an output. This learning can be performed using various machine learning algorithms; machine learning involves constructing algorithms that can learn from data and predict outcomes. This paper reviews the efficiency of the different machine learning algorithms used in conversational bots.
Biometric Identification and Authentication Providence using Fingerprint for ...IJECEIAES
The rise in recent cloud computing security incidents makes securing data a key challenge. To address this problem, this paper presents mobile biometric authentication in cloud computing, integrating the mobile device with the cloud. Since mobile cloud computing is popular among mobile users, biometric authentication is used to enhance security. The paper examines how mobile cloud computing (MCC) addresses this security issue with a fingerprint biometric authentication model. From the fingerprint biometric, a secret code is generated using an entropy value, which enables a person to request access to data on a desktop computer. When the person requests access from the authorized user over Bluetooth on a mobile device, the authorized user grants access through the fingerprint secret code. Finally, the fingerprint is verified against the database on the desktop computer; if it matches, the requesting person can access the computer.
Identification of important features and data mining classification technique...IJECEIAES
Employee absenteeism at work costs organizations billions a year. Predicting employees’ absenteeism and the reasons behind their absence helps organizations reduce expenses and increase productivity. Data mining turns the vast volume of human resources data into information that can help in decision-making and prediction. Although the selection of features is a critical step in data mining to enhance the efficiency of the final prediction, it is not yet known which method of feature selection is better. Therefore, this paper aims to compare the performance of three well-known feature selection methods in absenteeism prediction: relief-based feature selection, correlation-based feature selection and information-gain feature selection. In addition, this paper aims to find the best combination of feature selection method and data mining technique for enhancing absenteeism prediction accuracy. Seven classification techniques were used as the prediction model. Additionally, a cross-validation approach was utilized to assess the applied prediction models and obtain more realistic and reliable results. The dataset used was built at a courier company in Brazil from records of absenteeism at work. In the experimental results, correlation-based feature selection surpasses the other methods across the performance measurements. Furthermore, the bagging classifier was the best-performing data mining technique when features were selected using correlation-based feature selection, with an accuracy rate of 92%.
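As a rough illustration of the kind of pipeline this abstract describes, the sketch below combines a simple correlation-style feature ranking with a bagging classifier under 10-fold cross-validation in scikit-learn; the file name, target column and number of selected features are assumptions, not the study's actual settings.

```python
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("absenteeism.csv")          # hypothetical export of the HR records
X = df.drop(columns=["absent_hours_class"])  # hypothetical target column name
y = df["absent_hours_class"]

model = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),  # correlation-style ranking, keep 10 features
    BaggingClassifier(n_estimators=50, random_state=0),
)

# 10-fold cross-validation, as used in the study
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```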
DEEP-LEARNING-BASED HUMAN INTENTION PREDICTION WITH DATA AUGMENTATIONijaia
Data augmentation has been broadly applied in training deep-learning models to increase the diversity of
data. This study investigates the effectiveness of different data augmentation methods for deep-learning-based human intention prediction when only limited training data is available. A human participant pitches
a ball to nine potential targets in our experiment. We expect to predict which target the participant pitches
the ball to. Firstly, the effectiveness of 10 data augmentation groups is evaluated on a single-participant
data set using RGB images. Secondly, the best data augmentation method (i.e., random cropping) on the
single-participant data set is further evaluated on a multi-participant data set to assess its generalization
ability. Finally, the effectiveness of random cropping on fusion data of RGB images and optical flow is
evaluated on both single- and multi-participant data sets. Experiment results show that: 1) Data
augmentation methods that crop or deform images can improve the prediction performance; 2) Random
cropping can be generalized to the multi-participant data set (prediction accuracy is improved from 50%
to 57.4%); and 3) Random cropping with fusion data of RGB images and optical flow can further improve
the prediction accuracy from 57.4% to 63.9% on the multi-participant data set.
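A minimal sketch of the random-cropping augmentation the study found most effective, using torchvision; the image sizes, crop size and dataset directory layout are placeholders rather than the paper's actual configuration.

```python
import torch
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(224),   # the augmentation reported as most effective
    transforms.ToTensor(),
])

# expects an ImageFolder-style layout: one sub-folder per pitch target
train_set = datasets.ImageFolder("pitch_frames/train", transform=train_transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)               # torch.Size([32, 3, 224, 224])
```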
Interpreting available data is a focal issue in data mining. Gathering primary data to assess trends for any business decision is a difficult and expensive affair, especially when multiple players are present. There is no uniform, formula-type procedure for deducing information from a vast data set, especially if the data formats in the secondary sources are not uniform and need enormous cleansing to prepare the data for statistical analysis. In this paper, an incremental approach to cleansing data using a simple yet extended procedure is presented, and it is shown how to draw conclusions that facilitate business decisions. Freely available Indian telecom industry data covering one year is used to illustrate the process. It is shown how to establish the superiority of one telecom service provider over the others by comparing parameters such as network availability and customer service quality, using a relative parameter
quantification technique. It is found that this method is computationally less costly than the other known
methods.
Clustering Prediction Techniques in Defining and Predicting Customers Defecti...IJECEIAES
With the growth of the e-commerce sector, customers have more choices, a fact which encourages them to divide their purchases amongst several e-commerce sites and compare their competitors' products, yet this increases the risk of churning. A review of the literature on customer churn models reveals that no prior research had considered both partial and total defection in non-contractual online environments; instead, studies focused on either total or partial defection. This study proposes a customer churn prediction model in an e-commerce context, wherein a clustering phase is based on the integration of the k-means method and the Length-Recency-Frequency-Monetary (LRFM) model. This phase is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: simple decision tree, artificial neural networks and decision tree ensemble, in which the dependent variable classifies a particular customer as continuing loyal buying patterns (non-churned), a partial defector (partially churned), or a total defector (totally churned). Macro-averaging measures including average accuracy and macro-averages of precision, recall, and F1 are used to evaluate the classifiers' performance on 10-fold cross-validation. Using real data from an online store, the results show the efficiency of the decision tree ensemble model over the other models in identifying both future partial and total defection.
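A minimal sketch of the two-phase idea described above: k-means on LRFM features to derive churn labels, followed by a tree ensemble trained to predict them. The column names, the three-cluster interpretation and the use of a random forest (standing in for the paper's decision tree ensemble) are illustrative assumptions.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

orders = pd.read_csv("orders.csv")  # hypothetical: customer_id, order_date, amount
lrfm = orders.groupby("customer_id").agg(
    length=("order_date", lambda d: (pd.to_datetime(d).max() - pd.to_datetime(d).min()).days),
    recency=("order_date", lambda d: (pd.Timestamp("today") - pd.to_datetime(d).max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Phase 1: k-means on scaled LRFM values defines three behaviour groups,
# interpreted here as non-churned / partially churned / totally churned.
X = StandardScaler().fit_transform(lrfm)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Phase 2: a multi-class classifier learns to predict the derived churn label.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
```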
MITIGATION TECHNIQUES TO OVERCOME DATA HARM IN MODEL BUILDING FOR MLijaia
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle becomes more critical than ever. By offering a framework to determine distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices throughout the distinct phases of data collection, development, and deployment that extend far beyond just model training. Relevant mitigation techniques are also suggested, rather than merely relying on generic notions of what counts as fairness.
Augmentation of Customer’s Profile Dataset Using Genetic AlgorithmRSIS International
Data is the lifeblood of all types of business. Clean, accurate and complete data is a prerequisite for decision-making in business processes. Data is one of the most valuable assets for any organization. It is immensely important that businesses focus on the quality of their data, as it can help in
increasing the business performance by improving efficiencies,
streamlining operations and consolidating data sources. Good
quality data helps to improve and simplify processes, eliminate
time-consuming rework and externally to enhance a user’s
experience, further translating it to significant financial and
operational benefits [1] [2]. All organizations/ businesses strive to
retain their existing customers and gain new ones. Accurate data
enables the business to improve the customer experience. Data
augmentation adds value to base data by enhancing information
derived from the existing source. Data augmentation can help
reduce the manual intervention required to develop meaningful
information and insight of business data, as well as significantly
enhance data quality. Hence the business can provide unique
customer experience and deliver above and beyond their
expectations. Data augmentation is immensely important, as
it helps in improving the overall productivity of the business. It
is also important in making the most accurate and relevant
information available quickly for decision making.
This work focuses on augmentation of the customer
dataset using a Genetic Algorithm (GA). These augmented data are
used for the purpose of customer behavioral analysis. The data
set consists of the different factors inherent in each situation of
the customer to understand the market strategy. This behavioral
data is used in the earlier work of analyzing the data [13]. It is
found that collecting a very large amount of such data manually is a very cumbersome process. It is inferred from the earlier work [13] that a larger amount of data may give more accurate results. Hence it is decided to enrich the dataset using a Genetic
Algorithm.
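A generic sketch of how a genetic-algorithm-style loop could synthesise additional customer records from a seed dataset; the fitness function, column handling and file name are illustrative assumptions, not the method of [13] or of this paper.

```python
import random
import pandas as pd

base = pd.read_csv("customer_profiles.csv")      # hypothetical seed dataset
numeric_cols = base.select_dtypes("number").columns

def crossover(a, b):
    """Mix two parent records column by column."""
    child = a.copy()
    for col in numeric_cols:
        if random.random() < 0.5:
            child[col] = b[col]
    return child

def mutate(rec, rate=0.1):
    """Occasionally perturb a numeric field within the observed range."""
    for col in numeric_cols:
        if random.random() < rate:
            lo, hi = base[col].min(), base[col].max()
            rec[col] = random.uniform(lo, hi)
    return rec

def fitness(rec):
    """Prefer records close to the population mean, to keep synthetic data plausible."""
    return -sum(abs(rec[c] - base[c].mean()) / (base[c].std() + 1e-9) for c in numeric_cols)

population = [base.iloc[i].copy() for i in range(len(base))]
for _ in range(20):                                # a few generations
    parents = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

augmented = pd.concat([base, pd.DataFrame(population)], ignore_index=True)
print(len(base), "->", len(augmented), "records after augmentation")
```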
DATA AUGMENTATION TECHNIQUES AND TRANSFER LEARNING APPROACHES APPLIED TO FACI...ijaia
Facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of the available training datasets, we propose a novel data augmentation technique that improves performance in the recognition task. We apply geometrical transformations and build GAN models from scratch that are able to generate new synthetic images for each emotion type. On the augmented datasets we then fine-tune pretrained convolutional neural networks with different architectures. To measure the generalization ability of the models, we apply an extra-database protocol approach: we train the models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows us to reach average accuracy values on the order of 85% for the InceptionResNetV2 model.
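A minimal sketch of the fine-tuning step on an already augmented dataset, using Keras and a pretrained InceptionResNetV2; the directory layout, class count and training settings are assumptions rather than the paper's exact setup.

```python
import tensorflow as tf

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)
backbone.trainable = False  # freeze the pretrained weights first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    backbone,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),      # e.g. 7 basic emotions
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

train = tf.keras.utils.image_dataset_from_directory(
    "fer_augmented/train", image_size=(299, 299), label_mode="categorical", batch_size=32
)
model.fit(train, epochs=5)
```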
Information security risk analysis methods and research trends ahp and fuzzy ...ijcsit
Information security risk analysis is becoming an increasingly essential component of organizations' operations. Traditional information security risk analysis uses quantitative and qualitative analysis methods, both of which have advantages for information risk analysis, and the analytic hierarchy process (AHP) has been widely used in security assessment. A future research direction may be the development and application of soft computing techniques such as rough sets, grey sets, fuzzy systems, genetic algorithms, support vector machines, Bayesian networks, and hybrid models. Hybrid models are developed by integrating two or more existing models. Practical advice for evaluating information security risk is discussed; the approach combines AHP with the fuzzy comprehensive evaluation method.
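A small worked example of fuzzy comprehensive evaluation with AHP-style weights, the combination this abstract refers to; the weights and membership values are invented purely for illustration.

```python
import numpy as np

# AHP-derived weights for four risk factors (must sum to 1)
weights = np.array([0.40, 0.25, 0.20, 0.15])

# Membership of each factor in the grades (low, medium, high), e.g. from expert surveys
membership = np.array([
    [0.1, 0.3, 0.6],
    [0.2, 0.5, 0.3],
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
])

# Weighted-average fuzzy operator: B = W . R
result = weights @ membership
grades = ["low", "medium", "high"]
print(dict(zip(grades, result.round(3))))
print("overall risk grade:", grades[int(result.argmax())])
```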
REVIEWING PROCESS MINING APPLICATIONS AND TECHNIQUES IN EDUCATIONijaia
Process Mining (PM) emerged from business process management but has recently been applied to
educational data and has been found to facilitate the understanding of the educational process.
Educational Process Mining (EPM) bridges the gap between process analysis and data analysis, based on
the techniques of model discovery, conformance checking and extension of existing process models. We
present a systematic review of the recent and current status of research in the EPM domain, focusing on
application domains, techniques, tools and models, to highlight the use of EPM in comprehending and
improving educational processes.
APPLICATION WISE ANNOTATIONS ON INTELLIGENT DATABASE TECHNIQUESJournal For Research
Databases are systems used for storing data. With information increasing at a rapid pace, extracting relevant information becomes time-consuming using traditional databases. Thus arises the need for intelligent, or smart, databases. An intelligent database is a system in which automation is applied to a conventional database in order to enhance its functionality and efficiency. Intelligent databases help us make decisions and can respond or act on situations by learning. They have applications in wide-ranging areas such as healthcare, automobiles, education, information security and business. This paper presents various intelligent database techniques, which are used for implementing applications in this domain.
MOVIE SUCCESS PREDICTION AND PERFORMANCE COMPARISON USING VARIOUS STATISTICAL...ijaia
Movies are among the most prominent contributors to the global entertainment industry today, and they
are among the biggest revenue-generating industries from a commercial standpoint. It's vital to divide
films into two categories: successful and unsuccessful. To categorize the movies in this research, a variety
of models were utilized, including regression models such as Simple Linear, Multiple Linear, and Logistic
Regression, clustering techniques such as SVM and K-Means, Time Series Analysis, and an Artificial
Neural Network. The models stated above were compared on a variety of factors, including their accuracy
on the training and validation datasets as well as the testing dataset, the availability of new movie
characteristics, and a variety of other statistical metrics. During the course of this study, it was discovered
that certain characteristics have a greater impact on the likelihood of a film's success than others. For
example, the existence of the genre action may have a significant impact on the forecasts, although another
genre, such as sport, may not. The testing dataset for the models and classifiers has been taken from the
IMDb website for the year 2020. The Artificial Neural Network, with an accuracy of 86 percent, is the best
performing model of all the models discussed.
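For orientation, a minimal sketch of framing the task as binary success classification with a small neural network (the model family the abstract reports as best performing); the feature columns and the success rule are assumptions, not the study's.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

movies = pd.read_csv("imdb_2020.csv")                      # hypothetical export
y = (movies["revenue"] > movies["budget"]).astype(int)     # assumed success rule
X = movies[["budget", "runtime", "is_action", "is_sport"]] # assumed feature columns

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```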
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCEijesajournal
Automation is a powerful concept that is present everywhere; without automation, applications do not get developed. In the semiconductor industry, artificial intelligence plays a vital role in implementing chip-based design through automation. The main advantage of applying machine learning and deep learning techniques is to improve the implementation rate. The main objective of the proposed system is to apply deep learning with a data-driven approach to control the system, leading to improvements in design, delay, speed of operation and cost. Through this system, the huge volume of data generated by the system is also kept under control.
Learning Analytics by nature relies on computational information processing activities intended to extract from raw data some interesting aspects that can be used to obtain insights into the behaviors of learners, the design of learning experiences, etc. There is a large variety of computational techniques that can be employed, all with interesting properties, but it is the interpretation of their results that really forms the core of the analytics process. As a rising subject, data mining and business intelligence are playing an increasingly important role in the decision-support activity of every walk of life. The Variance Rover System (VRS) mainly focuses on large data sets obtained from online web visits, categorizing them into clusters according to similarity, and on predicting customer behavior and selecting actions to influence that behavior to benefit the company, so as to make optimized and beneficial business-expansion decisions. Keywords: Analytics, Business intelligence, Clustering, Data Mining, Standard K-means, Optimized K-means
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Applying Soft Computing Techniques in Information RetrievalIJAEMSJORNAL
There is a plethora of information available over the internet on a daily basis, and retrieving meaningful, effective information using the usual IR methods is becoming a cumbersome task. Hence, this paper summarizes the different soft computing techniques that can be applied to information retrieval systems to improve their efficiency in acquiring knowledge related to a user's query.
SCCAI- A Student Career Counselling Artificial Intelligencevivatechijri
As education grows day by day, competition has prompted a need for students to understand more about educational fields. Often a counselor is not available, or lacks proper knowledge about a particular field, which creates misconceptions about that field. This makes it difficult for the student to decide on a proper educational trajectory, and guidance is not always helpful. The proposed paper overcomes these problems using machine learning algorithms. Various algorithms were considered, and the ones best suited to our project are used here. Three major problems arise along the way, and they are solved using Random Forest, Linear Regression, and a search algorithm using the Google API. First, the search algorithm solves the problem of location by segregating colleges location-wise; then Random Forest provides a list of colleges based on stream and percentage range; and finally Linear Regression predicts the current cutoff using previous years' data. In addition, the proposed system provides information on all fields of education, helping students understand their field of interest better. The idea is a completely fresh one with no existing projects of a similar kind, and this project will guide students throughout.
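A tiny sketch of the cutoff-prediction step described above: a linear regression fitted on previous years' cutoffs for one college and stream, extrapolated to the current year. The numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2016], [2017], [2018], [2019]])
cutoffs = np.array([88.5, 89.1, 90.0, 90.4])   # percentage cutoffs (made up)

model = LinearRegression().fit(years, cutoffs)
print("predicted 2020 cutoff:", round(float(model.predict([[2020]])[0]), 2))
```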
BEHAVIOR-BASED SECURITY FOR MOBILE DEVICES USING MACHINE LEARNING TECHNIQUESijaia
The goal of this research project is to design and implement a mobile application and machine learning techniques to solve problems related to the security of mobile devices. We introduce in this paper a behavior-based approach that can be applied in a mobile environment to capture and learn the behavior of
mobile users. The proposed system was tested using Android OS and the initial experimental results show that the proposed technique is promising, and it can be used effectively to solve the problem of anomaly detection in mobile devices.
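As a rough illustration of behaviour-based anomaly detection on a mobile device, the sketch below profiles "normal" sessions and flags deviations; the session features and the use of an isolation forest are stand-ins, not the paper's actual technique.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# hypothetical per-session features: [hour of day, screen-on minutes, apps launched]
normal_sessions = np.array([
    [9, 35, 12], [10, 40, 15], [20, 55, 18], [21, 60, 20], [8, 30, 10],
])
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

new_sessions = np.array([[9, 38, 13], [3, 300, 90]])  # the second session is unusual
print(detector.predict(new_sessions))                  # 1 = normal, -1 = anomaly
```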
A Study of Software Size Estimation with use Case Pointsijtsrd
Estimates of cost and schedule in software projects are based on a prediction of the size of the system, and software size estimation plays the most important role in software cost estimation. The Use Case Points method can provide a software size estimate at an early stage of the development process, based on the high-level specification of use cases. This paper describes a simple approach to software size estimation based on use case models, the "Use Case Points Method", and imports this model into an estimating tool. To obtain software size with Use Case Points, the needed factors are the number of use cases and their complexity, the number of actors and their complexity, the technical complexity factor (TCF), and the environmental complexity factor (ECF). The system computes unadjusted use case points (UUCP), adjusted use case points (UCP), and the total effort in staff-hours. Aye Aye Seint, "A Study of Software Size Estimation with Use Case Points", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26531.pdf; Paper URL: https://www.ijtsrd.com/computer-science/other/26531/a-study-of-software-size-estimation-with-use-case-points/aye-aye-seint
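A worked sketch of the Use Case Points calculation outlined above; the actor and use-case counts, the TCF/ECF values and the 20-staff-hours-per-UCP productivity factor are illustrative assumptions.

```python
# Unadjusted Actor Weight: e.g. 2 simple (x1), 1 average (x2), 1 complex (x3) actors
uaw = 2 * 1 + 1 * 2 + 1 * 3
# Unadjusted Use Case Weight: e.g. 3 simple (x5), 4 average (x10), 2 complex (x15) use cases
uucw = 3 * 5 + 4 * 10 + 2 * 15

uucp = uaw + uucw            # unadjusted use case points
tcf, ecf = 0.95, 1.02        # technical and environmental complexity factors
ucp = uucp * tcf * ecf       # adjusted use case points

effort_hours = ucp * 20      # common rule of thumb: about 20 staff-hours per UCP
print(f"UUCP={uucp}, UCP={ucp:.1f}, effort ~ {effort_hours:.0f} staff-hours")
```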
Processing the data generated from everyday transactions, which can amount to thousands of records per day, requires software that enables users to search for the data they need; data mining is a solution to this problem. To that end, many large industries have begun creating software that can perform such data processing. Because commercial data mining software from large vendors is costly, some communities, such as universities, have made it easier for users who simply want to learn or deepen their knowledge of data mining by creating open-source software, while many commercial vendors continue to market their own products. WEKA and the Salford System are both data mining software packages, each with advantages and disadvantages. This study compares them using several attributes, so that users can select which software is more suitable for their daily activities.
Similar to A Comprehensive review of Conversational Agent and its prediction algorithm
Understanding the Impact and Challenges of Corona Crisis on Education Sector...vivatechijri
In the second week of March 2020, state governments across the country suddenly declared the temporary shutdown of all colleges and schools as an immediate measure to stop the spread of the novel coronavirus pandemic. Almost a month has passed with no certainty about when they will reopen. A pandemic like this sounds alarm bells in the field of education, with a huge impact on the teaching and learning process as well as on the entire education sector. Such a disruption has also given today's educators time to really think about the sector. Through the present research article, the author highlights the possible impact of the coronavirus on the education sector, along with the future challenges it faces and possible suggestions.
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT vivatechijri
This paper explains how leadership alone is responsible for sustainable improvement and growth, and how only leadership can lead the organization towards improvement and overall development. Leadership and its effectiveness are discussed in this research work, along with how leadership differs from traditional management as a route to organizational success, creating a true work culture and goodwill for the organization in the social scene. Leadership alone is responsible for bringing positive or negative change in the organization; if the leadership does not show concern for the organization, the organization cannot be led in the right direction towards improvement and development.
The assignment problem is a critical problem in mathematics that also arises in the real physical world. In this paper we implement a new method to solve assignment problems, with an algorithm and solution steps. We analyse a numerical example using the new method and two existing methods, and compare the optimal solutions obtained by the new method and the two current methods. The proposed method is a standardized technique that is simple to use for solving assignment problems.
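For context, the sketch below shows what a small assignment problem looks like and solves it with SciPy's standard Hungarian-algorithm routine; this is a reference solver for comparison, not the new method the paper proposes, and the cost matrix is invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = cost of assigning worker i to task j (illustrative numbers)
cost = np.array([
    [9, 2, 7, 8],
    [6, 4, 3, 7],
    [5, 8, 1, 8],
    [7, 6, 9, 4],
])

rows, cols = linear_sum_assignment(cost)   # Hungarian-algorithm solver
for r, c in zip(rows, cols):
    print(f"worker {r} -> task {c} (cost {cost[r, c]})")
print("total cost:", cost[rows, cols].sum())
```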
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly...vivatechijri
In today's society, we stand before a challenge of energy scarcity. As our civilization grows, many countries in the developing world seek the standard of living that has been exclusive to a few nations, so there arises a need to develop technology that is compatible with the resources provided by nature, in order to bring sustainable development to all classes of society. To overcome the prevailing challenges of a huge energy crisis in the near future, there is an urgent need for the development of electric vehicles or hybrid electric vehicles with low CO2 emissions using renewable energy sources. In view of the above, electrochemical capacitors can fulfil the requirements to some extent. Preparation of a nano composite polymer gel electrolyte is the best option to overcome these problems. When fillers are added or dispersed in the polymer gel electrolyte, the amorphous or porous nature of the electrolyte increases, which enhances the liquid-absorbing quality of the polymer and helps remove drawbacks of polymer gel electrolytes such as leakage and poor mechanical and thermal stability. In this work, dispersion of SiO2 nano filler is carried out in [PVdF (HFP)-PC-Mg (ClO4)2] for the synthesis of the nano composite PGE [PVdF (HFP)-PC-MgClO4-SiO2]. Optimization and characterization were carried out using various techniques.
Theoretical study of two dimensional Nano sheet for gas sensing applicationvivatechijri
This study focuses on various two-dimensional materials for sensing gases, from a theoretical viewpoint, to motivate new research in gas sensing applications. In this paper we review various two-dimensional sheets such as graphene, boron nitride nanosheets and MXenes, and their application in sensing various gases present in the atmosphere.
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOODvivatechijri
Food is essential for living. Food adulteration deceives consumers and can endanger their health. The purpose of this document is to list common food adulteration methods commonly found in India. An adulterant is a substance found in other substances such as food, cosmetics, pharmaceuticals, fuels, or other chemicals that compromises the safety or effectiveness of that substance. The addition of adulterants is called adulteration. The most common reason for adulteration is the use by manufacturers of undeclared materials that are cheaper than the correct, declared ones. Adulterants can be harmful, can reduce the effectiveness of the product, or can be harmless.
Novel ideas are the key for anyone becoming an entrepreneur and getting into the hustle, but developing an idea from its core requires a systematic plan, time management, time investment and, most importantly, client attention. The time required for development may vary with the idea and the strength of the team. Leadership, to build a team and manage it through the peak of development, is the main quality required. Innovation and techniques to overcome the hurdles are another aspect of business development and client retention.
Innovation for supporting well-being has long been a focus of numerous disciplines, including computer science, psychology, and human-computer interaction. However, the meaning of well-being is not always clear, and this has implications for how we design and evaluate technologies that aim to foster it. Here, we discuss current definitions of well-being and how it relates to, and sometimes results from, self-transcendence. We then focus on how technologies can support well-being through experiences of self-transcendence, closing with possible future directions.
An Alternative to Hard Drives in the Coming Future:DNA-BASED DATA STORAGEvivatechijri
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up, so there emerges a requirement for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. According to research from 2018, every minute Google conducted 3.88 million searches, people posted 49,000 photos on Instagram, sent 159,362,760 e-mails, tweeted 473,000 times and watched 4.33 million videos on YouTube. In 2020, an estimated 1.7 megabytes of data were created per second per person globally, which translates to about 418 zettabytes in a single year. The magnetic or optical data-storage systems that currently hold this volume of 0s and 1s typically cannot last for more than a century, and running data centres takes vast amounts of energy. In short, we are close to having a substantial data-storage problem which will only become more severe over time. Deoxyribonucleic acid (DNA) can potentially be used for these purposes because it is not so different from the traditional method used in a computer. DNA's information density is remarkable: 215 petabytes, or 215 million gigabytes, of data can be stored in just one gram of DNA. First we can encode all data at a molecular level and then store it in a medium that will last a long time and not become outdated like floppy disks. Due to improved techniques for reading and writing DNA, a rapid increase is observed in the amount of data that can possibly be stored in DNA.
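A toy sketch of the basic encoding idea behind DNA storage: two bits of data per DNA base. Real systems add error correction and avoid problematic base runs; this only shows the core mapping.

```python
BASES = "ACGT"   # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    """Map each byte to four DNA bases, two bits per base."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Invert the mapping, turning four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

strand = encode(b"hi")
print(strand)                  # CGGACGGC
assert decode(strand) == b"hi"
```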
The usage of chatbots has increased tremendously over the past few years. A conversational interface is an interface the user can interact with by means of a conversation; the conversation can occur by speech but also by text input. When a conversational interface uses text, it is also described as a chatbot or a conversational medium. In this study, the user experience factors of these so-called chatbots were investigated. The prime objective is "to identify the state of the art in chatbot usability and applied human-computer interaction methodologies, and to research how to assess chatbot usability". Two types of chatbots were formulated, one with and one without personalisation factors. The design of this research is a two-by-two factorial design: the independent variables are the two chatbots (unpersonalised versus personalised) and the specific task or goal the user can perform with the chatbot in the financial field (a simple versus a complex task). The results show that there was no noteworthy interaction effect between personalisation and task on the user experience of chatbots. A significant difference was found between the two tasks with regard to the user experience of chatbots, but this difference was not due to personalisation.
Smart glasses technology (SGT) is a branch of wearable computing that aims to integrate computing devices into today's world. Smart glasses are wearable computer glasses that add information alongside what the wearer sees, and they are also able to change their optical properties at runtime. SGT is one of the modern computing approaches that brings humans and machines together with the help of information and communication technology. Smart glasses mainly consist of an optical head-mounted display, or embedded wireless glasses with a transparent heads-up display or augmented reality (AR) overlay. In recent years they have been used in medical and gaming applications, and also in the education sector. This report focuses on smart glasses, a category of wearable computing that is currently very popular in the media and expected to be a big market in the coming years. It evaluates the differences between smart glasses and other smart devices, introduces many possible applications from different companies for different audiences, and gives an overview of the different smart glasses that are available now and that will become available in the next few years.
Future Applications of Smart Iot Devicesvivatechijri
With the Internet of Things (IoT) gradually emerging as the next stage in the evolution of the Internet, it becomes important to recognize the diverse potential application areas of IoT and the research challenges associated with these applications, ranging from smart cities to healthcare services, smart agriculture, logistics and retail. IoT is expected to permeate practically all aspects of our daily life. Although the existing IoT enabling technologies have improved greatly in recent years, there are still numerous issues that require attention. Since the IoT concept arises from heterogeneous technologies, many research challenges will emerge; in this way, IoT is opening new dimensions of research to be carried out. This paper presents the recent development of IoT technologies and discusses future applications.
Cross Platform Development Using Fluttervivatechijri
Today the development of cross-platform mobile applications is in a state of compromise: developers must either build the same app many times for many operating systems, or accept a lowest-common-denominator solution that trades native speed and accuracy for portability. Flutter is an open-source SDK for creating high-performance, high-fidelity mobile apps for iOS and Android. A few significant features of Flutter are just-in-time (JIT) compilation and ahead-of-time (AOT) compilation into native (system-dependent) machine code, so that the resulting binary can execute natively. Flutter's hot reload functionality helps us to quickly and easily experiment, build UIs, add features, and fix bugs; hot reload works by injecting updated source code files into the running Dart Virtual Machine (VM). With Flutter, we believe we have a solution that gives us the best of both worlds: hardware-accelerated graphics and UI, powered by native ARM code, targeting both popular mobile operating systems.
The Internet has today become an important part of our lives. The World Wide Web, which was once a small and inaccessible data storage service, is now large and valuable. Activities that are partially or completely integrated into the physical world can be carried out to a higher standard, and activities related to our daily life are mapped and linked to counterparts in the digital world. The world has seen great strides in the Internet and in 3D stereoscopic displays, and the time has come to unite the two to bring a new level of experience to users. 3D Internet is a concept that is yet to be realised and requires browsers to be equipped with in-depth visualization and artificial intelligence. Once these elements are included, the 3D Internet concept may become the reality discussed in this paper. In this paper we discuss the features, possible implementation methods, applications, and advantages and disadvantages of the 3D Internet. With this paper we aim to provide a clear view of the 3D Internet and the potential benefits associated with it, weighed against the amount of investment needed to realise it.
Recommender Systems (RS) have emerged as a significant research interest that aims to assist users in finding items online by providing suggestions that closely match their interests. A recommender system is an information filtering technology that presents items on internet sites according to the interests of users, and is implemented in applications such as movies, music, venues, books, research articles, tourism and social media in general. Recommender systems research is usually based on comparisons of predictive accuracy: the higher the evaluation scores, the better the recommender. One of the leading approaches is the use of recommender systems to proactively recommend scholarly papers to individual researchers. In today's world time is valuable, and researchers do not have much time to spend searching for the right articles in their research domain. Recommender systems are designed to suggest to users the items that best fit their needs and preferences, and typically produce a list of recommendations in one of two ways: through collaborative or content-based filtering. Additionally, both publicly available and privately held descriptive metadata are used; the scope of the recommendation is therefore limited to the documents which are either publicly available or for which copyright permits have been granted. Recommendation systems support users and developers of various computer and software systems in overcoming information overload and performing information discovery tasks and approximate computation, among others.
The study of LiFi (Light Fidelity) demonstrates how this technology can be used as a medium of communication similar to WiFi. It is a recent technology proposed by Harald Haas in 2011. The paper explains the process of transmitting data with the help of the illumination of an LED bulb, and the speed and intensity with which data can be transmitted. The authors discuss the technology and explain how a move from WiFi to LiFi could take place. WiFi is generally used for wireless coverage within buildings, while LiFi is capable of high-intensity wireless data coverage in limited areas with no obstacles. This paper presents an introduction to LiFi technology, its performance, modulation and challenges, and can be used as a reference for developing LiFi technology.
Social media platform and Our right to privacyvivatechijri
The advancement of Information Technology has hastened the ability to disseminate information across the globe. In particular, recent trends in 'Social Networking' have led to a surge in personally sensitive information being published on the World Wide Web. While such socially active websites are creative tools for expressing one's personality, they also entail serious privacy concerns; thus, social networking websites could be termed a double-edged sword. It is important for the law to keep abreast of these developments in technology. The purpose of this paper is to demonstrate the limits of extending existing laws to battle privacy intrusions on the Internet, especially in the context of social networking. It is suggested that privacy-specific legislation is the most appropriate means of protecting online privacy. In doing so it is important to maintain a balance with the competing right of expression, the failure of which may hinder the reaping of the benefits offered by Internet technology.
The Google File System (GFS) was innovatively created by Google engineers and was ready for production in record time. The success of Google is attributed to its efficient search algorithm and also to the underlying commodity hardware. As Google runs a large number of applications, Google's goal became to build a vast storage network out of inexpensive commodity hardware, so Google created its own file system, named the Google File System (GFS). GFS is one of the largest file systems in operation. Generally, GFS is a scalable distributed file system for large, distributed, data-intensive applications. The stresses considered in the design phase of GFS include component failures, huge files, and files that are mutated by appending data. The entire file system is organized hierarchically in directories and identified by pathnames. The architecture comprises multiple chunk servers, multiple clients and a single master. Files are divided into chunks, and that is the key design parameter. GFS also uses leases and mutation order in its design to achieve atomicity and consistency. As for fault tolerance, GFS is highly available, with replicas of the chunk servers and the master.
A Study of Tokenization of Real Estate Using Blockchain Technologyvivatechijri
Real estate is by far one of the most trusted investments that people have preferred; being a lucrative investment, it provides a steady source of income in the form of leases and rents. Although there are numerous advantages, one of the key downsides of real estate investment is lack of liquidity. Thus, even though global real estate investments amount to about twice the size of investments in stock markets, the number of investors in the real estate market is significantly lower. Blockchain technology has real potential in addressing the issues of liquidity and transparency, opening the market even to retail investors. Owing to the functionality and flexibility of creating security tokens, which are backed by real-world assets, real estate can be made liquid with the help of Special Purpose Vehicles. Tokens of the ERC-777 standard, which represent fractional ownership of the real estate, can be purchased by an investor, and these tokens can also be listed on secondary exchanges. The robustness of smart contracts can enable the efficient transfer of tokens and the seamless distribution of earnings amongst the investors. This work describes Ethereum blockchain-based solutions to make the existing real estate investment system much more efficient.
A Comprehensive review of Conversational Agent and its prediction algorithm
VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 2 (2019)
ISSN (Online): 2581-7280, Article No. 7, PP 1-6
www.viva-technology.org/New/IJRI
A Comprehensive review of Conversational Agent and its prediction algorithm
Aditya M. Pujari, Rahul M. Dalvi, Kaustubh S. Gawde, Tatwadarshi P. Nagarhalli
(Computer Engineering Department, VIVA Institute of Technology, India)
Abstract: There is an exponential increase in the use of conversational bots. Conversational bots can be described as a platform that can chat with people using artificial intelligence. Recent advancements have made AI capable of learning from data and producing an output. This learning from data can be performed using various machine learning algorithms. Machine learning techniques involve the construction of algorithms that can learn from data and predict the outcome. This paper reviews the efficiency of different machine learning algorithms that are used in conversational bots.
Keywords – Machine learning, Random Forest, Linear Regression, K-means, Chatbot.
1. INTRODUCTION
Artificial Intelligence, also known as machine intelligence, is intelligence demonstrated by machines [14]. Artificial Intelligence is defined as the study of intelligent agents: devices that can learn information from the environment and perform actions that maximize their chances of successfully achieving their goals. Capabilities generally classified as artificial intelligence include successfully understanding human speech, competing at the highest level in strategic games, independently operating cars, and intelligent routing in content delivery networks and military simulations. Artificial intelligence research has been divided into subfields that often fail to communicate with each other. These sub-fields are constructed around technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences [14].
Artificial Intelligence often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute. Complex algorithms are built on top of other, simpler algorithms. Artificial Intelligence algorithms are capable of learning from data; they can enhance themselves by learning new heuristics or can themselves write other algorithms. Some of the algorithms used are Bayesian networks, decision trees and nearest-neighbour. This learning from data using different algorithms is known as machine learning.
Machine learning is an interdisciplinary field that uses statistical techniques to give computer systems the ability to learn from data without being explicitly programmed. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs [13]. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible. Machine learning is related to computational statistics, which also emphasizes prediction-making through the use of computers. Within the field of data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as predictive analytics [13]. These analytical models let researchers, data scientists, engineers, and analysts "construct reliable, repeatable decisions and results" and uncover "hidden insights" through learning from historical relationships and trends in the data.
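As a minimal illustration of this idea of building a model from sample inputs and then making data-driven predictions, the short Python sketch below fits a linear model on invented toy values; none of the numbers come from the surveyed papers.

    # Minimal sketch of "learning from data": fit a model on sample inputs,
    # then predict an outcome for an unseen input. All values are toy data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X_train = np.array([[1.0], [2.0], [3.0], [4.0]])   # historical inputs (features)
    y_train = np.array([2.1, 3.9, 6.2, 8.1])           # observed outcomes

    model = LinearRegression().fit(X_train, y_train)   # build a model from samples
    print(model.predict(np.array([[5.0]])))            # data-driven prediction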
2. CONVERSATIONAL AGENT AND ITS PREDICTION ALGORITHM
2.1 Chatbot for University Related FAQs [18]
Artificial Intelligence conversational agents are becoming popular for web services and systems such as scientific, entertainment and commercial systems, as well as academia. A more effective human-computer interface is achieved by querying the user for missing data in order to provide a satisfactory answer. User inquiries are first taken
care of by an AIML check component, which determines whether the entered inquiry is covered by an AIML script. AIML handles general inquiries and greetings, which are answered using AIML templates.
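A rough, illustrative sketch of such an AIML-style check is shown below: if the user's query matches a known pattern it is answered from a template, otherwise it is handed to a fallback module. The patterns and replies are invented examples, not taken from [18].

    # Sketch of an AIML-style check: match the query against known patterns
    # and answer from a template; otherwise return None so another module
    # can handle it. Patterns and templates are invented for illustration.
    import re

    AIML_TEMPLATES = {
        r"^(hi|hello|hey)\b": "Hello! How can I help you?",
        r"\bexam timetable\b": "The exam timetable is available on the university portal.",
        r"\badmission\b": "Admissions open in June; please see the admissions page.",
    }

    def aiml_check(query):
        q = query.lower().strip()
        for pattern, template in AIML_TEMPLATES.items():
            if re.search(pattern, q):
                return template
        return None   # not covered by AIML; hand off to a fallback handler

    print(aiml_check("Hello"))
    print(aiml_check("Where can I find the exam timetable?"))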
2.2 Intelligent Chatbot for Easy Web-Analytics Insights [19]
The bot responses were analysed and classified as wrong, default or correct. The correct responses are the expected domain-related answers; more importantly, they are factually correct. The wrong responses are the bot's incorrect responses to domain-related queries, which can be avoided by further refining the bot. The default responses are the bot's responses to general or unrelated queries. In this paper, the chatbot enables users to simply type a query related to web analytics and get a response immediately, avoiding the time-consuming task of mastering a web analytics tool. The proposed chatbot is developed using AIML, and the dataset is the raw analytics data.
2.3 Weather Monitoring using Artificial Intelligence [5]
The paper describes weather prediction based on a historical dataset. The authors develop an intelligent weather prediction module, since this has become a necessary tool. The tool considers measures such as maximum temperature, minimum temperature and rainfall over a sample period of days in the available dataset, and the prediction is performed using machine learning techniques. The prediction and analysis are based on Linear Regression, which predicts the next day's weather with good accuracy; more than 90% accuracy is obtained on the dataset. This technique also gives better performance than traditional statistical methods. In this paper, simple Linear Regression is used for weather prediction and gives good accuracy.
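As a hedged illustration of this approach, the sketch below fits a simple linear regression that predicts the next day's maximum temperature from today's readings; the values are toy data standing in for the meteorological dataset used in [5].

    # Sketch of the linear-regression idea from [5]: predict tomorrow's
    # maximum temperature from today's measurements. Values are invented.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row: [max_temp, min_temp, rainfall_mm] observed on day t
    X = np.array([[31.0, 22.0, 0.0],
                  [32.5, 23.1, 1.2],
                  [30.8, 21.5, 4.0],
                  [33.0, 24.0, 0.0]])
    y = np.array([32.5, 30.8, 33.0, 33.4])   # max temperature on day t+1

    model = LinearRegression().fit(X, y)
    print(model.predict(np.array([[33.4, 24.2, 0.5]])))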
2.4 Rainfall prediction based on 100 years of Meteorological Data [8]
This paper mainly focuses on the use of data mining techniques for predicting the rainfall of an area on the basis of dependent features such as precipitation and wet-day frequency. Instead of using only current data, the authors focus on the data collected by the meteorological department over the last 100 years. The paper applies a data mining technique using a Linear Regression model to the data collected for wet-day frequency, precipitation and rainfall. The Linear Regression model was trained and validated against the actual rainfall of the area, and was then used to predict the rainfall in coming years.
The paper uses a data mining technique with linear regression for predicting rainfall and mainly focuses on past datasets.
2.5 Rainfall prediction using modified Linear Regression [7]
Rainfall prediction on historical data is a trending research topic. Existing models use data mining techniques for predicting the state of the atmosphere at a given time for a weather variable such as rainfall, cloud conditions or temperature. The main problem with these systems is that they do not provide an estimate of the predicted rainfall. The proposed system calculates the average of the values to understand the state of the atmosphere. This paper uses the mathematical method of linear regression to predict rainfall in various states of India. The model provides an estimate of rainfall using different atmospheric attributes, such as average temperature and cloud cover. The main advantage of this model is that it estimates the rainfall based on the previously observed correlation between the different parameters.
It uses modified linear regression to perform the rainfall prediction, where the training and test data are formed from the input dataset.
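The following sketch illustrates the same general setup on invented data: atmospheric attributes (average temperature and cloud cover) are regressed against rainfall, with the input data split into training and test sets as described above.

    # Sketch of regressing rainfall on atmospheric attributes, with the
    # input data split into training and test sets. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    avg_temp = rng.uniform(20, 35, 200)
    cloud_cover = rng.uniform(0, 100, 200)
    rainfall = 60 + 0.8 * cloud_cover - 1.5 * avg_temp + rng.normal(0, 5, 200)

    X = np.column_stack([avg_temp, cloud_cover])
    X_train, X_test, y_train, y_test = train_test_split(X, rainfall, test_size=0.25, random_state=0)

    model = LinearRegression().fit(X_train, y_train)
    print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))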
2.6 Regression technique for the prediction of the Stock Price Trend [6]
The paper examines the theory and practice of the regression technique for predicting stock prices using a dataset transformed into an ordinal dataset. The original pre-transformed data source contains data of heterogeneous types used for handling currency values and financial ratios, and data in currency-value and financial-ratio form is used for the computation of stock prices. The data sources are corporate annual reports, which include the balance sheet, income statement and cash flow statement. The paper showed that the outcomes of the regression technique can be improved for the prediction of stock prices.
The outcome of the regression technique was improved when the input data was standardized into a common data type through a customized transformation technique.
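A minimal sketch of that idea, with a generic standard-scaling step standing in for the paper's customized transformation and with invented figures, could look as follows.

    # Bring heterogeneous inputs (currency values, financial ratios) onto a
    # common scale before regression. StandardScaler is a stand-in for the
    # customized transformation described in [6]; all figures are invented.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    # Each row: [revenue in currency units, debt-to-equity ratio]
    X = np.array([[1.2e9, 0.8],
                  [3.4e8, 1.5],
                  [8.9e8, 0.4],
                  [2.1e9, 2.2]])
    y = np.array([14.2, 6.1, 11.8, 9.4])   # toy stock-price targets

    X_std = StandardScaler().fit_transform(X)   # common scale for all columns
    model = LinearRegression().fit(X_std, y)
    print(model.coef_)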
2.7 Prediction of Road Traffic Congestion Based on Random Forest [2]
The paper explains how the problem of traffic congestion is addressed using the random forest classification algorithm. The city area is divided into sections, and a prediction is made of the areas likely to have heavy traffic. This is done by considering environmental conditions such as climate, holidays and road condition. The results show that the traffic prediction model established using the random forest classification algorithm has a prediction accuracy of 87.5% with a low generalization error, so congestion can be predicted effectively.
The random forest algorithm has the characteristics of high robustness, high performance and high practicability, and it can efficiently predict road traffic congestion.
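A small, illustrative sketch of this kind of random-forest congestion classifier is given below; the encoding of the environmental conditions and the sample records are assumptions made for illustration, not the data used in [2].

    # Encode environmental conditions and classify a road section as
    # congested (1) or not (0) with a random forest. Data is invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Columns: climate (0=clear, 1=rain), holiday (0/1), road condition (0=good, 1=roadworks)
    X = np.array([[0, 0, 0],
                  [1, 0, 1],
                  [0, 1, 0],
                  [1, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
    y = np.array([0, 1, 1, 1, 0, 1])   # 1 = heavy traffic observed

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([[1, 1, 0]]))    # e.g. a rainy public holiday on a good road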
2.8 Discovery and Prediction of the Unused Land for Construction Based on Random Forest [9]
In the management of land resources, not only solving the existing problems of the land but also predicting future problems and preventing land misuse are in urgent demand due to urbanization, so the proposed system uses the Random Forest algorithm for prediction. The main step in obtaining accurate results in this paper is the formation of the decision trees. The most frequently used attribute selection measures in decision tree induction are the Information Gain Ratio criterion and the Gini Index; the Random Forest classifier uses the Gini Index as its attribute selection measure. The utilization rate and the period (construction days) between land supply and construction are calculated.
The use of Random Forest for the prediction of unused land is a current trend; as the algorithm helps give maximum accuracy for the required task, it is of great use.
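For reference, the Gini index used as the attribute-selection measure can be computed from the class proportions at a node, as in the short sketch below.

    # Gini index of a node: 1 minus the sum of squared class proportions.
    def gini_index(class_counts):
        total = sum(class_counts)
        if total == 0:
            return 0.0
        return 1.0 - sum((c / total) ** 2 for c in class_counts)

    print(gini_index([50, 50]))    # 0.5 -> maximally impure two-class node
    print(gini_index([100, 0]))    # 0.0 -> pure node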
2.9 Random forest classification of urban landscape using Landsat archive and ancillary data:
Combining seasonal maps with decision level fusion. [1]
In this paper the problem of mapping urbanization is addressed using the Random Forest algorithm along with the Landsat archive and ancillary data. It proposes a methodology to map urban areas with multi-seasonal Landsat data. The Random Forest classifier and decision-level fusion are applied, and importance measures are used to recognise the most important input layers. The annual maps have moderately high (>60%) overall accuracies (OA), and the proposed method can be successfully transferred to other years. The methodology was applied to produce a detailed land-use land-cover map of the National Capital Region of India. A second classification, involving seasonal maps with decision-level fusion based on expert knowledge, resulted in an annual composite map with an increased number of LULC classes.
The paper gives a general idea of applying the random forest algorithm to urban landscape classification.
2.10 Predicting Passenger Flow using Different Influence Factors for Taipei MRT System [10]
According to statistical data, over one million passengers take the MRT in Taipei each day. In this paper, MRT passenger flow is predicted with a random forest, using different factors collected from Taipei Main Station as the training input. There are two categories of factors: the main factor and the support factors. The main factor is the history of passenger flow, which strongly influences the results. The support factors are temporal factors, such as the month, the week and holidays, which can slightly increase the accuracy of the prediction.
In this paper, the system uses only Taipei Main Station passenger flow to test the method. By taking these factors into consideration, it can predict the approximate flow of people on occasions such as public holidays, festivals, weekdays and weekends.
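A hedged sketch of this factor setup is shown below: the main factor is represented by lag features over recent passenger flow, and the support factors by month, weekday and a holiday flag; all values are synthetic rather than the Taipei data used in [10].

    # Main factor: last 7 days of passenger flow (lag features).
    # Support factors: month, weekday and a holiday flag. Data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    days = 120
    flow = 1_000_000 + rng.normal(0, 50_000, days)   # daily passenger counts

    rows, targets = [], []
    for t in range(7, days):
        history = list(flow[t-7:t])                  # main factor
        month, weekday = (t // 30) % 12, t % 7
        holiday = int(weekday >= 5)                  # crude weekend/holiday flag
        rows.append(history + [month, weekday, holiday])
        targets.append(flow[t])

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(rows, targets)
    print(int(model.predict([rows[-1]])[0]))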
2.11 A Novel Clustering Algorithm Combining Niche Genetic Algorithm with Canopy and K-means [4]
Each chromosome is made up of a sequence of gene codings. The number of genes in a chromosome is chosen randomly, where n is the number of data points randomly selected from the given dataset. Canopy clustering is usually employed to capture the number of clusters. The canopy method in this paper automatically captures the number of clusters for the initial population selection, so the number of clusters does not need to be supplied as an input parameter, and it is used to improve the handling of the multiple peak values found by canopy. The canopy method randomly selects a number of clusters (K), and chromosomes consisting of different numbers of clusters (K) are selected using canopies.
The paper defines a clustering algorithm with canopy, which is used to form effective clusters.
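The combination can be sketched roughly as follows: a simplified canopy pass (a single removal threshold standing in for the usual loose/tight threshold pair) estimates the number of clusters, which is then passed to K-means. The threshold and the data are illustrative assumptions, not values from [4].

    # Simplified canopy pass to estimate the number of clusters, followed by
    # K-means with that K. Threshold and data are invented for illustration.
    import numpy as np
    from sklearn.cluster import KMeans

    def canopy_count(points, threshold=3.0):
        remaining = [np.asarray(p) for p in points]
        canopies = 0
        while remaining:
            center = remaining.pop(0)          # start a new canopy
            canopies += 1
            # drop all points close enough to belong to this canopy
            remaining = [p for p in remaining if np.linalg.norm(p - center) > threshold]
        return canopies

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(c, 0.2, size=(30, 2)) for c in (0.0, 5.0, 10.0)])

    k = canopy_count(data)                     # canopy-derived number of clusters
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print("canopy-estimated K:", k)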
2.12 An improved K-means Cluster-based Routing Scheme for Wireless Sensor Networks [3]
This paper proposes a cluster-based routing scheme based on an enhanced version of the K-means approach. The improved version of K-means generates balanced clusters in the network, so that no cluster-head is overloaded relative to the others, unlike LEACH, where one of the generated clusters may contain a large number of nodes while another contains only a few members. In addition, the election of the cluster-heads is performed in a distributed manner after each period.
2.13 K-Means Clustering Algorithm Based on Improved Cuckoo Search Algorithm and Its Application [11]
The K-means algorithm is a clustering algorithm based on partitioning. Because of its simplicity and efficiency, it has become one of the most widely used clustering algorithms. However, the original K-means algorithm has two shortcomings: firstly, the clustering result is greatly affected by the initial class centres; secondly, it easily falls into a local optimum. The original cuckoo search algorithm is influenced by the step size A and the discovery probability P, which control the accuracy of the CS algorithm's global and local search and have a great influence on the optimization effect of the algorithm. The step size and discovery probability of the CS algorithm are set to fixed values at initialization and do not change in subsequent iterations. When the step size is set too large, the search accuracy is reduced and the algorithm converges prematurely; when the step size is too small, the search speed is reduced and the algorithm easily falls into a local optimum.
In short, the K-means algorithm easily falls into local optima, and the cuckoo search (CS) algorithm is affected by the step size.
2.14 An Improved Sampling K-means Clustering Algorithm Based on MapReduce [12]
The literature presents an agglomerative fuzzy K-means clustering algorithm for numerical data, an extension of the standard fuzzy K-means algorithm that introduces a penalty term into the objective function to make the clustering process insensitive to the initial cluster centres. The MapReduce programming model is mainly composed of two abstract classes, Mapper and Reducer: the Mapper processes the partitioned raw data, and the Reducer summarizes the intermediate results produced by the Mapper and obtains the final result.
The paper extends the K-means clustering process to calculate a weight for each dimension in each cluster and uses the weight values to identify the subsets of important dimensions that characterize different clusters.
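Expressed in the Mapper/Reducer shape described above, one K-means iteration can be sketched as follows; the two-cluster data is synthetic and the in-memory map/reduce calls only mimic the structure of a real MapReduce job.

    # One K-means iteration in Mapper/Reducer form: the mapper assigns each
    # point to its nearest centre, the reducer averages points per centre.
    import numpy as np
    from collections import defaultdict

    def mapper(point, centres):
        dists = np.linalg.norm(centres - point, axis=1)
        return int(np.argmin(dists)), point      # emit (centre index, point)

    def reducer(assignments):
        grouped = defaultdict(list)
        for idx, point in assignments:
            grouped[idx].append(point)
        return {idx: np.mean(pts, axis=0) for idx, pts in grouped.items()}

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(c, 0.5, size=(20, 2)) for c in (0.0, 8.0)])
    centres = data[rng.choice(len(data), size=2, replace=False)]

    intermediate = [mapper(p, centres) for p in data]   # "map" phase
    new_centres = reducer(intermediate)                  # "reduce" phase
    print(new_centres)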
3. ANALYSIS TABLE
Sr. No. | Technical paper name | Technique used | Dataset used | Accuracy
1 | Weather Monitoring Using Artificial Intelligence [5] | Simple Linear Regression | Weather dataset, Meteorological Department database | 90%
2 | Rainfall prediction based on 100 years of Meteorological Data [8] | Linear Regression, data mining, K-fold technique | Rainfall dataset, Meteorological Department database | NA
3 | Rainfall prediction using modified Linear Regression [7] | Modified Linear Regression | Rainfall dataset, Meteorological Department database | NA
4 | Regression technique for the prediction of the Stock Price Trend [6] | Regression technique | Companies dataset; Bursa, Malaysia | NA
5 | Prediction of Road Traffic Congestion Based on Random Forest [2] | Random Forest | 1124 records from different sections of the Shanghai traffic management department | 87.05%
6 | Discovery and Prediction of the Unused Land for Construction Based on Random Forest [9] | Random Forest, data mining | 3 different datasets | 75.73%, 91.41%, 84.56%
7 | Random forest classification of urban landscape using Landsat archive and ancillary data [1] | Random Forest, decision-level fusion, multi-seasonal Landsat maps | Landsat dataset | >60%
8 | Predicting Passenger Flow using Different Influence Factors for Taipei MRT System [10] | Random Forest | Daily passenger data | 94.3%
9 | A Novel Clustering Algorithm Combining Niche Genetic Algorithm with Canopy and K-means [4] | Niche genetic algorithm, K-means clustering, Canopy | Iris, Balance and Breast Cancer (UCI database) | 87%
10 | An improved K-means Cluster-based Routing Scheme for Wireless Sensor Networks [3] | K-means | Energy consumption | NA
11 | K-Means Clustering Algorithm Based on Improved Cuckoo Search Algorithm and Its Application [11] | K-means, cuckoo search | Iris, Wine, Seeds, Haberman (UCI database) | NA
12 | An Improved Sampling K-means Clustering Algorithm Based on MapReduce [12] | K-means, parallel sampling | Bag of Words dataset (UCI machine learning library) | 85%
4. CONCLUSION
This paper provides a brief survey of various machine learning algorithms, namely Random Forest, Linear Regression and K-means. The survey indicates that a larger dataset will provide better results, and that numeric prediction, categorical prediction (classification) and clustering can all be implemented using machine learning algorithms. The study shows that Random Forest gives better results on larger datasets, whereas K-means is a faster clustering algorithm compared with other unsupervised learning algorithms, and simple Linear Regression provides good accuracy on different datasets.
REFERENCES
[1] A. Ghosh, R. Sharma, P.K. Joshi, "Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion", Applied Geography Journal, 2014, pp. 31-41.
[2] Y. Liu, H. Wu, "Prediction of Road Traffic Congestion Based on Random Forest", 10th International Symposium on Computational Intelligence and Design, 2017, pp. 361-364.
[3] M. Lehsaini, M.B. Benmahdi, "An improved K-means Cluster-based Routing Scheme for Wireless Sensor Networks", IEEE, 2018.
[4] H. Zhang, Z. Zhou, "A Novel clustering algorithm combining Niche genetic algorithm with canopy and K-means", International Conference on Artificial Intelligence and Big Data, 2018, pp. 26-32.
[5] T.R.V. Anandharajan, G.A. Hariharan, K.K. Vignajeth, R. Jitendiran, "Weather Monitoring Using Artificial Intelligence", International Conference on Computational Intelligence and Networks, 2016.
[6] H.L. Siew, M.J. Nordin, "Regression Techniques for the Prediction of Stock Price Trend", International Conference on Statistics in Science, Business and Engineering (ICSSBE), 2012, pp. 1-5.
[7] S. Prabakaran, P.N. Kumar, P.S.M. Tarun, "Rainfall Prediction Using Modified Linear Regression", ARPN Journal of Engineering and Applied Sciences, 2017, pp. 3715-3718.
[8] S. Kumar, M. Anamika Upadhyay, C. Gola, "Rainfall prediction based on 100 years of Meteorological data", IEEE, 2017, pp. 162-166.
[9] X. Xun, L. Mo, Y. Yu, "Discovery and Prediction of the Unused Land for Construction Based on Random Forest", Fifth International Conference on Agro-Geoinformatics, 2016.
[10] Y.C. Shiao, L. Liu, Q. Zhao, R.C. Chen, "Predicting Passenger Flow using Different Influence Factors for Taipei MRT System", IEEE 8th International Conference on Awareness Science and Technology (iCAST), 2017.
[11] S. Ye, X. Huang, Y. Teng, Y. Li, "K-Means Clustering Algorithm Based on Improved Cuckoo Search Algorithm and Its Application", IEEE 8th International Conference on Awareness Science and Technology, 2018, pp. 447-451.
[12] Z. Ya-Ling, W. Ya-nan, Y. Lil, "An Improved Sampling K-means Clustering Algorithm Based on MapReduce", IEEE 3rd International Conference on Big Data Analysis, 2017.
[13] https://en.wikipedia.org/wiki/Machine_learning, last accessed on 05 Sept. 2018.
[14] https://en.wikipedia.org/wiki/Artificial_intelligence, last accessed on 05 Sept. 2018.
[15] https://en.wikipedia.org/wiki/Linear_regression, last accessed on 04 Sept. 2018.
[16] https://en.wikipedia.org/wiki/Random_forest, last accessed on 05 Sept. 2018.
[17] https://en.wikipedia.org/wiki/K-means_clustering, last accessed on 05 Sept. 2018.
[18] B.R. Ranoliya, N. Raghuwanshi, S. Singh, "Chatbot for University Related FAQs", International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2017.
[19] R. Ravi, "Intelligent Chatbot for Easy Web-Analytics Insights", International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2017.