This paper explains a method for deriving a polynomial equation from a given data set, so that the equation can serve as a representation of the data itself and aggregate queries can be run against it to obtain results. The approach uses the least-squares technique to construct a model of the data by fitting it to a polynomial. Techniques from differential calculus are then applied to this equation to generate aggregated results that represent the original data set.
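A minimal sketch of this idea, assuming NumPy: fit the polynomial by least squares, then answer aggregate queries from the fitted model rather than from the raw rows. The data, the polynomial degree, and the particular aggregates computed are illustrative stand-ins, not the paper's own experiment.

```python
import numpy as np

# Hypothetical data set: x is the ordering attribute, y the measure.
x = np.linspace(0, 10, 200)
y = 3.0 * x**2 - 2.0 * x + 5.0 + np.random.normal(0, 2.0, x.size)

coeffs = np.polyfit(x, y, deg=2)   # least-squares polynomial fit
p = np.poly1d(coeffs)              # the model that stands in for the data

# Aggregates from the model rather than the rows:
dp = p.deriv()                     # differential calculus: slope/trend
extrema = dp.roots                 # candidate minima/maxima of the measure
mean_est = (p.integ()(10) - p.integ()(0)) / 10.0  # integral / range = mean

print("fitted coefficients:", coeffs)
print("stationary points:", extrema)
print("model-based mean over [0, 10]:", mean_est)
```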
The process of converting a data set with many dimensions into a data set with fewer dimensions while ensuring that it conveys similar information concisely.
Concept
R code
THE IMPLICATION OF STATISTICAL ANALYSIS AND FEATURE ENGINEERING FOR MODEL BUI... (ijcseit)
Predictive analysis is the domain of advanced statistics where accuracy matters most. Pairing algorithms with sound statistical implementation yields better outcomes in terms of accurate prediction from data sets. Prolific use of such algorithms leads to simpler mathematical models that require fewer manual calculations. Prediction is the essence of data science and machine learning applications, giving practitioners control over situations. Implementing any model requires proper feature extraction, which supports sound model building and, in turn, precision. This paper is predominantly based on different statistical analyses, including correlation significance and proper categorical data distribution, using feature engineering techniques to improve the accuracy of different machine learning models.
A Preference Model on Adaptive Affinity Propagation (IJECEIAES)
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP), a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs of AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine the value of the parameter "preference" p that results in an optimal clustering solution. To resolve this limitation, we propose a new model of the parameter "preference" p based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run; MAAP means that we omit the adaptive p-scanning algorithm of the original Adaptive AP (AAP) procedure. Experimental results on random non-partition and partition data sets show that (i) the proposed algorithm, MAAP-DDP, is slower than original AP for the random non-partition data set, and (ii) for the random 4-partition data set and real data sets the proposed algorithm succeeds in identifying clusters matching the number of the data sets' true labels, with execution times comparable to those of original AP. Moreover, the MAAP-DDP algorithm proves more feasible and effective than the original AAP procedure.
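A minimal sketch of the preference-from-similarity idea, assuming scikit-learn: choosing p as the median of the similarity distribution is a common heuristic (and scikit-learn's default), not the paper's exact MAAP-DDP formula.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=120, centers=4, random_state=0)

# Similarity = negative squared Euclidean distance, as in the original AP paper.
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
p = np.median(S)  # preference chosen from the similarity distribution

ap = AffinityPropagation(affinity="precomputed", preference=p, random_state=0)
labels = ap.fit_predict(S)
print("clusters found:", len(ap.cluster_centers_indices_))
```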
Achieving Algorithmic Transparency with Shapley Additive Explanations (H2O Lo... (Sri Ambati)
Abstract:
Explainability in the age of the EU GDPR is becoming an increasingly pertinent consideration for machine learning. At QuantumBlack, we address the traditional accuracy-vs-interpretability trade-off by leveraging modern XAI techniques such as LIME and SHAP to enable individualised explanations without necessarily limiting the utility and performance of the otherwise 'black-box' models. The talk focuses on Shapley additive explanations (Lundberg et al. 2017), which integrate Shapley values from game theory for consistent and locally accurate explanations; it provides illustrative examples and touches upon the wider XAI theory.
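A minimal sketch of SHAP in practice, assuming the open-source shap package and a tree model; the data and model are illustrative, not those used in the talk.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)       # fast, exact for tree ensembles
shap_values = explainer.shap_values(X[:5])  # local explanation per row

# Each row's prediction = expected_value + sum of its SHAP values.
print(explainer.expected_value + shap_values.sum(axis=1))
print(model.predict(X[:5]))
```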
Bio:
Dr Torgyn Shaikhina is a Data Scientist at QuantumBlack, STEM Ambassador, and the founder of the Next Generation Programmers outreach initiative. Her background is in decision support systems for Healthcare and Biomedical Engineering with a focus on Machine Learning with limited information.
Re-mining Positive and Negative Association Mining Results (ertekg)
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/re-mining-positive-and-negative-association-mining-results/
Positive and negative association mining are well-known and extensively studied data mining techniques for analyzing market basket data. Efficient algorithms exist to find both types of association, separately or simultaneously. Association mining is performed by operating on the transaction data. Despite being an integral part of the transaction data, pricing and time information has not been incorporated into market basket analysis so far, and additional attributes have been handled using quantitative association mining. In this paper, a new approach is proposed to incorporate price, time, and domain-related attributes into data mining by re-mining the association mining results. The underlying factors behind positive and negative relationships, as indicated by the association rules, are characterized and described through the second data mining stage, re-mining. The applicability of the methodology is demonstrated by analyzing data from the apparel retailing industry, where price markdown is an essential tool for promoting sales and generating increased revenue.
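A minimal sketch of the two-stage idea, assuming the mlxtend package: mine association rules first, then "re-mine" them by attaching a domain attribute (here a hypothetical price per item) to each rule. The paper's own negative-rule handling is not reproduced.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot transaction data.
baskets = pd.DataFrame({
    "shirt": [1, 1, 0, 1, 1, 0],
    "tie":   [1, 1, 0, 1, 0, 0],
    "shoes": [0, 1, 1, 0, 1, 1],
}).astype(bool)

itemsets = apriori(baskets, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=0.0)

# Second stage: join a price attribute onto the mined rules.
price = {"shirt": 40.0, "tie": 15.0, "shoes": 60.0}
rules["antecedent_price"] = rules["antecedents"].apply(
    lambda items: sum(price[i] for i in items))
print(rules[["antecedents", "consequents", "lift", "antecedent_price"]])
```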
OPTIMIZATION IN ENGINE DESIGN VIA FORMAL CONCEPT ANALYSIS USING NEGATIVE ATTR... (csandit)
There is an extensive body of study in the area of engine design covering different methods that try to reduce production costs and optimize the performance of these engines. Mathematical methods based on statistics, self-organizing maps, and neural networks achieve the best results in these designs, but configuring these methods is not easy owing to the high number of parameters that must be measured.
Influence over the Dimensionality Reduction and Clustering for Air Quality Me... (IJAEMSJORNAL)
The current trend in industry is to analyze large data sets and apply data mining and machine learning techniques to identify patterns. The challenge with huge data sets is the high dimensionality associated with them: sometimes, in data analytics applications, large amounts of data produce worse performance, and since most data mining algorithms are implemented column-wise, too many columns restrict performance and make processing slower. Dimensionality reduction is therefore an important step in data analysis; it converts high-dimensional data into a much lower dimension such that maximum variance is explained within the first few dimensions. This paper focuses on multivariate statistical and artificial neural network techniques for data reduction, each with a different rationale for preserving the relationship between input parameters during analysis. Principal Component Analysis, a multivariate technique, and the Self-Organising Map, a neural network technique, are presented. A hierarchical clustering approach is also applied to the reduced data set. A case study of air-quality measurement is considered to evaluate the performance of the proposed techniques.
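A minimal sketch of the PCA-then-cluster pipeline, assuming scikit-learn and SciPy; the SOM branch of the paper is omitted, and the synthetic data and 90% variance threshold are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

X, _ = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
pca = PCA(n_components=0.9)                 # keep 90% of the variance
X_red = pca.fit_transform(X_std)
print("dimensions kept:", X_red.shape[1],
      "explained variance:", pca.explained_variance_ratio_.sum())

Z = linkage(X_red, method="ward")           # hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```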
Opinion mining framework using proposed RB-bayes model for text classification (IJECEIAES)
Data mining is a powerful idea with great potential to anticipate future trends and behaviour. It refers to the extraction of hidden information from large data sets using techniques such as statistical analysis, machine learning, clustering, neural networks, and genetic algorithms. Naive Bayes suffers from the zero-likelihood problem. This paper proposes the RB-Bayes method, based on Bayes' theorem, to remove the zero-likelihood problem in prediction. We also compare our method with a few existing methods, namely naive Bayes and SVM, and demonstrate that it is better than some current techniques and, in particular, can analyze data sets in a better way. When the proposed approach is tested on real data sets, the results show improved accuracy in most cases; the RB-Bayes calculation achieves a precision of 83.333.
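Since RB-Bayes is the paper's own method, here is only a minimal sketch, assuming scikit-learn, of the zero-likelihood problem it targets, with the textbook Laplace-smoothing remedy shown for comparison.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train = ["good great film", "great movie", "bad awful film", "awful plot"]
labels = [1, 1, 0, 0]          # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(train)

test = vec.transform(["good plot"])  # "good" never occurs in class 0

for alpha in (1e-10, 1.0):           # ~no smoothing vs. Laplace smoothing
    clf = MultinomialNB(alpha=alpha).fit(X, labels)
    print(f"alpha={alpha}: P(classes) =", clf.predict_proba(test)[0])
```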
Principal Component Analysis and Clustering (Usha Vijay)
Identifying borrower segments from a given bank data set of 27,000 rows and 77 variables using PROC PRINCOMP. With that many variables, it is important to reduce the data set to a smaller set of variables to derive a feasible conclusion. Owing to multicollinearity, two or more variables can share the same plane in high dimensions. Each row of the data can be envisioned as a point in a 77-dimensional space, and when the data are projected onto an orthonormal basis, certain characteristics of the data are expected to cluster together as principal components. To identify these principal components, PROC PRINCOMP is executed with all the variables except the constant ones (recoveries and collection fees), and a plot of the eigenvalues of all the principal components is derived.
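A minimal sketch, assuming NumPy, of what PROC PRINCOMP computes under the hood: the eigenvalues of the correlation matrix, whose scree plot guides how many principal components to keep. The data here are synthetic stand-ins for the 27,000 x 77 bank table.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(27000, 77))
X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=27000)  # multicollinearity

corr = np.corrcoef(X, rowvar=False)         # 77 x 77 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]    # eigenvalues, largest first

explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.90)) + 1
print("components for 90% of variance:", k)
print("top five eigenvalues:", np.round(eigvals[:5], 3))
```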
Abstract: Learning analytics by nature relies on computational information-processing activities intended to extract from raw data interesting aspects that can be used to gain insights into the behaviours of learners, the design of learning experiences, and so on. A large variety of computational techniques can be employed, all with interesting properties, but it is the interpretation of their results that really forms the core of the analytics process. As a rising subject, data mining and business intelligence are playing an increasingly important role in decision-support activity in every walk of life. The Variance Rover System (VRS) focuses on large data sets obtained from online web visits, categorizing them into clusters according to some similarity, and on the process of predicting customer behaviour and selecting actions to influence that behaviour to benefit the company, so as to take optimized and beneficial decisions about business expansion. Keywords: Analytics, Business intelligence, Clustering, Data Mining, Standard K-means, Optimized K-means
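A minimal sketch of the clustering step such a system relies on, assuming scikit-learn; since the abstract does not specify what its optimized k-means does, the sketch simply contrasts random seeding with k-means++ seeding.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=5, random_state=0)

for init in ("random", "k-means++"):
    km = KMeans(n_clusters=5, init=init, n_init=10, random_state=0).fit(X)
    print(f"{init:>9}: inertia = {km.inertia_:.1f}, iterations = {km.n_iter_}")
```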
Dimensionality reduction by matrix factorization using concept lattice in dat... (eSAT Journals)
Abstract: The concept lattice is an important technique that has become standard in data analytics and knowledge representation in many fields, such as statistics, artificial intelligence, pattern recognition, machine learning, information theory, social networks, information retrieval systems, and software engineering. Formal concepts are adopted as the primitive notion; a concept is defined jointly as a pair consisting of an intension and an extension. FCA can handle huge amounts of data, generating concepts, rules, and data visualizations. Matrix factorization methods have recently received greater exposure, mainly as an unsupervised learning method for latent variable decomposition. In this paper a novel method is proposed to decompose such concepts using Boolean Matrix Factorization for dimensionality reduction, focusing on finding all the concepts and the object intersections. Keywords: Data mining, formal concepts, lattice, matrix factorization, dimensionality reduction.
Secure Routing for MANET in Adversarial Environment (IJCERT)
A collection of mobile nodes connected by wireless communication is known as an ad hoc network. A major requirement on the MANET is to provide unidentifiability and unlinkability for mobile nodes. Various secure routing protocols have been proposed, but this requirement is not satisfied: the existing protocols are vulnerable to attacks based on fake routing packets or denial-of-service broadcasting, even when node identities are protected by pseudonyms. We propose a new secure routing protocol that provides anonymity, named authenticated anonymous secure routing (AASR), to satisfy the requirements of mobile networks and defend against these attacks. Route request packets are authenticated by a group signature and a public-key infrastructure, to defend against potential attacks without exposing node identities. The concept of key-encrypted onion routing, with a route secret verification message, prevents intermediate nodes from inferring the real destination. Simulation results demonstrate the effectiveness of the proposed AASR protocol, with improved performance compared to the existing protocols.
Green Computing: A Methodology of Saving Energy by Resource Virtualization (IJCERT)
In the past couple of years the computing standard has moved to remote data centres, with software and hardware services available on a pay-per-use basis. This is called cloud computing, in which the client pays for the services used. The cloud provides Software as a Service, Platform as a Service, and Infrastructure as a Service, all delivered through remote data centres (since the information is scattered/distributed over the web). As software applications and other services migrate to remote data centres, managing those data centres becomes imperative, and data-centre management faces the problem of power consumption: at present, cloud-computing-based infrastructure wastes a great amount of power and produces CO2, since many servers do not have good-quality cooling systems. Green computing can enable more energy-efficient use of computing power. This paper presents the requirements of green computing and techniques to save energy through different approaches.
An Enhanced Predictive Proportion using TMP Algorithm in WSN Navigation (IJCERT)
A collection of mobile nodes communicating over wireless links is known as an ad hoc network. A major requirement on a MANET is to provide unidentifiability and unlinkability for mobile nodes. During the last few decades, continuous progress in wireless communications has opened new research fields in computer networking, with the goal of extending data-network connectivity to environments where wired solutions are impracticable. Among these, vehicular traffic is attracting increasing attention from both academia and industry, owing to the number and importance of the related applications, ranging from road safety to traffic control and mobile entertainment. Vehicular Ad hoc Networks (VANETs) are self-organized networks built up from moving vehicles, and are part of the broader class of Mobile Ad hoc Networks (MANETs). Because of their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. One of the main challenges posed by VANET simulation is the faithful characterization of vehicular mobility at both macroscopic and microscopic levels, which leads to realistic non-uniform distributions of cars and velocities, and unique connectivity dynamics. Various secure routing protocols have been proposed, but the requirement is not satisfied; the existing protocols are vulnerable to attacks using fake routing packets. Simulation results demonstrate the effectiveness of the proposed AODV protocol, with improved performance compared to the existing protocols.
Ontology Based PMSE with Manifold Preference (IJCERT)
A MANET (Mobile Ad hoc Network) is a self-governing system in which different mobile nodes are connected by wireless links. MANETs comprise mobile nodes that are free to move in and out of the network. Nodes are the devices or systems, such as laptops and mobile phones, participating in the network; they can operate as router, host, or both simultaneously, and can form arbitrary topologies according to their connectivity with other nodes. Security in MANETs is the prime concern for the fundamental working of the network. MANETs frequently suffer from security threats because of features such as a dynamically changing topology, an open medium, a lack of central management and monitoring, cooperative algorithms, and the absence of an apparent security mechanism. These factors draw attention to securing MANETs against such threats. In this paper we study security attacks in MANETs and their consequences; the proposed technique for black-hole detection is hybrid in nature, combining the benefits of proactive and reactive protocols, and is compared with AODV.
Multiple Encryption using ECC and Its Time Complexity Analysis (IJCERT)
With the rapid growth of information technology in the present era, secure communication, strong data-encryption techniques, and trusted third parties are major topics of study. Developing robust encryption algorithms to secure sensitive data is of great significance among researchers at present. The conventional encryption methods used today may not be sufficient, so new ideas must be designed, analysed, and fitted into the existing security systems to protect our data from unauthorized access. Designing an effective encryption/decryption algorithm to enhance data security is a challenging task where computation, complexity, and robustness are concerned. Multiple encryption is the process of applying encryption over a single encryption process for a number of iterations. Elliptic Curve Cryptography (ECC) is a well-known and well-accepted cryptographic algorithm used in many applications today. In this paper, we discuss multiple encryption, analyse the computational overhead in the process, and study the feasibility of practical application. We use ECC as a multiple-ECC algorithm and analyse the degree of security, encryption/decryption computation time, and complexity of the algorithm. Performance is evaluated by comparing encryption and decryption times for single ECC and multiple ECC with the help of various examples.
Software Engineering Domain Knowledge to Identify Duplicate Bug Reports (IJCERT)
Earlier, many methodologies were proposed for detecting duplicate bug reports by comparing the textual content of bug reports to subject-specific contextual material, namely lists of software-engineering terms such as non-functional requirements and architecture keywords. When a bug report includes a word from these word-list contexts, the report is considered to be linked with that context, and this information is likely to improve bug-deduplication methods. Here, we recommend a technique to partially automate the extraction of contextual word lists from the software-engineering literature. Evaluating this software-literature-context technique on real-world bug reports produces useful results, indicating that this semi-automated method has the potential to significantly decrease the manual effort spent on contextual bug deduplication while suffering only a minor loss in accuracy.
Intelligent Device-to-Device Communication Using IoT (IJCERT)
The Internet is becoming an intrinsic part of human life. There are many human users of the Internet, but devices will be the main users in the Internet of Things (IoT). These devices communicate with each other efficiently and gather information to transfer data to a particular device. The quality of this information depends on how smart the devices are. IoT coverage is very wide, consisting of things or devices connected in a network, such as cameras, Android phones, and sensors. Once all these devices are connected with each other, they are capable of processing smartly and satisfying the basic needs of the environment. The communication between the devices is thus achieved using various technologies and devices.
A System for Denial of Service Attack Detection Based On Multivariate Corelat... (IJCERT)
In the computing world, a denial-of-service (DoS) attack is an attempt to make a machine or network resource unavailable to its regular users. A DoS attack reduces the efficiency of the server, so to restore that efficiency it is necessary to identify DoS attacks; hence Multivariate Correlation Analysis (MCA) is used. This approach employs the triangle area for obtaining correlation information between traffic features. Based on the extracted data, the denial-of-service attack is discovered and the response to the particular user is blocked, which maximizes efficiency. Our proposed system is evaluated using the KDD Cup 99 data set, and the influence of the data on its performance is examined.
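A minimal sketch, assuming NumPy, of the triangle-area idea behind MCA; the features and the deviation threshold are made up for illustration.

```python
import numpy as np

def triangle_area_map(x):
    """Upper-triangular matrix of pairwise triangle areas for one record.

    For each feature pair (i, j), the triangle with vertices at the origin,
    (x_i, 0) and (0, x_j) has area |x_i * x_j| / 2.
    """
    n = x.size
    tam = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            tam[i, j] = abs(x[i] * x[j]) / 2.0
    return tam

# Hypothetical records with a handful of traffic features each.
normal = np.array([2.0, 1.0, 0.5, 3.0])
attack = np.array([9.0, 0.1, 0.5, 3.0])

profile = triangle_area_map(normal)
deviation = np.abs(triangle_area_map(attack) - profile).sum()
print("deviation from normal profile:", deviation)
```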
OPTIMIZATION IN ENGINE DESIGN VIA FORMAL CONCEPT ANALYSIS USING NEGATIVE ATTR... (cscpconf)
There is an extensive body of study in the area of engine design covering different methods that try to reduce production costs and optimize the performance of these engines. Mathematical methods based on statistics, self-organizing maps, and neural networks achieve the best results in these designs, but configuring these methods is not easy owing to the high number of parameters that must be measured. In this work we extend an algorithm for computing implications between attributes with positive and negative values to obtain the mixed concept lattice, and we also propose a theoretical method based on these results for engine simulators, adjusting specific and different elements to obtain optimal engine configurations.
(Gaurav Sawant & Dhaval Sawlani) BIA 678 Final Project Report (Gaurav Sawant)
PROJECT REPORT
• Performed memory-based collaborative filtering techniques like cosine similarity and Pearson's r, and model-based matrix factorization techniques like the Alternating Least Squares (ALS) method (see the sketch after this list)
• Studied the scalability of these methods on local machines & on Hadoop clusters
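A minimal sketch of the memory-based side of the report, assuming NumPy: item-item collaborative filtering with cosine similarity on a tiny hypothetical rating matrix (the ALS-on-Hadoop part is out of scope here).

```python
import numpy as np

R = np.array([[5, 3, 0, 1],      # rows: users, cols: items, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)   # item-item cosine similarity

user = 1                                      # score items for user 1
scores = (R[user] @ sim) / (np.abs(sim).sum(axis=0) + 1e-9)
print("predicted preference for unrated items:",
      np.where(R[user] == 0, scores, np.nan))
```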
GENETIC ALGORITHM FOR FUNCTION APPROXIMATION: AN EXPERIMENTAL INVESTIGATION (ijaia)
Function approximation is a popular engineering problem used in system identification and equation optimization. Owing to the complex search space it involves, AI techniques have been used extensively to find the best curves matching the real behaviour of the system. Genetic algorithms are known for their fast convergence and their ability to find an optimal structure for the solution. We propose using a genetic algorithm as a function approximator, focusing on the polynomial form of the approximation. After implementing the algorithm, we report our results and compare them with the real function output.
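A minimal sketch of a genetic algorithm evolving polynomial coefficients toward a target function, assuming NumPy; the target, the operators (tournament selection, blend crossover, Gaussian mutation), and the parameters are assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 100)
target = np.sin(np.pi * xs)                 # function to approximate

DEG, POP, GENS = 5, 60, 300

def fitness(c):                             # negative mean squared error
    return -np.mean((np.polyval(c, xs) - target) ** 2)

pop = rng.normal(size=(POP, DEG + 1))       # coefficient vectors
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    # tournament selection: each child picks the fitter of two random parents
    a, b = rng.integers(POP, size=(2, POP))
    parents = pop[np.where(fit[a] > fit[b], a, b)]
    # blend crossover + Gaussian mutation
    w = rng.random((POP, 1))
    children = w * parents + (1 - w) * parents[rng.permutation(POP)]
    children += rng.normal(scale=0.05, size=children.shape)
    children[0] = pop[fit.argmax()]         # elitism: keep the current best
    pop = children

best = pop[np.array([fitness(c) for c in pop]).argmax()]
print("approximation MSE:", -fitness(best))
```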
Performance Comparison of Machine Learning Algorithms (Dinusha Dilanka)
In this paper we compare the performance of two classification algorithms. It is useful to differentiate algorithms based on computational performance rather than classification accuracy alone: although classification accuracy between the algorithms is similar, computational performance can differ significantly and can affect the final results. The objective of this paper is therefore to perform a comparative analysis of two machine learning algorithms, namely K-nearest-neighbour classification and logistic regression. We consider a large data set of 7,981 data points and 112 features and examine the performance of the above-mentioned algorithms. The processing time and accuracy of the different machine learning techniques are estimated on the collected data set, with 60% used for training and the remaining 40% for testing. The paper is organized as follows: Section I includes the introduction and background analysis of the research, and Section II the problem statement. Section III briefly describes our application, the data-analysis process, the testing environment, and the methodology of our analysis. Section IV comprises the results of the two algorithms. Finally, the paper concludes with a discussion of future research directions that address the limitations of the current methodology.
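A minimal sketch of that comparison, assuming scikit-learn; the synthetic data only matches the stated shape (7,981 points, 112 features), and k = 5 for KNN is an assumed setting the paper does not specify.

```python
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=7981, n_features=112, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6,
                                          random_state=0)  # 60/40 split

for model in (KNeighborsClassifier(n_neighbors=5),
              LogisticRegression(max_iter=1000)):
    t0 = time.perf_counter()
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{type(model).__name__:>22}: accuracy={acc:.3f}, "
          f"time={time.perf_counter() - t0:.2f}s")
```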
Analysis of Common Supervised Learning Algorithms Through Application (aciijournal)
Supervised learning is a branch of machine learning wherein the machine is equipped with labelled data, which it uses to create sophisticated models that can predict the labels of related unlabelled data. The literature in the field offers a wide spectrum of algorithms and applications; however, there is limited research comparing the algorithms, making it difficult for beginners to choose the most efficient algorithm and tune it for their application.
This research aims to analyse the performance of common supervised learning algorithms when applied to sample data sets, along with the effect of hyper-parameter tuning. For the research, each algorithm is applied to the data sets, and the validation curves (for the hyper-parameters) and learning curves are analysed to understand the sensitivity and performance of the algorithms. The research can guide new researchers aiming to apply supervised learning algorithms to better understand, compare, and select the appropriate algorithm for their application. Additionally, they can tune the hyper-parameters for improved efficiency and create ensembles of algorithms to enhance accuracy.
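A minimal sketch of the two diagnostics the abstract relies on, assuming scikit-learn: a validation curve over one hyper-parameter and a learning curve over training-set size, here for an SVM on a sample data set.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve, validation_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Sensitivity to the hyper-parameter gamma.
param_range = np.logspace(-6, -1, 5)
train_s, valid_s = validation_curve(SVC(), X, y, param_name="gamma",
                                    param_range=param_range, cv=5)
for g, v in zip(param_range, valid_s.mean(axis=1)):
    print(f"gamma={g:.0e}: cv accuracy={v:.3f}")

# Performance as a function of training-set size.
sizes, tr, va = learning_curve(SVC(gamma=1e-3), X, y, cv=5,
                               train_sizes=np.linspace(0.1, 1.0, 5))
for n, v in zip(sizes, va.mean(axis=1)):
    print(f"n={n:4d}: cv accuracy={v:.3f}")
```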
In the present day a huge amount of data is generated every minute and transferred frequently. Although the data is sometimes static, it is most commonly dynamic and transactional, with newly generated data constantly added to the existing data. To discover knowledge from this incremental data, one approach is to run the algorithm repeatedly on the modified data sets, which is time consuming. To analyse the data sets properly, the construction of an efficient classifier model is necessary; the objective of such a classifier is to classify unlabelled data into appropriate classes. The paper proposes a dimension-reduction algorithm that can be applied in a dynamic environment to generate a reduced attribute set as a dynamic reduct, and an optimization algorithm that uses the reduct to build the corresponding classification system. The method analyses new data as it becomes available and modifies the reduct accordingly to fit the entire data set, from which interesting optimal classification rule sets are generated. The rough-set-theory concepts of discernibility relation, attribute dependency, and attribute significance are integrated to generate the dynamic reduct set, and optimal classification rules are selected using the PSO method, which not only reduces complexity but also helps achieve higher accuracy of the decision system. The proposed method has been applied to benchmark data sets collected from the UCI repository; dynamic reducts are computed, and optimal classification rules are generated from them. Experimental results show the efficiency of the proposed method.
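A minimal sketch, in plain Python, of the rough-set notions the abstract builds on: equivalence classes under condition attributes, the positive region, and the attribute-dependency degree gamma_C(D) = |POS_C(D)| / |U|. The decision table is a toy, and the PSO rule-selection stage is not shown.

```python
from collections import defaultdict

# Hypothetical decision table: rows = objects, last column = decision.
table = [("sunny", "hot", "no"),
         ("sunny", "mild", "no"),
         ("rainy", "hot", "yes"),
         ("rainy", "mild", "yes"),
         ("sunny", "hot", "yes")]

def dependency(cond_idx):
    """gamma: fraction of objects whose condition-class is decision-pure."""
    blocks = defaultdict(list)
    for row in table:
        blocks[tuple(row[i] for i in cond_idx)].append(row[-1])
    pos = sum(len(b) for b in blocks.values() if len(set(b)) == 1)
    return pos / len(table)

print("gamma({outlook}):", dependency([0]))        # 0.4
print("gamma({outlook, temp}):", dependency([0, 1]))  # 0.6
```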
Stock Price Trend Forecasting using Supervised Learning (Sharvil Katariya)
The aim of the project is to examine a number of different forecasting techniques to predict future stock returns based on past returns and numerical user-generated content to construct a portfolio of multiple stocks in order to diversify the risk. We do this by applying supervised learning methods for stock price forecasting by interpreting the seemingly chaotic market data.
Understanding the Applicability of Linear & Non-Linear Models Using a Case-Ba... (ijaia)
This paper uses a case-based study, product sales estimation on real-time data, to help us understand the applicability of linear and non-linear models in machine learning and data mining. A systematic approach is used to address the problem of sales estimation for a particular set of products in multiple categories by applying both linear and non-linear machine learning techniques to a set of features selected from the original data. Feature selection is a process that reduces the dimensionality of the data set by excluding those features that contribute minimally to the prediction of the dependent variable. The next step is training the model, done with multiple techniques from the linear and non-linear domains, among the best in their respective areas. Data remodelling is then performed to extract new features from the data set by changing its structure, and the performance of the models is checked again; data remodelling often plays a crucial role in boosting classifier accuracy by changing the properties of the given data set. We then explore and analyse the reasons why one model performs better than the other, and thereby develop an understanding of the applicability of linear and non-linear machine learning models. With that as the primary goal, we also aim to find the classifier with the best possible accuracy for product sales estimation in the given scenario.
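A minimal sketch of that pipeline, assuming scikit-learn: select the most informative features, then fit one linear and one non-linear model on the same reduced data. The data and model choices are illustrative stand-ins for the sales case study.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=40, n_informative=8,
                       noise=10.0, random_state=0)

X_sel = SelectKBest(f_regression, k=8).fit_transform(X, y)  # feature selection
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    r2 = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{type(model).__name__:>21}: R^2 = {r2:.3f}")
```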
Data Science - Part V - Decision Trees & Random Forests (Derek Kane)
This lecture provides an overview of decision tree machine learning algorithms and random forest ensemble techniques. The practical example includes diagnosing Type II diabetes and evaluating customer churn in the telecommunication industry.
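A minimal sketch of the tree-versus-forest contrast the lecture covers, assuming scikit-learn; synthetic data stands in for the diabetes and churn examples.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{type(model).__name__:>22}: mean cv accuracy = {scores.mean():.3f}")
```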
Parametric Optimization of Rectangular Beam Type Load Cell Using Taguchi Method (IJCERT)
In this work, a rectangular beam-type load cell is considered for stress and strain analysis using the finite element method. The stress analysis is carried out to minimize the weight of the rectangular beam-type load cell without exceeding the allowable stress. The intention of the work is to create the geometry of the rectangular beam-type load cell and find the optimum solution. The FEM software HyperWorks 11.0.0.39 is used for parametric optimization of the load cell: if the stress value is within the permissible range, certain dimensions are modified to reduce the amount of material needed, and the procedure is repeated until the design changes satisfy all the criteria. Experimental verification is carried out by the photo-elasticity technique with the help of suitable instrumentation such as a polariscope. Using the photo-elasticity technique, the results are cross-checked and come very close to those of the FEM technique; the experimental results are compared with the FEM results. With the aid of these tools, the designer can develop and modify the design parameters from the initial design stage to finalize the basic geometry of the load cell.
Robust Resource Allocation in Relay Node Networks for Optimization Process (IJCERT)
Overlay routing has emerged as a promising approach to improving the reliability and efficiency of the Internet. In one-hop overlay source routing, when a given primary path experiences link failure or performance degradation, the source can reroute traffic to the destination via a deliberately placed relay node. However, excessive traffic passing through the same relay node may cause frequent packet loss and delay jitter, which can degrade the throughput and utilization of the network. To overcome this problem, we propose a Load-Balanced One-hop Overlay Multipath Routing algorithm (LB-OOMR), in which traffic is first split at the source edge nodes and then transmitted along multiple one-hop overlay paths. To determine an optimal split ratio for the traffic, we formulate the problem as a linear program (LP) whose goal is to minimize the worst-case network congestion ratio. Since it is hard to solve this LP in practical time, a heuristic algorithm is introduced to select the relay nodes for building the disjoint one-hop overlay paths, greatly reducing the computational complexity of the LP algorithm. Simulations based on a real ISP network and a synthetic Internet topology show that our proposed algorithm can reduce the network congestion ratio significantly and achieve a high-quality overlay routing service.
A Survey on: Sound Source Separation Methods (IJCERT)
Nowadays, multimedia databases are growing rapidly on a large scale. For the effective management and exploration of large amounts of music data, the technology of singer identification has been developed. With its help, songs performed by a particular singer can be clustered automatically. To improve the performance of singer identification, technologies have emerged that can separate the singing voice from the music accompaniment. One such method is non-negative matrix partial co-factorization. This paper studies the different techniques for separating the singing voice from the music accompaniment.
An Image representation using Compressive Sensing and Arithmetic Coding (IJCERT)
The demand for graphics and multimedia communication over the Internet is growing day by day. Generally, the coding efficiency achieved by CS measurements is below that of the widely used wavelet coding schemes (e.g., JPEG 2000). In the existing wavelet-based CS schemes, the DWT is mainly applied for sparse representation, and the correlation of DWT coefficients has not yet been fully exploited. To improve coding efficiency, the statistics of DWT coefficients have been investigated, and a novel CS-based image representation scheme has been proposed that considers the intra- and inter-similarity among DWT coefficients. Multi-scale DWT is first applied, and the low- and high-frequency subbands are coded separately, owing to the fact that scaling coefficients capture most of the image energy. At the decoder side, two different recovery algorithms are presented to exploit the correlation of scaling and wavelet coefficients. In essence, the proposed CS-based coding method can be viewed as a hybrid compressed-sensing scheme that gives better coding efficiency than other CS-based coding methods.
Hard starting every initial stage: Study on Less Engine Pulling Power (IJCERT)
An automobile engine is a prime mover: a machine in which power is applied to do work, often by converting heat energy into mechanical work. The power unit of an automobile is the IC engine. The power developed by the engine depends on the calorific value of the fuel used; this value is equal to the total heat produced by the combustion of hydrogen and carbon. The fuel injection pump (FIP) has to supply varying quantities of fuel in accordance with the engine load, and in-line pumps, correctly positioned, connect the fuel supply from the gallery. Diesel engines compress pure air during the compression stroke and must have some means of forcing fuel into the combustion chamber at a pressure higher than the compression pressure. The injection nozzle atomizes the fuel into very small droplets (3 to 30 microns) and delivers it to the combustion chamber; this is achieved through a small orifice.
Data Security Using Elliptic Curve Cryptography (IJCERT)
Cryptography is used to provide data security. In existing cryptographic techniques, key generation takes place randomly and requires a shared key. If the shared key is accessed by an unauthorized user, security breaks down; hence the existing problems are addressed to give more security to data. The proposed system uses an algorithm called Elliptic Curve Cryptography (ECC). ECC generates keys using points on a curve, and the encryption and decryption operations also take place through the curve. In the proposed system, encryption and key generation are performed rapidly.
SecCloudPro: A Novel Secure Cloud Storage System for Auditing and Deduplication (IJCERT)
In this paper, we present integrity auditing and secure deduplication of cloud data using novel secure frameworks. Data outsourced to cloud storage is usually only semi-trusted because of weak security at the storage layer; with a weak cryptosystem, data may be exposed or modified by hackers while being stored or shared. To protect users' data privacy and security, we propose a novel advanced secure framework, SecCloudPro, which keeps the cloud system secure and verifiable using a verifier (TPA) service on the cloud server. Additionally, our framework performs data deduplication in a secure manner in order to save both cloud storage space and data-transfer capacity, i.e. bandwidth.
Handling Selfishness in Replica Allocation over a Mobile Ad-Hoc Network (IJCERT)
A MANET is a collection of mobile devices that can communicate with each other without the use of centralized administration. One interesting application of MANETs is file sharing. File sharing in a MANET is similar to regular file sharing; the difference is that it allows a user to access the data or memory of only those nodes connected to it. This file sharing often leads to network partitioning, i.e. dividing a network into two different networks, due to which nodes may act selfishly. The selfishness of some nodes may reduce performance in terms of accessing data. The proposed system uses the SCF-tree technique to build a tree of nodes that share their data as replicas, and as a result it detects selfish nodes in the network. The replica ensures that performance is not degraded.
GSM Based Device Controlling and Fault Detection (IJCERT)
Mobile communication has expanded to such an extent that it can be applied to the control of electrical devices. This project makes use of this capability of the mobile phone, together with embedded technology, to control three electrical devices, extensible up to eight. Apart from controlling, it also senses the devices: a user can know the status of the devices and is additionally notified if any fault is detected. In this project, controlling and sensing are done for three electrical devices only; according to user needs, both can be expanded.
Efficient Multi Server Authentication and Hybrid Authentication Method (IJCERT)
Passwords are used for authentication in many major client-server systems, websites, and so on. A client and a server share a password and use password-authenticated key exchange to authenticate each other and establish a cryptographic key. In this scenario, all the passwords are stored in a single server which authenticates the client; if that server stops working or is compromised, for example by hacking or an insider attack, the passwords stored in its database become publicly known. This system proposes a setting with multiple servers, so that the password can be split across those servers for authenticating the client; if one server is compromised, the attacker still cannot retrieve the client's information from the compromised server. The system uses the Advanced Encryption Standard (AES) algorithm for encryption and key exchange, and formulae to store the password across multiple servers. It also adds hybrid authentication as another phase to make it more secure and efficient, using an SMS-integration API for two-step verification.
Online Payment System using Steganography and Visual CryptographyIJCERT
In recent times the e-commerce market has grown rapidly. Major concerns for customers in online shopping are debit or credit card fraud and personal information security. Identity theft and phishing are common threats in online shopping. Phishing is a method of stealing personal confidential information, such as usernames, passwords, and credit card details, from victims; it is a social engineering technique used to deceive users. In this paper a new method is proposed that uses text-based steganography and visual cryptography. It represents a new approach that exposes only limited information for fund transfer. This method secures the customer's data, increases customer confidence, and prevents identity theft.
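The visual cryptography half of such a scheme can be illustrated with a (2, 2) share split of a binary image: one share is random and the other depends on it, so stacking (OR-ing) the two reveals the secret while either share alone is pure noise. A minimal numpy sketch, with make_shares as a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(img: np.ndarray):
    """(2,2) visual-cryptography-style split of a binary image:
    share1 is random; share2 equals share1 where the pixel is white
    and its complement where the pixel is black, so stacking (OR)
    always reveals the black pixels."""
    share1 = rng.integers(0, 2, img.shape, dtype=np.uint8)
    share2 = np.where(img == 1, 1 - share1, share1)
    return share1, share2

secret = np.array([[1, 0], [0, 1]], dtype=np.uint8)  # 1 = black
s1, s2 = make_shares(secret)
stacked = s1 | s2
# Black secret pixels always stack to 1; white pixels remain random noise.
assert np.all(stacked[secret == 1] == 1)
```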
Prevention of Packet Hiding Methods In Selective Jamming AttackIJCERT
The shared nature of the wireless medium presents various challenges among different sets of users. Wireless networks are very important in the real world and provide good transfer rates, and they overcome the limitations of existing wired networks, but authentication is often ignored. These networks can also be a source of various types of jamming attacks. Various methods are available for the analysis and detection of jamming attacks, but they sometimes fail. Under an external threat model, analyzing and reporting a jamming attack is relatively easy; it is quite difficult under an internal threat model, where insiders use knowledge of network secrets and protocols to launch attacks with very low effort. Various cryptographic techniques are implemented to prevent these attacks. The main goal of this project is to protect information at the wireless physical layer and allow safe transmission between communicating nodes even when an attacker is present.
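Cryptographic packet hiding in this setting is often built on commitment-style schemes: the packet is transmitted in hidden form, and the key is revealed only after transmission completes, too late for a selective jammer to classify the packet and jam it. The commit/open_commitment flow below is an illustrative sketch using a SHA-256-derived keystream, not the paper's exact construction.

```python
import hashlib, os

def commit(packet: bytes):
    """Hide a packet under a keystream derived from a fresh random key."""
    key = os.urandom(16)
    stream = hashlib.sha256(key).digest() * (len(packet) // 32 + 1)
    hidden = bytes(p ^ s for p, s in zip(packet, stream))
    return hidden, key

def open_commitment(hidden: bytes, key: bytes) -> bytes:
    """Recover the packet once the key is revealed."""
    stream = hashlib.sha256(key).digest() * (len(hidden) // 32 + 1)
    return bytes(h ^ s for h, s in zip(hidden, stream))

hidden, key = commit(b"routing update: next hop B")
# A jammer observes only `hidden` in real time; `key` is sent after the
# packet has been fully transmitted, defeating selective jamming.
assert open_commitment(hidden, key) == b"routing update: next hop B"
```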
Speech recognition is the next big step that technology needs to take for general users. Automatic Speech Recognition (ASR) will play a major role in bringing new technology to users. Applications of ASR include speech-to-text conversion, voice input in aircraft, data entry, and voice user interfaces such as voice dialing. Speech recognition involves extracting features from the input signal and classifying them into classes using a pattern matching model; the features are obtained with a feature extraction method. This paper presents a general study of automatic speech recognition and various methods for building an ASR system. General techniques that can be used to implement an ASR include artificial neural networks, hidden Markov models, and the acoustic-phonetic approach.
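A very simplified ASR front end splits the signal into overlapping windowed frames and computes log-magnitude FFT features per frame, a crude stand-in for MFCCs; these vectors then feed a pattern-matching back end such as an HMM or a neural network. The frame_features helper and the 25 ms / 10 ms frame and hop sizes below are illustrative assumptions.

```python
import numpy as np

def frame_features(signal: np.ndarray, rate: int,
                   frame_ms: float = 25.0, hop_ms: float = 10.0):
    """Split the signal into overlapping Hann-windowed frames and
    return log-magnitude FFT features, one vector per frame."""
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    window = np.hanning(frame)
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        chunk = signal[start:start + frame] * window
        spectrum = np.abs(np.fft.rfft(chunk))
        feats.append(np.log(spectrum + 1e-8))
    return np.array(feats)

rate = 16000
t = np.arange(rate) / rate
demo = np.sin(2 * np.pi * 440 * t)       # stand-in for a speech signal
print(frame_features(demo, rate).shape)  # (n_frames, frame//2 + 1)
```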
Real Time Detection System of Driver FatigueIJCERT
A leading cause of vehicle crashes and accidents is driver distraction. With the rapid development of motorization, driver fatigue has become a very serious traffic problem. Reasons for traffic accidents include driving after alcohol consumption, driving at night, driving without rest, aging, sleepiness, and fatigue caused by continuous driving, long working hours, and night shifts. The aim of this project is to reduce the rate of accidents caused by these factors. This paper presents a method for detecting early signs of fatigue using feature extraction and a Haar classifier, and for delivering information about the driver's condition and whereabouts to emergency contact numbers.
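Haar-cascade eye detection of this kind can be sketched with OpenCV's bundled cascades, treating a detected face with no detectable eyes as a closed-eye frame. The eyes_closed helper and detection parameters below are illustrative; the paper's full fatigue scoring and alerting pipeline is not reproduced.

```python
import cv2

# Load OpenCV's bundled Haar cascades for faces and eyes.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_closed(gray_frame) -> bool:
    """Fatigue cue: a detected face region containing no detected
    eyes is treated as 'eyes closed' for this frame."""
    for (x, y, w, h) in face_cascade.detectMultiScale(gray_frame, 1.3, 5):
        roi = gray_frame[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi, 1.1, 10)) == 0:
            return True
    return False

# Sustained closed-eye frames (e.g. over ~2 s of video) would then
# trigger the alert to the emergency contacts described in the paper.
```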
A Survey on Web Page Recommendation and Data PreprocessingIJCERT
In today’s era, as we all know, internet technologies are growing rapidly, and with them Web page recommendation is also improving. The aim of a Web page recommender system is to predict the Web page or pages that will be visited from a given page of a website. Data preprocessing is a basic and essential part of Web page recommendation; it consists of cleaning and structuring the data to prepare it for pattern extraction. In this paper, we discuss and focus on Web page recommendation and the role of data preprocessing in it, considering how data preprocessing relates to Web page recommendation.
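Typical preprocessing steps include dropping malformed log entries, filtering out asset requests, and grouping page views by visitor before pattern extraction. A toy sketch assuming the common log format; the regex, SKIP list, and preprocess helper are illustrative, not from the paper.

```python
import re
from collections import defaultdict

LOG_RE = re.compile(r'(\S+) .* \[(.*?)\] "GET (\S+)')
SKIP = (".css", ".js", ".png", ".jpg", ".gif", ".ico")

def preprocess(lines):
    sessions = defaultdict(list)  # ip -> visited pages, in order
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # cleanup: drop malformed entries
        ip, _ts, page = m.groups()
        if page.endswith(SKIP):
            continue  # cleanup: drop image/script asset requests
        sessions[ip].append(page)
    return sessions

log = ['1.2.3.4 - - [10/May/2016:10:00:00] "GET /index.html HTTP/1.1" 200',
       '1.2.3.4 - - [10/May/2016:10:00:01] "GET /logo.png HTTP/1.1" 200',
       '1.2.3.4 - - [10/May/2016:10:00:05] "GET /products.html HTTP/1.1" 200']
print(dict(preprocess(log)))  # {'1.2.3.4': ['/index.html', '/products.html']}
```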
Review of Various Image Processing Techniques for Currency Note AuthenticationIJCERT
In cash transactions, the biggest challenge is counterfeit notes. This problem is only expanding due to the technology now available, and many fraud cases have been uncovered. Manual detection of counterfeit notes is time-consuming and inefficient, and hence the need for automated counterfeit detection has arisen. To address this problem, we studied existing systems, implemented in Matlab, that use different image processing methods to detect fake notes.
OBD-II and Oxygen Sensor: Review the I.C Engine - Emissions related PerformanceIJCERT
Increased awareness of the adverse effects of pollutants in automobile exhaust gases has been the main driving force behind increasingly stringent legislation on automobile exhaust emissions in many countries. On-Board Diagnostics (OBD) regulations in the USA for light- and medium-duty vehicles (I.C. engines) were introduced to enforce air quality standards. California and the Federal Government used a driving cycle, referred to as either the California Cycle or the Federal Test Procedure (FTP), to certify 1966 and newer models. With the objective of reducing hydrocarbon (HC) emissions caused by malfunctions of vehicle emission control systems, the California Air Resources Board (CARB) adopted the California Code of Regulations (CCR) requirements known as OBD-II. The diagnosis is based on the oxygen sensor response time, that is, the amount of time needed to complete a switch from a rich to a lean transition or vice versa, which gives the ability to control the engine at the stoichiometric air-fuel ratio. Typical lean and rich mean voltages are 300 and 600 millivolts, respectively. For the post-catalyst O2 sensor in particular, voltage level and heater system checks are performed, and the failure thresholds for its diagnostics must not be set beyond the failure limit at which the catalyst diagnostic is affected.
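From sampled sensor voltages, a rich-to-lean switch time can be estimated as the interval between the last sample at or above the rich level and the first subsequent sample at or below the lean level. A minimal sketch using the 300/600 mV levels quoted above; the switch_time_ms helper and the 10 ms sample spacing are assumptions.

```python
LEAN_MV, RICH_MV = 300, 600  # lean/rich mean voltage levels, in mV

def switch_time_ms(samples, dt_ms: float = 10.0):
    """Time from the last sample at/above the rich level to the first
    subsequent sample at/below the lean level (rich-to-lean switch)."""
    start = None
    for i, mv in enumerate(samples):
        if mv >= RICH_MV:
            start = i
        elif start is not None and mv <= LEAN_MV:
            return (i - start) * dt_ms
    return None  # no complete switch observed

voltages = [750, 760, 720, 650, 500, 380, 290, 250]  # mV, sampled 10 ms apart
print(switch_time_ms(voltages))  # 30.0 ms; a slow switch flags a lazy sensor
```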
Cloud Partitioning of Load Balancing Using Round Robin ModelIJCERT
The purpose of load balancing is to improve the performance of a cloud environment through an appropriate distribution strategy. Good load balancing makes cloud computing more stable and efficient. This paper introduces an improved round robin model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. Load balancing is the process of distributing workload among different nodes or processors. The paper also introduces an enhanced approach for public cloud load distribution that uses screening and game theory concepts to increase the performance of the system.
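The partitioned round-robin idea can be sketched as a main controller that routes each job to an idle partition, while each partition hands out its nodes in round-robin order. The Partition class, capacity field, and dispatch function below are illustrative assumptions, not the paper's implementation.

```python
from itertools import cycle

class Partition:
    """One cloud partition: a set of nodes served round-robin."""
    def __init__(self, name, nodes, capacity):
        self.name = name
        self.rr = cycle(nodes)  # round-robin iterator over the nodes
        self.load = 0
        self.capacity = capacity

    def idle(self):
        return self.load < self.capacity

    def assign(self):
        self.load += 1
        return next(self.rr)

def dispatch(partitions):
    """Switch mechanism: route the job to the first idle partition;
    a real system would fall back to another strategy when all are busy."""
    for p in partitions:
        if p.idle():
            return p.name, p.assign()
    raise RuntimeError("all partitions overloaded")

parts = [Partition("P1", ["n1", "n2"], capacity=2),
         Partition("P2", ["n3"], capacity=2)]
for _ in range(3):
    print(dispatch(parts))  # ('P1', 'n1') ('P1', 'n2') ('P2', 'n3')
```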
Consistent Data Release in MANET Using Light Weight Verification Algorithm wi...IJCERT
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: a large natural gas-fired power plant, where waste heat is used to generate steam and electricity, was puzzled that its boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Like other state-of-the-art digital twins, this technology creates a digital replica of the physical system, executed in real time or faster, which can modify hardware controls. However, its advantage stems from distributing computational effort across a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a single system digital twin, which produces a system-level response. By extracting information from each level of the hierarchy, the hardware's power system controls were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems: the hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and the models used were well aligned with the physical twin, as indicated by the low maximum deviations between the developed digital twin hierarchy and the hardware.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, we celebrate 13 years since the group was created, with articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of Digital Transformation in the Water Industry.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner; if your customers wish to see your business online and have instant access to your products or services, an online presence becomes essential.
Online Grocery Store is an e-commerce website that retails various grocery products. The project allows visitors to view the available products and enables registered users to purchase desired products instantly using the Paytm and UPI payment processors (Instant Pay) or to place an order using the Cash on Delivery (Pay Later) option. It also provides easy access for administrators and managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping cart website for consumers and to introduce the technologies used to develop such a website.
This document discusses each of the underlying technologies used to create and implement an e-commerce website.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist's survey of the valley before construction through all the disciplines involved, including fluid dynamics, structural engineering, generation, and mains frequency regulation, to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers