This work addresses the identification of a model in functional form, using curve fitting and a genetic programming technique, that can forecast present and future load requirements. Approximating an unknown function from sample data is an important practical problem. To forecast an unknown function from a finite set of sample data, a function is constructed to fit the sample data points; this process is called curve fitting. There are several methods of curve fitting. Interpolation is a special case of curve fitting in which an exact fit to the existing data points is required. Once a model is generated, its acceptability must be tested. Several measures exist for testing the goodness of a model: sum of absolute differences, mean absolute error, mean absolute percentage error, sum of squares due to error (SSE), mean squared error, and root mean squared error. Minimizing the sum of squared vertical distances of the points from the curve (SSE) is one of the most widely used approaches. Two methods, the curve fitting technique and genetic programming, are presented and compared on the basis of the sum of squares due to error (SSE).
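As a concrete illustration of SSE-based curve fitting (with made-up sample points, not data from this work), a straight line y = a*x + b can be fitted by ordinary least squares and scored by SSE:

```python
# Minimal sketch: fit y = a*x + b to hypothetical sample points by
# ordinary least squares, then score the fit with the sum of squares
# due to error (SSE).

def fit_line(xs, ys):
    """Return (a, b) minimizing SSE for y = a*x + b (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def sse(xs, ys, a, b):
    """Sum of squared vertical distances of the points from the curve."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]            # roughly y = 2x, hypothetical
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2), round(sse(xs, ys, a, b), 3))  # -> 1.95 0.15 0.075
```

A genetic programming approach would search over whole functional forms rather than only the coefficients of a fixed form, but both candidates can be ranked by the same SSE measure.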
This paper reviews load forecasting using a neuro-fuzzy system. It discusses how neural networks and fuzzy logic can be combined in a neuro-fuzzy system to improve load forecasting accuracy. The paper first provides background on load forecasting and different techniques used. It then proposes using a neuro-fuzzy approach where load data is classified with fuzzy sets and a neural network is trained on each classification to forecast loads. Combining the learning ability of neural networks with the symbolic reasoning of fuzzy logic in a neuro-fuzzy system can potentially provide more accurate short-term load forecasts. The paper concludes that neuro-fuzzy systems show advantages over other statistical and AI methods for load forecasting.
Short Term Electrical Load Forecasting by Artificial Neural Network (IJERA Editor)
This paper presents an application of artificial neural networks to short-term time series electrical load forecasting. An adaptive learning algorithm is derived from system stability analysis to ensure the convergence of the training process. Historical data on hourly power load as well as hourly wind power generation are sourced from the European Open Power System Platform. The simulation demonstrates that errors steadily decrease in training with the adaptive learning factor starting from different initial values, while errors fluctuate when constant learning factors of different values are used.
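The paper's stability-derived rule is not reproduced here; the following generic sketch only illustrates the adaptive-versus-constant learning-factor contrast on a toy 1-D error surface:

```python
# Sketch (not the paper's algorithm): gradient descent on the 1-D error
# E(w) = (w - 3)^2, comparing an adaptive learning factor that shrinks
# whenever the error rises against a fixed learning factor.

def train(lr, adaptive, steps=50):
    w, prev_err = 0.0, float("inf")
    for _ in range(steps):
        err = (w - 3.0) ** 2
        if adaptive and err > prev_err:
            lr *= 0.5                 # back off when the error grows
        prev_err = err
        w -= lr * 2.0 * (w - 3.0)     # dE/dw = 2(w - 3)
    return (w - 3.0) ** 2             # final training error

# An aggressive constant factor oscillates and diverges; the adaptive
# factor recovers and converges from the same starting value.
print(train(1.05, adaptive=True) < train(1.05, adaptive=False))  # -> True
```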
Nowadays, optimal location and sizing of distributed generation (DG) units in a power system network are crucial, as they affect power system operation in terms of stability and security. In this paper, a new technique termed Immune Log-Normal Evolutionary Programming (ILNEP) is applied to find the optimal location and size of distributed generation units in a power system network. Voltage stability is considered in solving this problem. The proposed technique has been tested on the IEEE 26-bus Reliability Test System to find the optimal location and size of distributed generation in a transmission network. To study the performance of the ILNEP technique in solving the DG installation problem, the results produced by ILNEP were compared with other meta-heuristic techniques such as evolutionary programming (EP) and the artificial immune system (AIS). It is found that the proposed technique gives a better solution in terms of lower total system loss compared to the other two techniques.
IRJET- Fusion based Brain Tumor Detection (IRJET Journal)
1. The document discusses a method for detecting brain tumors using medical image fusion and support vector machines (SVM).
2. It involves fusing two MRI images using SVM to create a single fused image with more information than the original images. Texture and wavelet features are then extracted from the fused image.
3. The SVM classifier classifies brain tumors as benign or malignant based on the features extracted from the fused image, after training and testing.
The document proposes developing an artificial neural network model using a multi-layer feedforward neural network and backpropagation learning algorithm to more accurately estimate software development effort. The model is trained and tested on the COCOMO dataset using nine different training algorithms. Preliminary results found the neural network model improved estimation accuracy over the COCOMO model, suggesting it could accurately forecast software effort. Key performance metrics like mean squared error and regression analysis were used to evaluate the model.
This document discusses electrical energy management and load forecasting in smart grids using artificial neural networks. It presents a study applying backpropagation neural networks to short-term load forecasting for Sudan's National Electric Company. The neural network model was used to forecast load, with error calculated by comparing forecasted and actual load data. The document also discusses generation dispatch, demand forecasting techniques, and designing a neural network for one-day load forecasting. It evaluates network performance and error for different training data sizes, finding that a ten-day training dataset produced the best results with minimum error. The neural network approach was able to reliably predict the nonlinear relationship between historical data and load.
Optimal design of adaptive power scheduling using modified ant colony optimi... (IJECEIAES)
Because power generation and power consumption are economically non-identical, an artificial neural network (ANN) has been introduced for generating and distributing an economic load scheduling approach. An efficient load scheduling method is suggested in this paper. Normally the power generation system fails due to instability at peak load time. Traditionally, a load shedding process is used in which low-priority loads are disconnected from sources. The proposed method handles this problem by scheduling the load based on the power requirements. In many countries the power systems face energy limitations. An efficient optimization algorithm is used to periodically schedule the load demand and the generation. Ant colony optimization (ACO) based ANN is used for this optimal load scheduling process. The present work analyses the technical, economical, and time-dependent limitations, and also meets the demanded load with minimum cost of energy. To train the ANN, the backpropagation (BP) technique is used. A hybrid training process is described in this work. Global optimization algorithms are used to provide backpropagation with good initial connection weights.
Application of genetic algorithm and neuro fuzzy control techniques for auto... (IAEME Publication)
This document discusses the application of genetic algorithms, fuzzy logic, and a hybrid neuro-fuzzy approach for automatic generation control of interconnected power systems. It summarizes previous work applying genetic algorithms and fuzzy logic that have limitations. The major contribution is a proposed hybrid neuro-fuzzy control approach that combines neural networks and fuzzy logic to address limitations in conventional approaches. This hybrid approach can effectively handle system nonlinearities and has faster processing speed than conventional controllers.
A novel predictive optimization scheme for energy-efficient reliable operatio... (IJECEIAES)
Wireless sensor networks (WSN) have been studied for more than a decade, resulting in the evolution of significant applications for sensing physical information from human-inaccessible areas. It has also been observed from existing systems that the energy attribute is the root cause of the majority of problems associated with WSN, which also gives rise to various operational reliability issues. Therefore, the prime goal of the proposed study is to present a novel predictive optimization approach to data fusion in order to jointly address the problems of energy efficiency and reliable operation of sensor nodes in WSN. An analytical research approach is carried out to ensure that a time-based synchronization scheme contributes an evolutionary approach to significant energy optimization. A simulation-based benchmarking analysis finds that the proposed system offers good energy-efficiency performance in comparison to existing approaches.
IRJET- Face Recognition of Criminals for Security using Principal Component A... (IRJET Journal)
This document presents a face recognition system using principal component analysis to identify criminals at airports. The system is trained on images of known criminals collected from law enforcement agencies. It uses PCA for dimensionality reduction to generate eigenfaces from the training images. During testing, it generates an eigenface from the input image and calculates the Euclidean distance between this eigenface and the eigenfaces of the training images. It identifies the criminal as the one corresponding to the training image with the minimum distance, alerting authorities. The document outlines the methodology, including preprocessing steps like subtracting the mean face, and reviews prior work applying PCA and other algorithms to face recognition.
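The minimum-distance matching step described above can be sketched as follows (the PCA projection itself is omitted; only mean subtraction and Euclidean nearest-neighbor matching are shown, and the tiny 4-pixel "images" and identity names are made up):

```python
# Illustrative sketch of the matching step only: subtract the mean face
# from each vectorized image, then identify the probe as the gallery
# identity at minimum Euclidean distance.
import math

gallery = {"suspect_a": [10, 20, 30, 40],   # hypothetical vectorized faces
           "suspect_b": [40, 30, 20, 10]}

def mean_face(faces):
    n = len(faces)
    return [sum(col) / n for col in zip(*faces)]

def identify(probe, gallery):
    mu = mean_face(list(gallery.values()))
    centered = {k: [p - m for p, m in zip(v, mu)] for k, v in gallery.items()}
    q = [p - m for p, m in zip(probe, mu)]
    return min(centered, key=lambda k: math.dist(q, centered[k]))

print(identify([12, 19, 31, 38], gallery))  # -> suspect_a
```

In the full system, distances would be computed between PCA projections (eigenface coefficients) rather than raw centered pixels, which greatly reduces dimensionality.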
A novel predictive model for capturing threats for facilitating effective soc... (IJECEIAES)
Social distancing is one of the simplest and most effective shields for every individual to control the spread of the virus in the present scenario of the pandemic coronavirus disease (COVID-19). However, the existing application of social distancing is a basic model, and it is also characterized by various pitfalls in dynamically and accurately monitoring infected individuals. A review of the existing literature shows that there have been various dedicated research attempts toward social distancing using available technologies; however, there is further scope for improvement. This paper introduces a novel framework capable of computing the level of threat with a much higher degree of accuracy, using distance and duration of stay as elementary parameters. Finally, the model can successfully classify the level of threat using deep learning. The study outcome shows that the proposed system offers better predictive performance in contrast to other approaches.
IRJET - Symmetric Image Registration based on Intensity and Spatial Informati... (IRJET Journal)
This document presents a proposed system for symmetric image registration based on intensity and spatial information using a technique called the Coloured Simple Algebraic Algorithm (CSAA). The system first preprocesses color images, extracts features, then classifies images as symmetric or asymmetric using a neural network. It is shown to provide accurate and robust registration of medical and biomedical images. The system is implemented and evaluated on sample images, demonstrating it can successfully identify symmetric versus asymmetric images. The proposed approach aims to improve on existing techniques for intensity-based image registration tasks.
A multi-layer-artificial-neural-network-architecture-design-for-load-forecast... (Cemal Ardil)
The document discusses a proposed artificial neural network architecture for short-term load forecasting in power systems. It begins with background on artificial neural networks and load forecasting. It then describes a multilayer neural network model trained using a modified backpropagation algorithm to forecast power system load 24 hours in advance based on historical load data. The results showed the neural network model could accurately forecast daily load patterns.
Performance analysis of binary and multiclass models using azure machine lear... (IJECEIAES)
Network data is expanding, and at an alarming rate. Besides, the sophisticated attack tools used by hackers lead to a capricious cyber threat landscape. Traditional models proposed in the field of network intrusion detection using machine learning algorithms emphasize improving the attack detection rate and reducing false alarms, but time efficiency is often overlooked. Therefore, in order to address this limitation, a modern solution has been presented using a Machine Learning-as-a-Service platform. The proposed work analyses the performance of eight two-class and three multiclass algorithms using UNSW NB-15, a modern intrusion detection dataset. 82,332 testing samples were considered to evaluate the performance of the algorithms. The proposed two-class decision forest model exhibited 99.2% accuracy and took 6 seconds to learn 175,341 network instances. A multiclass classification task was also undertaken wherein attack types like generic, exploits, shellcode and worms were classified with recall percentages of 99%, 94.49%, 91.79% and 90.9% respectively by the multiclass decision forest model, which also leapfrogged the others in terms of training and execution time.
Short-Term Forecasting of Electricity Consumption in Palestine Using Artifici... (ijaia)
Nowadays, planning the process of electricity consumption demand is one of the key success factors for the development of countries. Due to the importance of electricity, countries have paid great attention to the prediction of electricity consumption. Electricity consumption prediction is a major problem for the power sector; an efficient prediction will help electrical companies to take the right decisions and to optimize their supply strategies. In this paper, we propose a model that predicts future electricity consumption from previous consumption. This model enables companies and authorities to know future information about electricity consumption, so they can organize their distribution and make suitable plans to maintain stability in the delivery and distribution of electricity. We aim to create a model that can study previous electricity consumption patterns and use this data to predict future electricity consumption. The system analyzes the collected electricity consumption data of previous years, then uses the mean value for each day and Multilayer Feed-Forward with Backpropagation Neural Networks (MFFNNBP) as a tool to predict future electricity consumption in Palestine. The data used in this paper consist of monthly and yearly collections. Finally, the proposed model conducts a systematic process with the aim of determining future electricity consumption in Palestine. The proposed application and the results in this paper are developed in order to contribute to the improvement of current energy planning tools in Palestine. The experimental results show that the model achieves good prediction results, with low mean square error (MSE).
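The abstract evaluates forecasts by mean square error (MSE); as a quick reference, this is a minimal sketch of that metric on made-up daily consumption values:

```python
def mse(actual, predicted):
    """Mean squared error between an actual and a forecast series."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual    = [410.0, 425.0, 433.0]   # hypothetical daily means (MWh)
predicted = [405.0, 430.0, 431.0]
print(mse(actual, predicted))       # -> 18.0
```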
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO... (ijcsa)
The work is about using the simulated annealing algorithm for effort estimation model parameter optimization, which can reduce the difference between the actual and estimated effort used in model development. The model has been tested using an OOP dataset obtained from NASA for research purposes. The dataset-based model equation parameters have been found, consisting of two independent variables, viz. lines of code (LOC) along with one more attribute, and a dependent variable related to software development effort (DE). The results have been compared with the author's earlier work on artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), and it has been observed that the developed SA-based model provides better estimates of software development effort than ANN and ANFIS.
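A hedged sketch of the idea (not the paper's implementation): simulated annealing can tune the parameters (a, b) of an effort model effort = a * LOC**b so as to shrink the gap between actual and estimated effort. The (LOC, effort) pairs below are fabricated for illustration.

```python
import math, random

data = [(10, 6.3), (25, 14.8), (60, 33.5), (120, 63.0)]  # hypothetical (KLOC, person-months)

def cost(params):
    """Sum of squared differences between actual and estimated effort."""
    a, b = params
    return sum((e - a * loc ** b) ** 2 for loc, e in data)

def anneal(start, temp=1.0, cooling=0.995, steps=4000, seed=1):
    rng = random.Random(seed)
    cur, cur_cost = start, cost(start)
    for _ in range(steps):
        cand = [p + rng.gauss(0, 0.05) for p in cur]   # small random move
        new_cost = cost(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if new_cost < cur_cost or \
           rng.random() < math.exp(-(new_cost - cur_cost) / temp):
            cur, cur_cost = cand, new_cost
        temp *= cooling                                # cool the schedule
    return cur, cur_cost

params, final = anneal([1.0, 1.0])
print(final < cost([1.0, 1.0]))   # -> True: annealing reduced the error
```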
A REVIEW ON OPTIMIZATION OF LEAST SQUARES SUPPORT VECTOR MACHINE FOR TIME SER... (ijaia)
The support vector machine has emerged as an active study area in the machine learning community and is extensively used in various fields, including prediction and pattern recognition. The least squares support vector machine (LSSVM), a variant of the support vector machine, offers a better solution strategy. In order to utilize the LSSVM's capability in data mining tasks such as prediction, there is a need to optimize its hyperparameters. This paper presents a review of techniques used to optimize the parameters, based on two main classes: evolutionary computation and cross-validation.
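The cross-validation class of tuning techniques can be sketched as follows (the model here is a toy shrinkage estimator, not an LSSVM, and the data and grid are made up): each candidate hyperparameter value is scored by its k-fold validation error, and the best-scoring value is kept.

```python
def kfold_score(data, gamma, k=3):
    """Total validation error of hyperparameter gamma under k-fold CV."""
    folds = [data[i::k] for i in range(k)]
    err = 0.0
    for i in range(k):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        # toy "model": shrink the training mean toward zero by gamma
        pred = (sum(train) / len(train)) / (1.0 + gamma)
        err += sum((x - pred) ** 2 for x in folds[i])
    return err

def grid_search(data, grid):
    """Return the grid value with the lowest cross-validation error."""
    return min(grid, key=lambda g: kfold_score(data, g))

data = [2.0, 2.2, 1.9, 2.1, 2.0, 1.8]      # hypothetical observations
print(grid_search(data, [0.0, 0.5, 1.0]))  # -> 0.0 (no shrinkage wins here)
```

For an actual LSSVM, the grid would range over the regularization and kernel parameters, and the inner model fit would solve the LSSVM linear system instead of computing a mean.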
HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in... (TELKOMNIKA JOURNAL)
Mobile cloud computing (MCC) is an emerging technology for the improvement of mobile service quality. MCC resources are dynamically allocated to users who pay for the resources based on their needs. The drawback of this process is that it is prone to failure and demands high energy input. Resource providers mainly focus on resource performance and utilization with more consideration of the constraints of the service level agreement (SLA). Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel algorithm (HSO) that optimizes energy-efficient resource management in the cloud; the proposed method uses the developed cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing the maintenance of all minimum performances. The cost functions cover energy, performance and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm was significantly increased, as observed by optimizing the number of tasks through simulation (power consumption was reduced by 42%). The simulation studies also showed a reduction in the number of required calculations by about 20% through the inclusion of the presented algorithms compared to the traditional static approach. There was also a decrease in node loss, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. Conclusively, an energy-aware optimization model that describes the required system constraints was presented in this study, and techniques to determine the best overall solution were also proposed.
Predict the Average Temperatures of Baghdad City by Used Artificial Neural Ne... (IJERA Editor)
This paper utilizes the artificial neural network (ANN) technique to improve temperature forecast performance for Baghdad city. Our study is based on the feed-forward backpropagation artificial neural network (BPANN) algorithm, trained and tested using real-world daily average temperatures of Baghdad city for the past ten years for the months of January and July. It aims at providing scheduled forecasts for all days of the month to help the meteorologist foresee future weather temperatures accurately and easily. Forecasts by the ANN model have been compared with the actual results and the realistic output (with IMOS). The results have been compared to the practical temperature prediction results and show that the BPANN forecasts are reasonably accurate, so the method can be considered a good one for temperature prediction.
DYNAMIC NETWORK ANOMALY INTRUSION DETECTION USING MODIFIED SOM (cscpconf)
This document presents a modified Self-Organizing Map (SOM) algorithm for network anomaly intrusion detection. The proposed algorithm allows the neural network to grow dynamically based on a distance threshold, rather than having a fixed architecture. It also uses connection strength to identify neighborhood nodes for weight vector updating. The algorithm was tested on standard intrusion detection datasets and achieved a detection rate of 98% and a false alarm rate of 2%, outperforming a basic SOM approach. The modified SOM addresses limitations of fixed network architecture and random weight initialization in the standard SOM method.
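The paper's exact modified SOM is not reproduced here; this illustrative sketch shows only the grow-by-threshold idea, with made-up 2-D traffic features: a map starts empty, adds a node whenever the nearest existing node is farther than a distance threshold, and otherwise pulls the winning node toward the input.

```python
import math

def train_som(samples, threshold, lr=0.5):
    """Grow a node set dynamically instead of fixing the architecture."""
    nodes = []
    for x in samples:
        if not nodes:
            nodes.append(list(x))
            continue
        winner = min(nodes, key=lambda n: math.dist(n, x))
        if math.dist(winner, x) > threshold:
            nodes.append(list(x))            # grow: new node for a novel region
        else:
            for i in range(len(winner)):     # update: move winner toward x
                winner[i] += lr * (x[i] - winner[i])
    return nodes

normal = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]  # hypothetical feature vectors
attack = [(5.0, 5.0)]                            # a far-away anomaly
nodes = train_som(normal + attack, threshold=1.0)
print(len(nodes))   # -> 2: one node for normal traffic, one for the outlier
```

At detection time, an input far from every learned "normal" node would be flagged as anomalous.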
Survey on deep learning applied to predictive maintenance (IJECEIAES)
Prognosis health monitoring (PHM) plays an increasingly important role in the management of machines and manufactured products in today’s industry, and deep learning plays an important part by establishing the optimal predictive maintenance policy. However, traditional learning methods such as unsupervised and supervised learning with standard architectures face numerous problems when exploiting existing data. Therefore, in this essay, we review the significant improvements in deep learning made by researchers over the last 3 years in solving these difficulties. We note that researchers are striving to achieve optimal performance in estimating the remaining useful life (RUL) of machine health by optimizing each step from data to predictive diagnostics. Specifically, we outline the challenges at each level with the type of improvement that has been made, and we feel that this is an opportunity to try to select a state-of-the-art architecture that incorporates these changes so each researcher can compare with his or her model. In addition, post-RUL reasoning and the use of distributed computing with cloud technology is presented, which will potentially improve the classification accuracy in maintenance activities. Deep learning will undoubtedly prove to have a major impact in upgrading companies at the lowest cost in the new industrial revolution, Industry 4.0.
IRJET- Proposed System for Animal Recognition using Image Processing (IRJET Journal)
The document proposes an animal recognition system using image processing. It would use PCA algorithms and eigenfaces methods to identify animals from images captured by existing cameras in nature reserves. This would allow authorities to be alerted of the presence of dangerous animals so people in the reserves can live without fear of attack. The system would analyze images to extract features, calculate covariances and eigenvectors to identify animals based on training data from five classes of animals. If implemented, it could automate animal detection compared to relying on human monitoring of video feeds.
IRJET- Image Reconstruction Algorithm for Electrical Impedance Tomographic Sy... (IRJET Journal)
This document summarizes research on using electrical impedance tomography (EIT) to reconstruct images showing conductivity distributions within a volume. It describes using MATLAB to simulate EIT data and evaluate different reconstruction algorithms, including back projection, filtered back projection, Gauss-Newton, Tikhonov regularization, NOSER, total variation, and dynamic regularization. Simulation results show total variation and dynamic regularization produce clearer reconstructed images compared to other methods. The accuracy of EIT imaging depends on hardware, electrodes, conductivity distributions, and the reconstruction algorithm.
IRJET- Reducing electricity usage in Internet using transactional data (IRJET Journal)
This document summarizes a research paper that proposes a method to reduce electricity usage and costs for internet services by optimizing how transactional data is mapped across geographically distributed data centers. It formulates the problem as a stochastic programming problem to maximize energy utilization within a cost budget. An efficient online algorithm is developed using Lyapunov optimization to map user requests to data centers based on changing factors like electricity prices and workload, with the goal of significantly reducing costs compared to baseline strategies. The system architecture involves front-end servers collecting user requests and dispatching them to appropriate back-end data centers for processing.
WIND SPEED & POWER FORECASTING USING ARTIFICIAL NEURAL NETWORK (NARX) FOR NEW... (Journal For Research)
Continuously depleting conventional fuel reserves and their impact on increasing global warming concerns have diverted the world's attention toward non-conventional energy sources. Among different non-conventional energy sources, wind energy can be considered one of the cleanest, with minimal pollution or harmful emissions, and it has the potential to decrease reliance on conventional energy sources. Today wind energy can play a vital role in meeting our energy demands; however, it faces various issues such as its intermittent nature and frequency instability. To reduce such issues, knowledge of future weather conditions and wind speed trends is required. This work mainly describes the implementation of a NARX artificial neural network for wind speed and power forecasting with the help of historical data available from wind farms.
A study of a modified histogram based fast enhancement algorithm (mhbfe) (sipij)
Image enhancement is one of the most important issues in low-level image processing. Its goal is to improve the quality of an image so that the enhanced image is better than the original. Conventional histogram equalization (HE) is one of the most widely used algorithms for contrast enhancement of medical images, owing to its simplicity and effectiveness. However, it produces an unnatural look and visual artefacts because it tends to change the brightness of an image. The Histogram Based Fast Enhancement Algorithm (HBFE) enhances CT head images, reducing the washed-out effect caused by conventional histogram equalization with less complexity; it uses the full range of gray levels to enhance soft tissues while ignoring other image details. We present a modification of this algorithm that is valid for most CT image types while keeping the same degree of simplicity. Experimental results show that the Modified Histogram Based Fast Enhancement Algorithm (MHBFE) improves results in terms of PSNR, AMBE, and entropy. We also use statistical analysis to confirm that the improvement of the proposed modification generalizes: ANalysis Of VAriance (ANOVA) is first used to test whether all the results have the same average, and the improvement due to the modification is then found to be significant.
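As background to the histogram-based methods discussed above, classical histogram equalization can be sketched in a few lines of Python. This is a generic illustration of HE on a toy gray-level image, not the HBFE or MHBFE algorithm itself:

```python
def equalize(image, levels=256):
    """Classical histogram equalization: map each gray level through the
    normalised cumulative histogram of the image."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    # Look-up table stretching the occupied levels over the full range.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in image]

# A low-contrast 2x4 image confined to levels 100-103 spreads to the full range.
img = [[100, 101, 102, 103], [100, 100, 103, 103]]
print(equalize(img))
```

This stretching of the occupied gray levels over the whole range is exactly what causes the brightness shift and washed-out look that HBFE-style methods try to avoid.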
Analytical framework for optimized feature extraction for upgrading occupancy...IJECEIAES
The adoption of occupancy sensors has become inevitable in commercial and non-commercial security devices, owing to their usefulness in energy management. The use of conventional sensors is beset with operational problems, which Doppler radar mitigates better. However, the use of Doppler radar for occupancy sensing in existing systems is still in its infancy, and the monitoring performance achieved with Doppler radar has yet to be improved. This paper therefore introduces a simplified framework for enriching event-sensing performance by efficiently selecting a minimal set of robust attributes from Doppler radar data. An analytical methodology is adopted to show that different machine learning approaches can further improve the classification accuracy for the features extracted in the proposed occupancy-sensing system.
Classification of Iris Data using Kernel Radial Basis Probabilistic Neural Ne...Scientific Review
The Radial Basis Probabilistic Neural Network (RBPNN) has broad generalization capability and has been successfully applied in multiple fields. In this paper, the Euclidean distance of each data point in the RBPNN is replaced by its kernel-induced distance instead of the conventional sum-of-squares distance. The kernel function generalizes the distance metric, measuring the distance between two data points as if they were mapped into a high-dimensional space. Comparing the four classification models as proposed — Kernel RBPNN, Radial Basis Function networks, RBPNN, and Back-Propagation networks — results showed that classification of the Iris data with Kernel RBPNN displays outstanding performance.
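The kernel-induced distance mentioned above can be computed without ever forming the high-dimensional mapping, via d(x, y)² = K(x, x) − 2K(x, y) + K(y, y). A minimal sketch in Python, assuming a Gaussian (RBF) kernel purely for illustration:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two equal-length vectors."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

def kernel_distance(x, y, kernel=rbf_kernel):
    """Distance between x and y in the feature space induced by the kernel:
    d(x, y)^2 = K(x,x) - 2*K(x,y) + K(y,y)."""
    return math.sqrt(kernel(x, x) - 2 * kernel(x, y) + kernel(y, y))

# Zero for identical points; grows with separation in feature space.
print(kernel_distance([1.0, 2.0], [1.0, 2.0]))
print(kernel_distance([1.0, 2.0], [3.0, 4.0]))
```

Swapping `rbf_kernel` for another positive-definite kernel changes the implicit feature space while leaving the distance computation unchanged.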
Mixed Language Based Offline Handwritten Character Recognition Using First St...CSCJournals
An Artificial Neural Network is an artificial representation of the human brain that tries to simulate its learning process. To train a network and measure how well it performs, an objective function must be defined; a commonly used performance criterion is the sum-of-squares error function. Full end-to-end text recognition in natural images is a challenging problem that has recently received much attention in computer vision and machine learning. Traditional systems in this area have relied on elaborate models that incorporate carefully hand-engineered features or large amounts of prior knowledge. Language identification and interpretation of handwritten characters is a challenge faced across industries: for example, interpreting data from cheques in banks, and identifying and translating ancient scripts in manuscripts, palm leaves, and stone carvings. Handwritten character recognition using soft-computing methods such as neural networks has long been an active area of research, and many theories and algorithms have been developed for it.
A novel predictive optimization scheme for energy-efficient reliable operatio...IJECEIAES
Wireless Sensor Networks (WSNs) have been studied for more than a decade, resulting in significant applications for sensing physical information from areas inaccessible to humans. Existing systems show that the energy attribute is the root cause of the majority of problems associated with WSNs, which also gives rise to various operational-reliability issues. The prime goal of the proposed study is therefore to present a novel predictive optimization approach to data fusion that jointly addresses energy efficiency and reliable operation of sensor nodes in a WSN. An analytical research approach is used to ensure that a time-based synchronization scheme offers an evolutionary path toward significant energy optimization. Simulation-based benchmarking shows that the proposed system offers good energy efficiency in comparison with existing approaches.
IRJET- Face Recognition of Criminals for Security using Principal Component A...IRJET Journal
This document presents a face recognition system using principal component analysis to identify criminals at airports. The system is trained on images of known criminals collected from law enforcement agencies. It uses PCA for dimensionality reduction to generate eigenfaces from the training images. During testing, it generates an eigenface from the input image and calculates the Euclidean distance between this eigenface and the eigenfaces of the training images. It identifies the criminal as the one corresponding to the training image with the minimum distance, alerting authorities. The document outlines the methodology, including preprocessing steps like subtracting the mean face, and reviews prior work applying PCA and other algorithms to face recognition.
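The matching step described above — picking the enrolled identity whose eigenface coefficients have the minimum Euclidean distance to the probe's — can be sketched as follows. The gallery vectors, names, and threshold here are hypothetical illustrations, not values from the paper:

```python
import math

# Hypothetical eigenface coefficient vectors for enrolled (training) images.
gallery = {
    "suspect_A": [0.9, -0.2, 0.4],
    "suspect_B": [-0.5, 0.7, 0.1],
    "suspect_C": [0.3, 0.3, -0.8],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, gallery, threshold=1.0):
    """Return the enrolled identity with the minimum Euclidean distance
    to the probe's coefficients, or None if no match is close enough."""
    name, dist = min(((n, euclidean(probe, v)) for n, v in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

print(identify([0.85, -0.1, 0.35], gallery))  # nearest enrolled identity
```

The threshold turns the nearest-neighbour search into an open-set decision: a probe far from every enrolled face is rejected instead of being forced onto the closest identity.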
A novel predictive model for capturing threats for facilitating effective soc...IJECEIAES
Social distancing is one of the simplest and most effective shields for individuals to control the spread of the virus in the present coronavirus disease (COVID-19) pandemic. However, existing social distancing applications are basic models, characterized by various pitfalls in accurately and dynamically monitoring infected individuals. A review of the literature shows various dedicated research attempts toward social distancing using available technologies, with further scope for improvement. This paper introduces a novel framework capable of computing the level of threat with a much higher degree of accuracy, using distance and duration of stay as elementary parameters. The model can then successfully classify threat levels using deep learning. The study outcome shows that the proposed system offers better predictive performance than other approaches.
IRJET - Symmetric Image Registration based on Intensity and Spatial Informati...IRJET Journal
This document presents a proposed system for symmetric image registration based on intensity and spatial information using a technique called the Coloured Simple Algebraic Algorithm (CSAA). The system first preprocesses color images, extracts features, then classifies images as symmetric or asymmetric using a neural network. It is shown to provide accurate and robust registration of medical and biomedical images. The system is implemented and evaluated on sample images, demonstrating it can successfully identify symmetric versus asymmetric images. The proposed approach aims to improve on existing techniques for intensity-based image registration tasks.
A multi-layer-artificial-neural-network-architecture-design-for-load-forecast...Cemal Ardil
The document discusses a proposed artificial neural network architecture for short-term load forecasting in power systems. It begins with background on artificial neural networks and load forecasting. It then describes a multilayer neural network model trained using a modified backpropagation algorithm to forecast power system load 24 hours in advance based on historical load data. The results showed the neural network model could accurately forecast daily load patterns.
Performance analysis of binary and multiclass models using azure machine lear...IJECEIAES
Network data is expanding, and at an alarming rate. Moreover, the sophisticated attack tools used by hackers lead to a capricious cyber-threat landscape. Traditional machine learning models for network intrusion detection emphasize improving the attack detection rate and reducing false alarms, but time efficiency is often overlooked. To address this limitation, a modern solution is presented using a Machine-Learning-as-a-Service platform. The proposed work analyses the performance of eight two-class and three multiclass algorithms on UNSW NB-15, a modern intrusion detection dataset. 82,332 testing samples were used to evaluate the algorithms. The proposed two-class decision forest model exhibited 99.2% accuracy and took 6 seconds to learn 175,341 network instances. A multiclass classification task was also undertaken, in which attack types such as generic, exploits, shellcode, and worms were classified with recall of 99%, 94.49%, 91.79%, and 90.9% respectively by the multiclass decision forest model, which also leapfrogged the others in training and execution time.
Short-Term Forecasting of Electricity Consumption in Palestine Using Artifici...ijaia
Nowadays, planning for electricity consumption demand is one of the key success factors in the development of countries. Because of the importance of electricity, countries have paid great attention to predicting its consumption. Electricity consumption prediction is a major problem for the power sector: an efficient prediction helps electrical companies take the right decisions and optimize their supply strategies. In this paper, we propose a model that predicts future electricity consumption from previous consumption. The model gives companies and authorities information about future electricity consumption so they can organize distribution and make suitable plans to keep the delivery and distribution of electricity stable. We aim to create a model that studies previous electricity consumption patterns and uses this data to predict future consumption. The system analyzes collected electricity consumption data from previous years, then uses the mean value for each day and a Multilayer Feed-Forward Neural Network with Backpropagation (MFFNNBP) as the tool to predict future electricity consumption in Palestine. The data used in this paper consist of monthly and yearly collections. The proposed model follows a systematic process for determining future electricity consumption in Palestine, and the application and results presented here are intended to contribute to improving the current energy planning tools there. Experimental results show that the model predicts well, with a low Mean Square Error (MSE).
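The core of such a multilayer feed-forward network with backpropagation can be sketched compactly. The following is a generic one-hidden-layer illustration trained on a toy curve, not the MFFNNBP model or the Palestinian consumption data:

```python
import math, random

random.seed(3)

def train_mlp(data, hidden=4, epochs=5000, lr=0.3):
    """One-hidden-layer feed-forward network trained with plain
    backpropagation on scalar (input, target) pairs scaled to [0, 1]."""
    w1 = [random.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in data:
            h = [sig(w1[i] * x + b1[i]) for i in range(hidden)]  # hidden layer
            y = sum(w2[i] * h[i] for i in range(hidden)) + b2    # linear output
            err = y - t                                          # dE/dy for E = (y-t)^2 / 2
            for i in range(hidden):
                grad_h = err * w2[i] * h[i] * (1 - h[i])         # backprop through sigmoid
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * err
    return lambda x: sum(w2[i] * sig(w1[i] * x + b1[i]) for i in range(hidden)) + b2

# Toy stand-in for a normalised consumption curve: y = x^2 on [0, 1].
data = [(i / 10, (i / 10) ** 2) for i in range(11)]
predict = train_mlp(data)
```

In a real forecasting setting the scalar input would be replaced by a vector of lagged daily means, but the forward pass and weight updates keep the same shape.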
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO...ijcsa
This work uses the Simulated Annealing algorithm to optimize the parameters of an effort estimation model, reducing the difference between actual and estimated effort in model development. The model has been tested on an object-oriented dataset obtained from NASA for research purposes. The dataset-based model equation has two independent variables, namely Lines of Code (LOC) and one further attribute, with software development effort (DE) as the dependent variable. The results have been compared with the author's earlier work on Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), and the developed SA-based model is observed to provide better estimates of software development effort than ANN and ANFIS.
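The simulated-annealing parameter search described above can be sketched generically: perturb the model parameters, and accept worse candidates with probability exp(−Δ/T) as the temperature T cools. The effort model and dataset below are hypothetical stand-ins, not the NASA data used by the authors:

```python
import math, random

random.seed(42)

# Hypothetical (LOC, actual effort) pairs -- illustrative only.
data = [(10, 24), (20, 55), (40, 130), (80, 300)]

def sse(params):
    """Sum of squared errors of the effort model: effort = a * LOC**b."""
    a, b = params
    return sum((eff - a * loc ** b) ** 2 for loc, eff in data)

def anneal(start, steps=20000, t0=1000.0, cooling=0.999):
    """Simulated annealing: accept worse neighbours with probability
    exp(-delta/T), lowering T geometrically each step."""
    current, best = list(start), list(start)
    t = t0
    for _ in range(steps):
        candidate = [p + random.uniform(-0.05, 0.05) for p in current]
        delta = sse(candidate) - sse(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if sse(current) < sse(best):
                best = list(current)
        t *= cooling
    return best

best = anneal([1.0, 1.0])
print(best, sse(best))
```

The high-temperature phase lets the search escape local minima of the SSE surface; the cooled phase behaves like greedy hill descent on the remaining error.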
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
A REVIEW ON OPTIMIZATION OF LEAST SQUARES SUPPORT VECTOR MACHINE FOR TIME SER...ijaia
The Support Vector Machine has emerged as an active topic in the machine learning community and is extensively used in various fields, including prediction and pattern recognition. The Least Squares Support Vector Machine (LSSVM), a variant of the Support Vector Machine, offers a better solution strategy. To use the LSSVM's capability in data mining tasks such as prediction, its hyperparameters need to be optimized. This paper reviews the techniques used to optimize these parameters, grouped into two main classes: Evolutionary Computation and Cross Validation.
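The cross-validation class of techniques works by holding out each fold in turn and averaging a validation score; a hyperparameter value is then chosen by running this loop once per candidate. A minimal sketch of the mechanics, independent of any particular model:

```python
def kfold_indices(n, k):
    """Split range(n) into k nearly equal contiguous folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_validate(data, k, train_and_score):
    """Average the validation score over k folds; each fold serves once
    as the validation set while the rest is training data."""
    scores = []
    for fold in kfold_indices(len(data), k):
        val = [data[i] for i in fold]
        train = [data[i] for i in range(len(data)) if i not in fold]
        scores.append(train_and_score(train, val))
    return sum(scores) / k
```

Tuning an LSSVM would pass a `train_and_score` that fits the model on `train` with the candidate hyperparameters and returns its error on `val`, keeping the candidate with the lowest average.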
HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in...TELKOMNIKA JOURNAL
Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users, who pay for the resources based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization while considering the constraints of the service level agreement (SLA). Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel Hybrid Swarm Optimization (HSO) algorithm that optimizes energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance, and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm increased significantly, as observed by optimizing the number of tasks through simulation (power consumption was reduced by 42%). The simulation studies also showed a reduction of about 20% in the number of required calculations with the presented algorithms compared to the traditional static approach. There was also a decrease in node loss, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, this study presents an energy-aware optimization model that describes the required system constraints, together with a proposal for techniques to determine the best overall solution.
Predict the Average Temperatures of Baghdad City by Used Artificial Neural Ne...IJERA Editor
This paper uses the artificial neural network (ANN) technique to improve temperature forecasting for the city of Baghdad. Our study is based on a Feed-Forward Backpropagation Artificial Neural Network (BPANN) trained and tested on real-world daily average temperatures of Baghdad for the past ten years, for the months of January and July. It aims to provide scheduled forecasts for all days of the month to help meteorologists foresee future temperatures accurately and easily. Forecasts from the ANN model were compared with the actual results and the realistic output (from IMOS). Compared with the practical temperature prediction results, the BPANN forecasts are reasonably accurate, and the approach can be considered a good method for temperature prediction.
DYNAMIC NETWORK ANOMALY INTRUSION DETECTION USING MODIFIED SOMcscpconf
This document presents a modified Self-Organizing Map (SOM) algorithm for network anomaly intrusion detection. The proposed algorithm allows the neural network to grow dynamically based on a distance threshold, rather than having a fixed architecture. It also uses connection strength to identify neighborhood nodes for weight vector updating. The algorithm was tested on standard intrusion detection datasets and achieved a detection rate of 98% and a false alarm rate of 2%, outperforming a basic SOM approach. The modified SOM addresses limitations of fixed network architecture and random weight initialization in the standard SOM method.
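The classic SOM update that the modified algorithm builds on — move the best-matching unit and its neighbourhood toward each input, with decaying rate and radius — can be sketched as follows. This shows only the standard fixed-size map, not the dynamic growth or connection-strength mechanisms proposed in the paper:

```python
def train_som(data, n_nodes=4, dim=2, epochs=50, lr=0.5):
    """Minimal 1-D self-organizing map: the best-matching unit (BMU)
    and its neighbours are pulled toward each input vector, with the
    learning rate and neighbourhood radius shrinking over time."""
    # Deterministic initialisation: nodes spread along the diagonal.
    nodes = [[i / (n_nodes - 1)] * dim for i in range(n_nodes)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(1, round(n_nodes / 2 * (1 - epoch / epochs)))
        for x in data:
            # BMU = node with the smallest squared distance to the input.
            bmu = min(range(n_nodes),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
            for i in range(n_nodes):
                if abs(i - bmu) <= radius:                  # neighbourhood update
                    nodes[i] = [w + rate * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

# Two separated clusters; the end nodes settle near one cluster each.
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
nodes = train_som(data)
```

The modification in the paper replaces the fixed `n_nodes` with growth triggered by a distance threshold, and replaces the index-based neighbourhood with one derived from connection strength.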
Survey on deep learning applied to predictive maintenance IJECEIAES
Prognosis health monitoring (PHM) plays an increasingly important role in the management of machines and manufactured products in today's industry, and deep learning plays an important part by establishing the optimal predictive maintenance policy. However, traditional learning methods such as unsupervised and supervised learning with standard architectures face numerous problems when exploiting existing data. Therefore, in this review, we survey the significant improvements in deep learning made by researchers over the last 3 years in solving these difficulties. We note that researchers are striving to achieve optimal performance in estimating the remaining useful life (RUL) of machine health by optimizing each step from data to predictive diagnostics. Specifically, we outline the challenges at each level together with the type of improvement made, and we see this as an opportunity to select a state-of-the-art architecture that incorporates these changes, so that researchers can compare it with their own models. In addition, post-RUL reasoning and the use of distributed computing with cloud technology are presented, which can potentially improve classification accuracy in maintenance activities. Deep learning will undoubtedly have a major impact in upgrading companies at the lowest cost in the new industrial revolution, Industry 4.0.
IRJET- Proposed System for Animal Recognition using Image ProcessingIRJET Journal
The document proposes an animal recognition system using image processing. It would use PCA algorithms and eigenfaces methods to identify animals from images captured by existing cameras in nature reserves. This would allow authorities to be alerted of the presence of dangerous animals so people in the reserves can live without fear of attack. The system would analyze images to extract features, calculate covariances and eigenvectors to identify animals based on training data from five classes of animals. If implemented, it could automate animal detection compared to relying on human monitoring of video feeds.
IRJET- Image Reconstruction Algorithm for Electrical Impedance Tomographic Sy...IRJET Journal
This document summarizes research on using electrical impedance tomography (EIT) to reconstruct images showing conductivity distributions within a volume. It describes using MATLAB to simulate EIT data and evaluate different reconstruction algorithms, including back projection, filtered back projection, Gauss-Newton, Tikhonov regularization, NOSER, total variation, and dynamic regularization. Simulation results show total variation and dynamic regularization produce clearer reconstructed images compared to other methods. The accuracy of EIT imaging depends on hardware, electrodes, conductivity distributions, and the reconstruction algorithm.
The document discusses cluster analysis, an unsupervised machine learning technique used to group similar cases together. It describes how cluster analysis is used in marketing research for market segmentation, understanding customer behaviors, and identifying new product opportunities. The key steps in cluster analysis involve selecting a distance measure, clustering algorithm, determining the optimal number of clusters, and validating the results.
This document provides an overview of artificial intelligence techniques and their applications in power systems. It discusses expert systems, artificial neural networks, and fuzzy logic systems as the three major AI techniques used. It describes how each technique works and its advantages/disadvantages. The document also gives examples of how these techniques can be applied in transmission lines, power system protection, and other areas like operations, planning, control, and automation of power systems. The conclusion states that while AI shows promise for improving power system efficiency and reliability, more research is still needed to fully realize its benefits.
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
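That description corresponds to Lloyd's algorithm, which alternates an assignment step (nearest mean) and an update step (recompute means); a minimal sketch:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign each point to its nearest mean,
    then recompute each mean from its assigned points."""
    centers = points[:k]  # simple deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.9, 8.1)]
centers, clusters = kmeans(points, k=2)
```

The assignment step is what induces the Voronoi partition of the data space: each center's cell is the set of points closer to it than to any other center. Production implementations use smarter initialisation (e.g. k-means++) and multiple restarts.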
This document lists 51 post graduate theses related to construction management from 1996 to 2006. The theses cover topics such as cost estimation, project scheduling, quality control, value engineering, and expert systems. The degrees earned include M.Sc. and Ph.D. The theses were conducted in Iraq and Libya to research and develop management systems for improving construction project delivery.
Electrical engineering deals with applications of electricity, electronics, and magnetism. It covers subfields like power electronics, control systems, signal processing, and telecommunications. Electrical engineers work in areas like power generation and distribution, electronics and microelectronics design, signal analysis and manipulation, and telecommunications transmission. Electrical engineering education typically requires a four to five year bachelor's degree program. Salaries range from $51k to $80k per year depending on experience and education, with employment expected to grow at an average rate.
A Defect Prediction Model for Software Product based on ANFISIJSRD
Artificial intelligence techniques are increasingly involved in classification and prediction processes such as environmental monitoring, stock market analysis, biomedical diagnosis, and software engineering. However, the challenge of selecting training criteria for the design of artificial-intelligence prediction models has yet to be simplified. This work focuses on developing a defect prediction mechanism using software metric data from KC1. We take a subtractive clustering approach to generate a fuzzy inference system (FIS). The FIS rules are generated at different radii of influence of the input attribute vectors, and the rules are then refined with the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...IJERA Editor
Cost estimating at the schematic design stage, as the basis of project evaluation, engineering design, and cost management, plays an important role in project decisions made under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models built with two different hybrid artificial intelligence approaches: regression analysis with an adaptive neuro-fuzzy inference system (RANFIS), and case-based reasoning with a genetic algorithm (CBR-GA). The models were developed on the same 50 low-cost apartment project datasets in Indonesia. Tested on another five datasets, the models proved to perform very well in terms of accuracy. The CBR-GA model was found to be the best performer, but it has the disadvantage of needing 15 cost drivers, compared with only 4 required by RANFIS for on-par performance.
Short Term Load Forecasting Using Bootstrap Aggregating Based Ensemble Artifi...Kashif Mehmood
Short Term Load Forecasting (STLF), which predicts load from several minutes to a week ahead, plays a vital role in addressing challenges such as optimal generation, economic scheduling, dispatching, and contingency analysis. This paper uses a Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) to perform STLF, but long training times and convergence issues caused by bias, variance, and limited generalization ability prevent this algorithm from accurately predicting future loads. These issues can be mitigated by various Bootstrap Aggregating (Bagging) methods (such as disjoint partitions, small bags, replica small bags, and disjoint bags), which reduce variance, increase the generalization ability of the ANN, and reduce error in its learning process. Disjoint partitioning proves to be the most accurate Bagging method, and combining the outputs of this method by taking their mean improves overall performance. This approach of combining several predictors, known as an Ensemble Artificial Neural Network (EANN), outperforms both the plain ANN and the Bagging method by further increasing generalization ability and STLF accuracy.
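The disjoint-partition Bagging idea above can be sketched in a few lines. This is only a hedged illustration, not the paper's setup: a closed-form straight-line fit stands in for each MLP base learner, and the toy load series is invented for the example.

```python
import random

def fit_line(xs, ys):
    # Closed-form least-squares fit of y = a*x + b on one partition.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def bagged_predict(xs, ys, k, x_new):
    # Disjoint-partition Bagging: k base models, one per disjoint
    # partition, combined by taking the mean of their predictions.
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    parts = [idx[i::k] for i in range(k)]
    preds = []
    for p in parts:
        a, b = fit_line([xs[i] for i in p], [ys[i] for i in p])
        preds.append(a * x_new + b)
    return sum(preds) / k

# Toy hourly "load" data: a linear trend with small deterministic noise.
xs = list(range(24))
ys = [100 + 3 * x + ((x * 7) % 5 - 2) for x in xs]
print(round(bagged_predict(xs, ys, 4, 25.0), 1))
```

Averaging the per-partition models is the variance-reduction step the abstract credits for the ensemble's improved generalization.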
The document summarizes electricity load forecasting techniques for power system planning. It discusses using curve fitting algorithms to forecast electricity load based on analyzing past load data from 2012. Specifically, it proposes using a Fourier series curve fitting model to predict future load based on factors like temperature, humidity, and time of day or year. The document also briefly describes other common load forecasting techniques including multiple regression, exponential smoothing, and neural networks.
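A first-order Fourier-series fit of the kind mentioned can be sketched as follows. Assuming uniform samples over exactly one period, discrete orthogonality reduces the least-squares coefficients to simple averages; the synthetic 24-hour load profile is invented for illustration.

```python
import math

def fit_fourier1(ys):
    # First-order Fourier fit y(t) ~ a0 + a1*cos(w t) + b1*sin(w t),
    # assuming ys are uniform samples over exactly one period.
    n = len(ys)
    w = 2 * math.pi / n
    a0 = sum(ys) / n
    a1 = 2 / n * sum(y * math.cos(w * t) for t, y in enumerate(ys))
    b1 = 2 / n * sum(y * math.sin(w * t) for t, y in enumerate(ys))
    return a0, a1, b1

def predict(coeffs, t, n):
    a0, a1, b1 = coeffs
    w = 2 * math.pi / n
    return a0 + a1 * math.cos(w * t) + b1 * math.sin(w * t)

# Synthetic daily load: a 500 MW base with an 80 MW sinusoidal swing.
n = 24
load = [500 + 80 * math.sin(2 * math.pi * t / n) for t in range(n)]
c = fit_fourier1(load)
print(round(predict(c, 6, n), 1))
```

Because the synthetic data is itself a first harmonic, the fit recovers it exactly; real load data would need more harmonics and exogenous terms (temperature, humidity) as the summary notes.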
A Review on Prediction of Compressive Strength and Slump by Using Different M...IRJET Journal
The document reviews different machine learning techniques for predicting the compressive strength and slump of concrete, including artificial neural networks, genetic algorithms, and hybrid algorithms. It finds that artificial neural networks trained with the Levenberg-Marquardt algorithm can predict compressive strength with over 95% accuracy. For slump prediction, federated learning achieves the best results in terms of correlation coefficient, root mean square error, and mean absolute error. A hybrid approach combining biogeography-based optimization and multilayer perceptron neural networks most accurately predicts slope stability. In general, machine learning methods show potential for effectively predicting concrete properties.
A simplified predictive framework for cost evaluation to fault assessment usi...IJECEIAES
Software engineering is an integral part of any software development scheme and frequently encounters bugs, errors, and faults. Predictive evaluation of software faults mitigates this challenge to a large extent; however, no benchmarked framework has been reported for it yet. Therefore, this paper introduces a computational framework for a cost evaluation method that facilitates better predictive assessment of software faults. Based on lines of code, the proposed scheme adopts a machine-learning approach to perform predictive analysis of faults. The scheme presents an analytical framework of a correlation-based cost model integrated with multiple standard machine learning (ML) models, e.g., linear regression, support vector regression, and artificial neural networks (ANN). These learning models are trained and executed to predict software faults with higher accuracy. The study assesses the outcomes in detail using error-based performance metrics to determine how well each learning model performs and how accurately it learns, and it also examines the factors contributing to the training loss of the neural networks. The validation results demonstrate that, compared to linear regression and support vector regression, the neural network achieves a significantly lower error score for software fault prediction.
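The error-based performance metrics referred to above (e.g., mean absolute error and root mean squared error) are easy to state concretely. The fault counts and the two models' predictions below are hypothetical, purely to show how the comparison works.

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: penalises large residuals more heavily.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

# Hypothetical fault counts and two models' predictions.
actual = [2, 5, 9, 13, 16]
model_a = [3, 5, 8, 12, 17]   # e.g. a regression model's output
model_b = [1, 7, 10, 15, 13]  # e.g. a weaker baseline
print(mae(actual, model_a), mae(actual, model_b))
```

The model with the lower error score on held-out data is preferred, which is exactly the comparison the validation in the paper performs across its learning models.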
A Hierarchical Feature Set optimization for effective code change based Defec...IOSR Journals
This document summarizes research on using support vector machines (SVMs) for software defect prediction. It analyzes 11 datasets from NASA projects containing code metrics and defect information for modules. The researchers preprocessed the data by removing duplicate/inconsistent instances, constant attributes, and balancing the datasets. They used SVMs with 5-fold cross validation to classify modules as defective or non-defective, achieving an average accuracy of 70% across the datasets. The researchers conclude SVMs can effectively predict defects but note earlier studies using the NASA data may have overstated capabilities due to insufficient data preprocessing.
IRJET - A Novel Approach for Software Defect Prediction based on Dimensio...IRJET Journal
This document presents a novel approach for software defect prediction using dimensionality reduction techniques. The proposed approach uses an artificial neural network to extract features from initial change measures, and then trains a classifier on the extracted features. This is compared to other dimensionality reduction techniques like principal component analysis, linear discriminant analysis, and kernel principal component analysis. Five open source datasets from NASA are used to evaluate the different techniques based on accuracy, F1 score, and area under the receiver operating characteristic curve. The results show that the artificial neural network approach outperforms the other dimensionality reduction techniques, and kernel principal component analysis performs best among those techniques. The document also discusses related work on using machine learning for software defect prediction.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large, heterogeneous, and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to appropriate processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. Since grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and an NSGA-II approach is proposed for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA-II) has been estimated with a Matlab implementation. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min algorithms demonstrates the effectiveness of the model.
Predictive Data Mining with Normalized Adaptive Training Method for Neural Ne...IJERDJOURNAL
Predictive data mining is an upcoming and fast-growing field that offers organizations a competitive edge. In recent decades, researchers have developed new techniques and intelligent algorithms for predictive data mining. In this research paper, we propose a novel training algorithm for optimizing neural networks for prediction and use it to develop prediction models. Models developed in the MATLAB Neural Network Toolbox have been tested on insurance datasets taken from a live data warehouse. A comparative study of the proposed algorithm with other popular first- and second-order algorithms is presented to judge the predictive accuracy of the suggested technique, and various graphs illustrate the convergence behaviour of the different algorithms towards the point of minimum error.
Novel Scheme for Minimal Iterative PSO Algorithm for Extending Network Lifeti...IJECEIAES
This document presents a novel scheme for minimizing the number of iterative steps in the particle swarm optimization (PSO) algorithm to extend the lifetime of wireless sensor networks. It first discusses existing literature that uses PSO approaches to address issues like clustering, energy efficiency, and localization in wireless sensor networks. It then identifies problems with existing approaches, such as higher computational complexity due to many iterations of PSO. The proposed solution enhances the conventional PSO algorithm by introducing decision variables and optimizing parameters like inertia weight and learning coefficients to obtain the global best solution in fewer iterations. It aims to minimize the transmission energy of cluster heads using a radio energy model to improve network lifetime. The key contribution is a computationally efficient PSO algorithm that selects effective
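A minimal global-best PSO with the inertia weight and learning coefficients that the proposal tunes might look like the sketch below. The sphere objective merely stands in for the cluster-head transmission-energy model, and all parameter values are illustrative, not those of the paper.

```python
import random

def pso_minimize(f, bounds, n=15, iters=60, w=0.7, c1=1.5, c2=1.5, seed=3):
    # Global-best PSO: inertia weight w, cognitive coefficient c1,
    # social coefficient c2 drive each particle's velocity update.
    rng = random.Random(seed)
    d = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * d for _ in range(n)]
    P = [x[:] for x in X]                   # personal best positions
    Pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: Pf[i])  # initial global best
    G, Gf = P[g][:], Pf[g]
    for _ in range(iters):
        for i in range(n):
            for j in range(d):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (G[j] - X[i][j]))
                X[i][j] += V[i][j]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

# Stand-in for cluster-head transmission energy: a 2-D sphere function.
G, Gf = pso_minimize(lambda v: sum(t * t for t in v), [(-10, 10), (-10, 10)])
print(Gf < 1e-3)
```

Reducing `iters` while tuning `w`, `c1`, and `c2` for faster convergence is, in spirit, the iteration-minimisation the proposed scheme targets.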
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUESijaia
The purpose of software defect prediction is to improve the quality of a software project by building a predictive model that decides whether a software module is or is not fault-prone. In recent years, much research has applied machine learning techniques to this topic. Our aim was to evaluate the performance of clustering techniques with feature selection schemes for the software defect prediction problem. We analysed the National Aeronautics and Space Administration (NASA) dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) self-organizing map (SOM). To evaluate different feature selection algorithms, this article presents a comparative analysis of software defect prediction based on Bat, Cuckoo, Grey Wolf Optimizer (GWO), and Particle Swarm Optimizer (PSO). The results obtained with the proposed clustering models enabled us to build an efficient predictive model with a satisfactory detection rate and an acceptable number of features.
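Of the three clustering algorithms listed, Farthest First is the simplest to sketch: pick an arbitrary first centre, then repeatedly pick the point farthest from all centres chosen so far. The 2-D "metric vectors" below are invented for illustration.

```python
def farthest_first(points, k):
    # Farthest First traversal: greedily choose centres, then assign
    # each point (e.g. a module's metric vector) to its nearest centre.
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    centres = [points[0]]
    while len(centres) < k:
        centres.append(max(points, key=lambda p: min(dist(p, c) for c in centres)))
    return [min(range(k), key=lambda i: dist(p, centres[i])) for p in points]

# Two well-separated groups of 2-D "software metric" vectors.
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(farthest_first(pts, 2))
```

In a defect prediction setting, modules falling into the same cluster as known-defective modules would be flagged as fault-prone.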
Tap changer optimisation using embedded differential evolutionary programming...journalBEEI
Over-compensation and under-compensation are two undesirable outcomes in power system compensation and are poor options in power system planning and operation. Non-optimal values of the compensating parameters applied to a power system contribute to these phenomena, so a reliable optimization technique is needed to alleviate the issue. This paper presents a stochastic optimization technique for controlling power loss in a high-demand power system where load growth causes voltage decay, leading to increased current and system losses. A new optimization technique termed embedded differential evolutionary programming (EDEP) is proposed, which integrates traditional differential evolution (DE) with evolutionary programming (EP). EDEP is then applied to solve power system optimization problems through a tap changer optimization scheme. Results obtained in this study, implemented on the IEEE 30-bus reliability test system (RTS) for the loss minimization scheme, are significantly superior to those of traditional EP.
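As a rough sketch of the differential-evolution half of EDEP, the classic DE/rand/1/bin scheme (not the embedded EDEP variant itself) is shown below minimizing a stand-in objective; the population size, scale factor F, and crossover rate CR are illustrative defaults.

```python
import random

def de_minimize(f, bounds, np_=20, gens=100, F=0.6, CR=0.9, seed=1):
    # DE/rand/1/bin: mutate with a scaled difference vector,
    # binomial crossover, then greedy one-to-one selection.
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(d)  # guarantee at least one mutated gene
            trial = []
            for j in range(d):
                if rng.random() < CR or j == jr:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

# Stand-in objective for a loss-minimisation scheme: a 2-D sphere function.
x, fx = de_minimize(lambda v: sum(t * t for t in v), [(-5, 5), (-5, 5)])
print(fx < 1e-3)
```

In the tap-changer setting, the decision vector would hold the tap positions and the objective would be the system loss from a load-flow run.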
APPROXIMATE ARITHMETIC CIRCUIT DESIGN FOR ERROR RESILIENT APPLICATIONSVLSICS Design
When an application context can accept different levels of exactness in its results and is supported by human perceptual quality, 'Approximate Computing', a term coined about a decade ago, becomes a first-choice design approach. Even though computer hardware and software are built to generate exact results, approximate results are acceptable whenever the error stays within a predefined, adaptive bound. Approximation reduces power demand and critical path delay and improves other circuit metrics. Traditional arithmetic circuits, which generate correct results at the cost of performance, are rapidly being replaced by approximate arithmetic circuits, and this paper surveys their design.
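One well-known approximate-arithmetic building block (generic, not taken from this paper) is the Lower-part OR Adder (LOA): the k least-significant bits are combined with a bitwise OR instead of a carry chain, trading a bounded error for a much shorter critical path. A behavioural sketch:

```python
def loa_add(a, b, k, width=16):
    # Lower-part OR Adder: approximate the k low bits with OR
    # (no carry propagation), add the upper bits exactly.
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)
    high = ((a >> k) + (b >> k)) << k
    return (high | low) & ((1 << width) - 1)

exact = 1000 + 700
approx = loa_add(1000, 700, 4)
print(exact, approx, abs(exact - approx))
```

The error is confined to the truncated low part, so it stays within a predefined bound set by the choice of k, which is exactly the "error in a predefined bound" trade-off the abstract describes.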
The document summarizes a study that used artificial neural networks (ANN) to predict chemical oxygen demand (COD) levels in an anaerobic wastewater treatment system. Four ANN backpropagation training algorithms - Levenberg-Marquardt, gradient descent with adaptive learning, gradient descent with momentum, and resilient backpropagation - were tested on a model using COD input data. The Levenberg-Marquardt algorithm produced the best results with the lowest mean squared error of 0.533 and highest regression value of 0.991, accurately predicting COD levels. The study demonstrates ANNs can effectively model and predict values in nonlinear wastewater treatment processes.
EFFICIENT USE OF HYBRID ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM COMBINED WITH N...csandit
This research study proposes a novel method for automatic fault prediction from foundry data, introducing the so-called Meta Prediction Function (MPF). Kernel Principal Component Analysis (KPCA) is used for dimension reduction. Different algorithms are used to build the MPF, such as Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machine (SVM), and Neural Network (NN). Classical machine learning methods such as ANFIS, SVM, and NN are used for comparison with the proposed MPF. Our empirical results show that the MPF consistently outperforms the classical methods.
The document describes a study that used artificial neural networks (ANN) to predict chemical oxygen demand (COD) levels in wastewater from an anaerobic reactor. Four different backpropagation algorithms - Levenberg-Marquardt, gradient descent with adaptive learning rate, gradient descent with momentum, and resilient backpropagation - were used to train a three-layer feedforward ANN model. The model trained with the Levenberg-Marquardt algorithm performed best, with a mean squared error of 0.533 and a regression coefficient of 0.991, accurately predicting COD levels. The Levenberg-Marquardt algorithm thus provided the most accurate ANN model for predicting COD in effluent from the anaerobic reactor.
Similar to An Application of Genetic Programming for Power System Planning and Operation (20)
Power System State Estimation - A ReviewIDES Editor
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
Artificial Intelligence Technique based Reactive Power Planning Incorporating...IDES Editor
This document summarizes a research paper that proposes using artificial intelligence techniques and FACTS controllers for reactive power planning in real-time power transmission systems. The paper formulates the reactive power planning problem and incorporates flexible AC transmission system (FACTS) devices like static VAR compensators (SVC), thyristor controlled series capacitors (TCSC), and unified power flow controllers (UPFC). Evolutionary algorithms like evolutionary programming (EP) and differential evolution (DE) are applied to find the optimal locations and settings of the FACTS controllers to minimize losses and costs. Simulation results on IEEE 30-bus and 72-bus Indian test systems show that UPFC performs best in reducing losses compared to SVC and TCSC.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-...IDES Editor
Damping of power system oscillations with the help of the proposed optimal Proportional Integral Derivative Power System Stabilizer (PID-PSS) and Static Var Compensator (SVC)-based controllers is thoroughly investigated in this paper. The study presents robust tuning of PID-PSS and SVC-based controllers using Genetic Algorithms (GA) in multi-machine power systems, considering a detailed model of the generators (model 1.1). The effectiveness of FACTS-based controllers in general, and the SVC-based controller in particular, depends on their proper location; modal controllability and observability are used to locate the SVC-based controller. The performance of the proposed controllers is compared with a conventional lead-lag power system stabilizer (CPSS) and demonstrated on the 10-machine, 39-bus New England test system. Simulation studies show that the proposed genetic-based PID-PSS with the SVC-based controller provides better performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi...IDES Editor
The need to operate the power system economically and with optimal voltage levels has led to increased interest in Distributed Generation. To reduce power losses and improve voltage in the distribution system, distributed generators (DGs) are connected to load buses; the most important step in reducing total power losses is identifying the proper location and size of the DGs. This paper presents a new methodology using a population-based meta-heuristic, the Artificial Bee Colony (ABC) algorithm, for the placement of DGs in radial distribution systems to reduce real power losses, improve the voltage profile, and mitigate voltage sag. Loss reduction is an important factor for utility companies because it is directly proportional to their benefits in a competitive electricity market, while meeting power quality standards is equally important for customer orientation. An ABC algorithm is developed to achieve these goals together. To evaluate the sag mitigation capability of the proposed algorithm, voltage at voltage-sensitive buses is investigated. An existing 20 kV network has been chosen as the test network, and the results of the proposed method on this radial distribution system are presented.
Line Losses in the 14-Bus Power System Network using UPFCIDES Editor
Controlling power flow in modern power systems can be made more flexible by recent developments in power electronics and computing control technology. The Unified Power Flow Controller (UPFC) is a Flexible AC Transmission System (FACTS) device that can control all three system variables, namely line reactance, magnitude, and phase angle difference of the voltage across the line, and it provides a promising means to control power flow in modern power systems. Essentially, the performance depends on proper control settings achievable through a power flow analysis program. This paper presents a reliable method to meet these requirements by developing a Newton-Raphson based load flow calculation through which the control settings of the UPFC can be determined for a pre-specified power flow between the lines. The proposed method keeps the Newton-Raphson Load Flow (NRLF) algorithm intact and needs only a small modification in the Jacobian matrix. A MATLAB program has been developed to calculate the control settings of the UPFC and the power flow between the lines after the load flow has converged. Case studies performed on the IEEE 5-bus and 14-bus systems show that the proposed method is effective and that it maintains the basic NRLF properties such as fast computational speed, high accuracy, and a good convergence rate.
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery...IDES Editor
The size and shape of an opening in a dam cause stress concentration as well as stress variation in the rest of the dam cross-section. The gravity method of analysis considers neither the size of the opening nor the elastic properties of the dam material. The objective of this study is therefore to apply the Finite Element Method, which accounts for the size of the opening, the elastic properties of the material, and the stress distribution caused by geometric discontinuity in the dam cross-section. Stress concentration inside the dam increases with the size of the opening and can result in failure of the dam, so large openings must be analysed. The analysis is carried out by keeping the percentage area of the opening constant while varying its size and shape, using a section of the Koyna Dam. Based on geometry and loading conditions, the dam is modelled as a plane strain element in FEM, and a 2D plane strain analysis is performed. The results are then compared to identify the most efficient way of providing a large opening in a gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric ModelingIDES Editor
Pushover analysis is a popular tool for seismic performance evaluation of existing and new structures. It is a nonlinear static procedure in which monotonically increasing loads are applied to the structure until it can no longer resist further load. The strength of concrete and steel adopted for the analysis may not match that of the real structure once constructed, and pushover results are very sensitive to the material model, the geometric model, the location of plastic hinges, and, in general, the procedure followed by the analyst. In this paper an attempt has been made to assess uncertainty in pushover analysis results by considering user-defined hinges, with the frame modelled both as a bare frame and as a frame with the slab modelled as a rigid diaphragm. The uncertain parameters considered include the strength of concrete, the strength of steel, and the cover to the reinforcement, which are randomly generated and incorporated into the analysis. The results are then compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile...IDES Editor
This document summarizes and analyzes secure multi-party negotiation protocols for electronic payments in mobile computing. It presents a framework for secure multi-party decision protocols using lightweight implementations. The main focus is on synchronizing security features to avoid agreement manipulation and reduce user traffic. The paper describes negotiation between an auctioneer and bidders, showing multiparty security is better than existing systems. It analyzes the performance of encryption algorithms like ECC, XTR, and RSA for use in the multiparty negotiation protocols.
Selfish Node Isolation & Incentivation using Progressive ThresholdsIDES Editor
The problems associated with selfish nodes in MANETs are addressed by a collaborative watchdog approach, which reduces the detection time for selfish nodes and thereby improves the performance and accuracy of watchdogs [1]. Related works make use of credit-based systems, reputation-based mechanisms, and pathrater and watchdog mechanisms to detect such selfish nodes. In this paper we follow a collaborative watchdog approach that reduces the detection time for selfish nodes and also removes them based on progressively assessed thresholds. The thresholds give a node a chance to stop misbehaving before it is permanently deleted from the network; a node passes through several isolation stages before it is permanently removed. A modified version of the AODV protocol is used, which allows the simulation of selfish nodes in NS2 by adding or modifying log files in the protocol.
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS...IDES Editor
Wireless sensor networks are networks with a non-wired infrastructure and a dynamic topology. In the OSI model, each layer is prone to various attacks that degrade network performance. In this paper, several attacks on four layers of the OSI model are discussed, and a security mechanism is described to prevent a network-layer attack, the wormhole attack. In a wormhole attack, two or more malicious nodes create a covert channel that attracts traffic by advertising a low-latency link and then start dropping and replaying packets in the multi-path route. This paper proposes a promiscuous-mode method to detect and isolate the malicious node during a wormhole attack, using the Ad-hoc On-demand Distance Vector (AODV) routing protocol with an omnidirectional antenna. In the implemented methodology, nodes that are not participating in multi-path routing generate an alarm message upon delay and then detect and isolate the malicious node from the network. We also observe that not only the same kinds of attacks but also the same kinds of countermeasures can appear in multiple layers; for example, misbehavior detection techniques can be applied to almost all the layers discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in...IDES Editor
Recent advancements in wireless technology and its widespread deployment have brought remarkable efficiency gains to the corporate, industrial, and military sectors, and the increasing popularity and usage of wireless technology creates a need for more secure wireless ad hoc networks. This paper researches and develops a new protocol that prevents wormhole attacks on an ad hoc network. A few existing protocols detect wormhole attacks, but they require highly specialized equipment not found on most wireless devices. This paper develops a defense against wormhole attacks, an anti-worm protocol based on responsive parameters, that requires neither a significant amount of specialized equipment, nor tight clock synchronization, nor GPS dependencies.
Cloud Security and Data Integrity with Client Accountability FrameworkIDES Editor
This document summarizes a proposed cloud security and data integrity framework that provides client accountability. The framework aims to address issues like lack of user control over cloud data, need for data transparency and tracking, and ensuring data integrity. It proposes using JAR (Java Archive) files for data sharing due to benefits like portability. The framework incorporates client-side verification using MD5 hashing, digital signature-based authentication of JAR files, and use of HMAC to ensure data integrity. It also uses password-based encryption of log files to keep them tamper-proof. The framework is intended to provide both accountability and security for data sharing in cloud environments.
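The HMAC-based integrity check described can be illustrated with the standard library. This sketch uses HMAC-SHA256; the framework's exact digest choices, key handling, and log format are not specified in the summary, so the names and log entry below are hypothetical.

```python
import hashlib
import hmac

def sign_log(key: bytes, log_entry: bytes) -> str:
    # HMAC-SHA256 tag over a log entry; only the holder of the key
    # can produce a valid tag, so entries cannot be silently altered.
    return hmac.new(key, log_entry, hashlib.sha256).hexdigest()

def verify_log(key: bytes, log_entry: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_log(key, log_entry), tag)

key = b"client-secret"  # hypothetical client-held key
entry = b"2024-01-01T00:00:00Z access=read object=report.pdf"
tag = sign_log(key, entry)
print(verify_log(key, entry, tag), verify_log(key, entry + b"X", tag))
```

Any tampering with the entry (or the tag) makes verification fail, which is the accountability property the framework relies on for its tamper-proof logs.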
Genetic Algorithm based Layered Detection and Defense of HTTP BotnetIDES Editor
In an HTTP botnet, a compromised system uses the HTTP protocol to create a chain of bots, thereby compromising other systems. By using the HTTP protocol and port 80, attacks can not only be hidden but can also pass through firewalls undetected. DPR-based detection leads to better analysis of botnet attacks [3]; however, it provides only probabilistic detection of the attacker and is time consuming and error prone. This paper proposes a genetic algorithm based layered approach for detecting as well as preventing botnet attacks. The paper reviews a P2P firewall implementation, which forms the basis of filtering. Performance evaluation is done based on precision, F-value, and probability. The layered approach reduces the computation and the overall time requirement [7], and the genetic algorithm promises a low false positive rate.
Enhancing Data Storage Security in Cloud Computing Through SteganographyIDES Editor
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
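The idea of hiding user data in image pixels can be sketched with generic least-significant-bit (LSB) embedding. The paper's actual partitioning and embedding scheme is not specified in the summary, so this is only an illustration of the general technique, with invented pixel values.

```python
def embed(pixels, data):
    # Hide each bit of `data` in the least-significant bit of one pixel.
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, n_bytes):
    # Read the LSBs back and reassemble the hidden bytes (LSB-first).
    out = []
    for k in range(n_bytes):
        out.append(sum((pixels[8 * k + i] & 1) << i for i in range(8)))
    return bytes(out)

cover = list(range(64, 128))   # toy grayscale pixel values
stego = embed(cover, b"key:42")
print(extract(stego, 6))
```

Each pixel value changes by at most 1, so the carrier image is visually unchanged; splitting the payload across several images, as the proposal does, further limits what any single server can recover.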
The main tasks of a Wireless Sensor Network (WSN) are collecting data from its nodes and communicating this data to the base station (BS). The protocols used for communication among the WSN nodes and between the WSN and the BS must respect the resource constraints of the nodes: battery energy, computational capability, and memory. WSN applications involve unattended operation of the network over an extended period of time, so efficient routing protocols must be adopted to extend the network lifetime. The proposed low-power routing protocol, based on a tree-based network structure, reliably forwards the measured data towards the BS using TDMA. An energy consumption analysis of a WSN using this protocol is also carried out; the network is found to be energy efficient, with an average duty cycle of 0.7% for the WSN nodes. The OMNeT++ simulation platform with the MiXiM framework is used.
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for...IDES Editor
The authentication of internet-based
co-banking services must not be exposed to high risk.
Passwords are highly vulnerable to attack when strong
security methods are not embedded. To make passwords
more secure, people are generally compelled to select
jumbled character-based passwords, which are not only less
memorable but equally prone to compromise. The use of
multiple distributed shares has been studied as a solution to
the authentication problem, using algorithms based on pixel
thresholding in image processing and visual cryptography,
where a subset of the shares is used to recover the original
image for authentication via a correlation function [1][2].
The main disadvantage of that approach is the plain storage
of the shares; moreover, one of the shares is supplied to the
customer, which opens the possibility of misuse by a third
party. This paper proposes a technique for scrambling the
pixels within the shares by key-based random permutation
(KBRP) before authentication is attempted. The total number
of shares to be created depends on the number of owners of
the account. This method reduces the customers' uncertainty
about the security, storage and retrieval of the shares they
hold.
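The core operation the abstract describes is deriving a permutation of the share's pixels from a key. The sketch below uses a key-seeded Fisher-Yates shuffle as a stand-in for the KBRP algorithm (the paper's actual key-to-permutation derivation may differ); the key names and sizes are illustrative only.

```python
import hashlib
import random

def key_permutation(key: str, n: int):
    """Derive a deterministic permutation of range(n) from a text key.
    Stand-in for KBRP: the key is hashed to seed a Fisher-Yates shuffle."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def scramble(share, perm):
    """Reorder the share's pixel values according to the permutation."""
    return [share[p] for p in perm]

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original share."""
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out
```

Only a holder of the key can regenerate the same permutation, so a stolen scrambled share reveals neither the share's image content nor, by extension, the secret it helps reconstruct.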
This paper presents a trifocal Rotman lens design
approach. The effects of focal ratio and element spacing on
the performance of the Rotman lens are described. A three-beam
prototype feeding a 4-element antenna array working in L-band
has been simulated using the RLD v1.7 software. Simulation
results show that the lens has a return loss of
-12.4 dB at 1.8 GHz. The variation of beam-to-array-port
phase error with changes in focal ratio and element spacing
has also been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images (IDES Editor)
Hyperspectral images can be efficiently compressed
through a linear predictive model, such as the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by identifying,
through an off-line approach, a common subset of bands that
are not spectrally related to any other band. These bands
are not useful as prediction references for the SLSQ 3-D
predictive model, and they must be encoded via other
prediction strategies that consider only spatial correlation.
We obtained this subset by clustering the AVIRIS bands
via the clustering-by-compression approach. The main result
of this paper is the list of bands, unrelated to the
others, for the AVIRIS images. The clustering trees obtained for
AVIRIS, and the relationships among bands that they depict, are also
an interesting starting point for future research.
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... (IDES Editor)
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in the equivalent
electrical circuit by a three- or four-terminal block-element.
Each block-element is coded in MATLAB. Static and dynamic
analyses are performed. The resulting microelectronic circuit
analogous to the hydrogen bonding network operates as a
current mirror, sine pulse source, triangular pulse source, and
signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a function
of image depth: increasing image depth increases
roughness in man-made scenes, whereas natural scenes
exhibit smoother behaviour at greater image depth. This particular
arrangement of pixels in the scene structure can be well explained
by the local texture information of a pixel and its neighbourhood.
Our proposed method analyses the local texture information of a
scene image using a texture unit matrix. For the final classification
we use both supervised and unsupervised learning, with a
K-Nearest Neighbour (KNN) classifier and a Self-Organizing
Map (SOM) respectively. The technique's very low computational
complexity makes it suitable for online classification.
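The texture unit representation the abstract relies on can be sketched directly. In the He-Wang scheme, each of a pixel's 8 neighbours is ternary-coded against the centre value and the codes are packed into a number in [0, 6560]; the histogram of these numbers (the texture spectrum) is the feature fed to classifiers such as KNN or a SOM. The neighbour ordering below is one common convention, assumed rather than taken from the paper.

```python
def texture_unit(neigh, center):
    """Ternary-code 8 neighbours against the centre pixel and pack
    them into a texture unit number in [0, 6560]."""
    ntu = 0
    for i, v in enumerate(neigh):
        e = 0 if v < center else (1 if v == center else 2)
        ntu += e * 3 ** i
    return ntu

def texture_spectrum(img):
    """Histogram of texture unit numbers over all interior pixels
    of a 2-D grey-level image given as a list of rows."""
    h, w = len(img), len(img[0])
    spec = [0] * 6561
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            # Clockwise 8-neighbourhood starting at the top-left pixel.
            neigh = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                     img[r][c+1], img[r+1][c+1], img[r+1][c],
                     img[r+1][c-1], img[r][c-1]]
            spec[texture_unit(neigh, img[r][c])] += 1
    return spec
```

A perfectly flat region maps every interior pixel to the all-equal code (all ternary digits 1, i.e. unit number 3280), while rough man-made or natural structure spreads mass across the spectrum differently, which is what the KNN and SOM classifiers exploit.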
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining: why it matters and an overview of the platform. You will gain a good understanding of the phases of Communication Mining as we walk through the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has left gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats because of the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, backed by an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to part 5 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
How to Get CNIC Information System with Paksim Ga (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.