This document discusses speech emotion recognition using machine learning. It describes extracting acoustic features from speech audio using techniques like MFCC. A convolutional neural network model is trained on these features to classify emotions into categories like happy, sad, and angry. The model is tested on datasets such as TESS and RAVDESS, achieving classification accuracies over 70%. Combining speech features with text transcriptions improves results further, demonstrating the potential of this approach for applications involving human-computer interaction.
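As a concrete illustration of the MFCC pipeline these summaries keep returning to, here is a minimal from-scratch sketch (framing, mel filterbank, log, DCT) using only NumPy and SciPy. In practice a library such as librosa would be used; all sizes here (16 kHz sample rate, 512-sample frames, 26 mel bands, 13 coefficients) are illustrative assumptions, not values taken from any of the papers above.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Minimal MFCC: frame -> window -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hamming window
    frames = [signal[s:s + n_fft] * np.hamming(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    frames = np.array(frames)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and sr/2
    mel_pts = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, centre, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:centre] = (np.arange(left, centre) - left) / max(centre - left, 1)
        fbank[m - 1, centre:right] = (right - np.arange(centre, right)) / max(right - centre, 1)
    # Log mel energies, then a DCT to decorrelate; keep the first n_mfcc coefficients
    log_e = np.log(power @ fbank.T + 1e-10)
    return dct(log_e, type=2, axis=1, norm="ortho")[:, :n_mfcc]

# Toy usage: one second of a 440 Hz tone stands in for a speech clip
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (n_frames, 13)
```

These per-frame coefficient vectors are what the CNN classifiers described in the surrounding summaries consume, either frame by frame or stacked into a spectrogram-like matrix.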
1) The document describes research on developing a speech emotion recognition system using convolutional neural networks (CNNs). CNN models are trained on datasets containing speech samples labeled with different emotions.
2) Mel frequency cepstral coefficients are extracted from the speech samples as input features for the CNNs. Various CNN architectures and hyperparameters are tested to improve classification accuracy.
3) The best performing model achieved an accuracy of 77% for classifying emotions among 10 classes using a lighter CNN architecture trained on 80% of data and tested on 20%. Deeper CNN models achieved higher accuracy for classifying among fewer emotion classes.
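The training setup described above (MFCC features, an 80/20 train/test split, 10 emotion classes) can be sketched as follows. Since the paper's CNN architecture is not specified here, a small scikit-learn MLP stands in for it, and the feature matrix is synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for MFCC feature vectors: 500 clips x 40 coefficients, 10 emotion classes
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 10, size=500)
X += y[:, None] * 0.5  # give each class a separable offset so training has signal

# 80% training / 20% testing, as in the summary above
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# A lightweight classifier standing in for the paper's "lighter CNN architecture"
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))
```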
IRJET- Comparative Analysis of Emotion Recognition System (IRJET Journal)
This document summarizes a research paper that analyzes different machine learning models for speech emotion recognition using the SAVEE dataset. It compares CNN, RFC, XGBoost and SVM classifiers using MFCC features extracted from the audio clips. The CNN model achieved the highest accuracy for emotion recognition when using MFCC features. The paper aims to better understand human emotional states through analyzing dependencies in speech data using these machine learning techniques.
IRJET- Speech Based Answer Sheet Evaluation System (IRJET Journal)
The document describes a proposed speech-based answer sheet evaluation system. It aims to improve on existing manual evaluation methods by utilizing speech recognition technology. The key points are:
1. The proposed system would allow examiners to provide speech input to evaluate scanned answer sheets, rather than manually entering marks. This could minimize errors and malpractices compared to existing systems.
2. Speech recognition would be implemented using Google Speech API and techniques like Hidden Markov Models and Vector Quantization. The system is intended to accurately interpret examiner speech and perform the corresponding evaluation actions.
3. By automating more of the evaluation process and providing a speech interface, the system aims to make the process easier for examiners compared to existing manual methods.
IRJET- Study of Effect of PCA on Speech Emotion Recognition (IRJET Journal)
This document discusses speech emotion recognition using principal component analysis (PCA). It analyzes speech features like mel frequency cepstral coefficients, pitch, energy, and formant frequency from the Berlin database, which contains emotions like anger, sadness, happiness, and fear. PCA is applied to reduce the feature dimension and decorrelate the features. A support vector machine classifier is then used to classify emotions based on the PCA-processed features. Results show that applying PCA improves classification accuracy compared to not using PCA, with average accuracy increasing from 64.5% to 68%.
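The PCA-then-SVM pipeline described above can be sketched with scikit-learn. The data below is synthetic, generated with deliberately redundant features so that PCA has something to compress; the numbers only illustrate the mechanics, not the Berlin-database results.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the paper's feature vectors (MFCC, pitch, energy, formants):
# 60 dimensions, many of them redundant, 4 emotion classes
X, y = make_classification(n_samples=400, n_features=60, n_informative=12,
                           n_redundant=30, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# SVM on the raw features vs. SVM on PCA-reduced, decorrelated features
svm_raw = SVC(kernel="rbf").fit(X_tr, y_tr)
svm_pca = make_pipeline(PCA(n_components=12), SVC(kernel="rbf")).fit(X_tr, y_tr)
print("without PCA:", round(svm_raw.score(X_te, y_te), 3))
print("with PCA   :", round(svm_pca.score(X_te, y_te), 3))
```

The number of retained components (12 here) is an illustrative assumption; in practice it is chosen from the explained-variance curve or by cross-validation.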
Sentiment Analysis: A comparative study of Deep Learning and Machine Learning (IRJET Journal)
This document compares sentiment analysis techniques using deep learning and machine learning. It summarizes previous work using various machine learning algorithms and deep learning methods for sentiment analysis. The document then outlines the approach taken in this study, which is to determine the best sentiment analysis results using either machine learning or deep learning techniques. It describes preprocessing the Rotten Tomatoes movie review dataset and creating text matrices before selecting models for classification. The goal is to get a generalized understanding of how sentiment analysis can be performed and which practices yield optimal results.
This document presents a text-dependent speaker recognition system using neural networks that aims to improve recognition accuracy. It proposes changing the number of Mel Frequency Cepstral Coefficients (MFCCs) used in training. Voice Activity Detection is also used as a preprocessing step. Experimental results show recognition accuracy increases from 70.41% to a maximum of 92.91% as the number of MFCCs increases from 14 to 20, but then decreases with more MFCCs. The system is implemented on a Raspberry Pi for hardware acceleration.
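The experiment above, sweeping the number of MFCCs and watching accuracy rise and then fall, can be sketched by truncating a feature matrix to its leading coefficients. The data, classifier, and coefficient counts below are illustrative stand-ins, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: 30 candidate MFCC-like coefficients per utterance, 8 speakers
X, y = make_classification(n_samples=300, n_features=30, n_informative=20,
                           n_classes=8, n_clusters_per_class=1, random_state=0)

# Sweep the number of leading coefficients kept, as the paper sweeps the MFCC count
scores = {}
for n in (14, 16, 18, 20, 24, 30):
    scores[n] = cross_val_score(SVC(), X[:, :n], y, cv=5).mean()

for n, s in scores.items():
    print(f"{n} coefficients -> mean CV accuracy {s:.3f}")
```

On real speech the curve typically peaks and then degrades as higher-order coefficients add noise, which matches the 14-to-20 sweet spot the summary reports.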
IRJET- Emotion recognition using Speech Signal: A Review (IRJET Journal)
This document provides a review of speech emotion recognition techniques. It discusses how speech emotion recognition systems work, including common features extracted from speech like MFCCs and LPC coefficients. Classification techniques used in these systems are also examined, such as DTW, ANN, GMM, and K-NN. The document concludes that speech emotion recognition could be useful for applications requiring natural human-computer interaction, like car systems that monitor driver emotion or educational tutorials that adapt based on student emotion.
The project was started with the sole aim that the design should recognize a person's voice by analyzing the speech signal. The simulation is done in MATLAB. The design is based on linear prediction coefficients (LPC) and principal component analysis (PCA, via MATLAB's princomp) for speech signal analysis. Samples are collected by recording male/female speech with a microphone. When the program is executed, the recorded speech is processed by the analysis stage of the MATLAB code, and the design judges whether the recorded speech signal matches the desired speaker.
A Novel, Robust, Hierarchical, Text-Independent Speaker Recognition Technique (CSCJournals)
An automatic speaker recognition system recognizes an unknown speaker among several reference speakers by making use of speaker-specific information in their speech. In this paper, we introduce a novel, hierarchical, text-independent speaker recognition technique. Our baseline speaker recognition system, built using statistical modeling techniques, gives an accuracy of 81% on the standard MIT database, and our baseline gender recognition system gives an accuracy of 93.795%. We then propose and implement a novel state-space pruning technique that performs gender recognition before speaker recognition so as to improve the accuracy and speed of the baseline speaker recognition system. Based on experiments conducted on the MIT database, we demonstrate that the proposed system improves accuracy over the baseline by approximately 2% while reducing computational time by more than 30%.
An evolutionary optimization method for selecting features for speech emotion... (TELKOMNIKA JOURNAL)
Human-computer interaction benefits greatly from emotion recognition from speech. To promote a contact-free environment during the coronavirus disease 2019 (COVID-19) pandemic, many digital systems adopted speech-based devices, so emotion detection from speech has many beneficial applications, including in pathology. The vast majority of speech emotion recognition (SER) systems are built on machine learning or deep learning models and therefore require substantial computing power. This issue was traditionally addressed with conventional feature selection algorithms. Recent research has shown that nature-inspired or evolutionary meta-heuristic approaches such as equilibrium optimization (EO) and cuckoo search (CS) outperform traditional feature selection (FS) models in terms of recognition performance. The purpose of this study is to investigate the impact of meta-heuristic feature selection on emotion recognition from speech. Using the Ryerson audio-visual database of emotional speech and song (RAVDESS), we obtained maximum recognition accuracies of 89.64% with the EO algorithm and 92.71% with the CS algorithm. Finally, we plotted the associated precision and F1 score for each of the emotional classes.
A Review Paper on Speech Based Emotion Detection Using Deep Learning (IRJET Journal)
This document reviews techniques for speech-based emotion detection using deep learning. It discusses how deep learning techniques have been proposed as an alternative to traditional methods for speech emotion recognition. Feature extraction is an important part of speech emotion recognition, and deep learning can help minimize the complexity of extracted features. The document surveys related work on speech emotion recognition using techniques like deep neural networks, convolutional neural networks, recurrent neural networks, and more. It examines the limitations of current approaches and the potential for deep learning to improve speech-based emotion detection.
CURVELET BASED SPEECH RECOGNITION SYSTEM IN NOISY ENVIRONMENT: A STATISTICAL ... (ijcsit)
Speech processing is a crucial and intensive field of research in the development of robust and efficient speech recognition systems, but recognition accuracy still suffers from variation in context, speaker variability, and environmental conditions. This paper presents a curvelet-based feature extraction (CFE) method for speech recognition in noisy environments. The input speech signal is decomposed into different frequency channels using the curvelet transform, which reduces computational complexity and feature-vector size; its varying window size also makes it well suited to non-stationary signals. For word classification and recognition, a discrete hidden Markov model (HMM) is used, as it accounts for the time distribution of speech signals. The HMM classifier attained maximum identification rates of 80.1% for informal phrases, 86% for scientific phrases, and 63.8% for control phrases. The objective of this study is to characterize the feature extraction and classification phases of a speech recognition system; the available approaches are compared along with their merits and demerits. The statistical results show that recognition accuracy is increased by using the discrete curvelet transform over conventional methods.
This document presents research on developing a machine learning model to recognize gender from voice recordings. The researchers collected a dataset of over 60,000 audio samples from online repositories. They extracted acoustic features from the recordings like mean, median and standard deviation of dominant frequencies. These features were used to train four machine learning algorithms - Support Vector Machine, Decision Tree, Gradient Boosted Trees and Random Forest. The Random Forest model achieved the highest testing accuracy of 89.3% for gender recognition. Feature importance analysis showed that standard deviation, mean and kurtosis were the most important discriminative features for the tree-based models. The research demonstrates that machine learning can effectively perform voice-based gender recognition using acoustic features extracted from speech.
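The feature-statistics-plus-Random-Forest recipe above can be sketched as follows. The dominant-frequency tracks are synthetic, and the two class centres (120 Hz and 210 Hz) are illustrative stand-ins for the collected recordings, chosen simply because typical male and female fundamental frequencies differ.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def stats_features(f0_track):
    """Summary statistics of a dominant-frequency track, as in the summary above."""
    return [f0_track.mean(), np.median(f0_track), f0_track.std(), kurtosis(f0_track)]

# Synthetic dominant-frequency tracks for two gender classes
X, y = [], []
for label, centre in ((0, 120.0), (1, 210.0)):
    for _ in range(200):
        track = rng.normal(centre, 20.0, size=100)  # 100 frames per recording
        X.append(stats_features(track))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", round(rf.score(X_te, y_te), 3))
# Feature importance analysis, mirroring the paper's mean/std/kurtosis finding
for name, imp in zip(["mean", "median", "std", "kurtosis"], rf.feature_importances_):
    print(name, round(imp, 3))
```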
IRJET- Analysis of Music Recommendation System using Machine Learning Alg... (IRJET Journal)
This document analyzes different machine learning algorithms that can be used to build a music recommendation system. It first discusses how machine learning and data mining are used to extract patterns from large music datasets. It then analyzes different classification, clustering, and association algorithms that are suitable for a music recommendation system. Specifically, it applies two algorithms (Random Forest and XGBClassifier) to a music dataset and compares their performance at different training/test data splits. It finds that Random Forest achieved the highest accuracy of 75% when the split was 75% training and 25% testing data. In conclusion, ensemble techniques like Random Forest can improve the accuracy of music recommendation over single algorithms.
This document proposes an automatic emotion recognition system that analyzes audio information to classify human emotions. It uses spectral features and MFCC coefficients for feature extraction from voice signals. Then, a deep learning-based LSTM algorithm is used for classification. The system is evaluated on three audio datasets. Recurrent convolutional neural networks are proposed to capture temporal and frequency dependencies in speech spectrograms. The system aims to improve on existing methods which have lower accuracy and require more computational resources for implementation.
A survey on Enhancements in Speech Recognition (IRJET Journal)
This document discusses enhancements in speech recognition and provides an overview of the history and basic model of speech recognition. It summarizes key enhancements researchers have made to improve speech recognition, especially in noisy environments. The basic model of speech recognition involves speech input, preprocessing using techniques like MFCCs, classification models like RNNs and HMMs, and output of a transcript. Researchers are working to develop robust speech recognition that can understand speech in any environment.
International journal of signal and image processing issues vol 2015 - no 1... (sophiabelthome)
This document presents a study on automatic speech recognition (ASR). It discusses the different types of ASR systems including speaker-dependent, speaker-independent, and speaker-adaptive systems. It also covers the different types of utterances that can be recognized, such as isolated words, connected words, continuous speech, and spontaneous speech. The document then describes the basic phases involved in ASR, including front-end analysis using techniques like pre-emphasis, framing, windowing and feature extraction. It also discusses back-end processing using acoustic and language models to map features to words. Hidden Markov models are presented as a commonly used acoustic modeling technique in ASR systems.
IRJET- Techniques for Analyzing Job Satisfaction in Working Employees – A... (IRJET Journal)
The document discusses techniques for analyzing job satisfaction in employees using deep learning algorithms. It proposes applying convolutional neural networks (CNNs) to facial images taken at regular intervals to classify emotions and determine if an employee is happy or stressed. CNNs would be trained to recognize emotions like surprise, fear, disgust, anger, happiness and sadness from facial expressions. The percentage of positive versus negative emotions detected over multiple images could indicate an employee's level of satisfaction. A literature review discusses limitations of existing approaches and supports using CNNs on facial expressions for more accurate analysis of employee mental health and work satisfaction.
Voice Recognition Based Automation System for Medical Applications and for Ph... (IRJET Journal)
This document describes a voice recognition-based automation system for medical applications and physically challenged patients. The system uses a voice recognition model, an Arduino microcontroller, relays, LEDs, buzzers, and a motor to control an adjustable bed. Voice commands are recognized using techniques like MFCC and HMM and are used to control devices via the Arduino. The system is intended to allow paralyzed patients to control devices like lights, alarms, and their bed using only voice commands, for increased independence. Testing showed the system recognized commands and controlled devices with 99% accuracy under suitable conditions.
SPEECH CLASSIFICATION USING ZERNIKE MOMENTS (cscpconf)
Speech recognition is a very popular field of research, and speech classification improves recognition performance. Different patterns are identified using various characteristics, or features, of speech in order to classify them. A typical speech feature set consists of many parameters representing the speech signal, such as standard deviation, magnitude, and zero crossings. Considering all these parameters greatly increases the system's computational load and time, so they need to be reduced by selecting the important features. Feature selection aims to obtain an optimal subset of features from a given space, leading to high classification performance while reducing the amount of data used for classification. High recognition accuracy is in demand for speech recognition systems. In this paper, Zernike moments of the speech signal are extracted and used as its features. Zernike moments are shape descriptors generally used to describe the shape of a region; to extract them, the one-dimensional audio signal is converted into a two-dimensional image. Various feature selection and ranking algorithms, such as t-test, chi-square, Fisher score, ReliefF, Gini index, and information gain, are then used to select the important features of the speech signal. The performance of each algorithm is evaluated using classifier accuracy, with a support vector machine (SVM) as the learning algorithm, and accuracy is observed to improve considerably after removing unwanted features.
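The select-then-classify workflow above can be sketched with scikit-learn. The ANOVA F-score stands in for the ranking criteria the paper lists (t-test, chi-square, Fisher score, ReliefF, Gini index, information gain), and a synthetic matrix stands in for the Zernike-moment features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a Zernike-moment feature matrix: many dims, few informative
X, y = make_classification(n_samples=300, n_features=80, n_informative=10,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Rank features on the training set only, keep the top 10
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)

# SVM accuracy with the full feature set vs. the selected subset
full = SVC().fit(X_tr, y_tr).score(X_te, y_te)
kept = SVC().fit(selector.transform(X_tr), y_tr).score(selector.transform(X_te), y_te)
print("all 80 features :", round(full, 3))
print("top 10 features :", round(kept, 3))
```

Fitting the selector on the training split alone avoids leaking test information into the ranking, a detail the paper's evaluation protocol would also need.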
This document describes a student project on speech-based emotion recognition. The project uses convolutional neural networks (CNN) and mel-frequency cepstral coefficients (MFCC) to classify emotions in speech into categories like happy, sad, fearful, calm and angry. The proposed system provides advantages over existing systems by allowing variable length audio inputs, faster processing, and real-time classification of more emotion categories. It achieves a test accuracy of 91.04% according to the document.
IRJET - Threat Prediction using Speech Analysis (IRJET Journal)
This document describes a proposed system to analyze audio files and detect potential threats by performing speech recognition and sentiment analysis. The system would have three main steps: 1) Convert speech to text using machine learning algorithms like recurrent neural networks, 2) Perform sentiment analysis on the text using natural language processing techniques like naïve Bayes classification to determine if the text is positive, negative, or neutral, 3) Calculate an overall threat percentage and issue a warning if it exceeds a threshold. The goal is to automate audio surveillance and analysis to reduce time and human error compared to manual processes. Artificial neural networks would be used for both speech recognition and sentiment classification.
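Steps 2 and 3 of the pipeline above (naïve Bayes sentiment scoring and a threat-percentage threshold) can be sketched as follows. The tiny labelled corpus and the 50% threshold are illustrative assumptions; a real system would take its text from the speech recognizer in step 1 and train on a proper dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labelled transcripts: 0 = benign, 1 = threatening
texts = ["have a wonderful day", "this is a great idea", "thank you so much",
         "i will hurt you", "you will regret this", "do not test me"]
labels = [0, 0, 0, 1, 1, 1]

# Bag-of-words features feeding a naive Bayes classifier, as the summary describes
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

# Score new transcripts; treat the mean threat probability as the threat percentage
new = ["have a great day", "you will regret this mistake"]
probs = clf.predict_proba(vec.transform(new))[:, 1]
threat_pct = 100 * probs.mean()
print("threat percentage:", round(threat_pct, 1))
if threat_pct > 50:  # illustrative warning threshold
    print("warning issued")
```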
IRJET- Voice Command Execution with Speech Recognition and Synthesizer (IRJET Journal)
The document describes a voice command execution system using speech recognition and text-to-speech synthesis. The proposed system allows users to complete tasks using only voice commands, reducing time delays compared to traditional systems requiring mouse/keyboard input. It recognizes three types of voice commands - social commands for question answering, web commands to access URLs, and shell commands involving file/application directories. A speech synthesizer converts text to speech to provide output to the user. The system aims to enable hands-free computing for disabled users by executing commands with only voice.
IRJET- Factoid Question and Answering System (IRJET Journal)
This document describes a factoid question answering system that uses neural networks and the Tensorflow framework. The system takes in a text document and question as input. It then processes the input using techniques like gated recurrent units and support vector machines to classify the question. The system calculates attention between facts and the question, modifies its memory, and identifies the word closest to the answer to output as the response. Key aspects of the system include training a question answering engine with Tensorflow, storing and retrieving data, and generating the final answer.
SPEECH EMOTION RECOGNITION SYSTEM USING RNN (IRJET Journal)
This document discusses a speech emotion recognition system using recurrent neural networks (RNNs). It begins with an abstract describing speech emotion recognition and its importance. Then it provides background on speech emotion databases, feature extraction using MFCC, and classification approaches like RNNs. It reviews related work on speech emotion recognition using various methods. Finally, it concludes that MFCC feature extraction and RNN classification was used in the proposed system to take advantage of their performance in machine learning applications. The system aims to help machines understand human interaction and respond based on the user's emotion.
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE... (IRJET Journal)
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE (IRJET Journal)
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models of varying heights (4, 8, and 12 stories) were analyzed in ETABS software under R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior at lower R factors. Drift also decreased proportionally as the R factor increased from 1 to 5, and shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance, with less displacement and drift. The displacement variations between the different building heights were consistent across R factors. Overall, the study evaluated how R factors influence the seismic response of RC framed structures.
More Related Content
Similar to Speech Emotion Recognition Using Machine Learning
A Novel, Robust, Hierarchical, Text-Independent Speaker Recognition TechniqueCSCJournals
Automatic speaker recognition system is used to recognize an unknown speaker among several reference speakers by making use of speaker-specific information from their speech. In this paper, we introduce a novel, hierarchical, text-independent speaker recognition. Our baseline speaker recognition system accuracy, built using statistical modeling techniques, gives an accuracy of 81% on the standard MIT database and our baseline gender recognition system gives an accuracy of 93.795%. We then propose and implement a novel state-space pruning technique by performing gender recognition before speaker recognition so as to improve the accuracy/timeliness of our baseline speaker recognition system. Based on the experiments conducted on the MIT database, we demonstrate that our proposed system improves the accuracy over the baseline system by approximately 2%, while reducing the computational time by more than 30%.
An evolutionary optimization method for selecting features for speech emotion...TELKOMNIKA JOURNAL
Human-computer interactions benefit greatly from emotion recognition from speech. To promote a contact-free environment in this coronavirus disease 2019 (COVID’19) pandemic situation, most digitally based systems used speech-based devices. Consequently, this emotion detection from speech has many beneficial applications for pathology. The vast majority of speech emotion recognition (SER) systems are designed based on machine learning or deep learning models. Therefore, need greater computing power and requirements. This issue was addressed by developing traditional algorithms for feature selection. Recent research has shown that nature-inspired or evolutionary algorithms such as equilibrium optimization (EO) and cuckoo search (CS) based meta-heuristic approaches are superior to the traditional feature selection (FS) models in terms of recognition performance. The purpose of this study is to investigate the impact of feature selection meta-heuristic approaches on emotion recognition from speech. To achieve this, we selected the rayerson audio-visual database of emotional speech and song (RAVDESS) database and obtained maximum recognition accuracy of 89.64% using the EO algorithm and 92.71% using the CS algorithm. For this final step, we plotted the associated precision and F1 score for each of the emotional classes
A Review Paper on Speech Based Emotion Detection Using Deep LearningIRJET Journal
This document reviews techniques for speech-based emotion detection using deep learning. It discusses how deep learning techniques have been proposed as an alternative to traditional methods for speech emotion recognition. Feature extraction is an important part of speech emotion recognition, and deep learning can help minimize the complexity of extracted features. The document surveys related work on speech emotion recognition using techniques like deep neural networks, convolutional neural networks, recurrent neural networks, and more. It examines the limitations of current approaches and the potential for deep learning to improve speech-based emotion detection.
CURVELET BASED SPEECH RECOGNITION SYSTEM IN NOISY ENVIRONMENT: A STATISTICAL ...ijcsit
Speech processing is a crucial and intensively researched field in the development of robust and efficient speech recognition systems, but recognition accuracy still suffers from variation in context, speaker variability, and environmental conditions. In this paper, we present a curvelet-based feature extraction (CFE) method for speech recognition in noisy environments. The input speech signal is decomposed into different frequency channels using the characteristics of the curvelet transform, which reduces computational complexity and feature vector size; curvelets offer better accuracy and varying window sizes, making them suitable for non-stationary signals. For word classification and recognition, discrete hidden Markov models are used because they account for the time distribution of speech signals. The HMM classifier attained maximum identification rates of 80.1% for informal phrases, 86% for scientific phrases, and 63.8% for control phrases. The objective of this study is to characterize the feature extraction and classification phases of a speech recognition system; the various approaches available are compared along with their merits and demerits. The statistical results show that recognition accuracy is increased by using the discrete curvelet transform over conventional methods.
This document presents research on developing a machine learning model to recognize gender from voice recordings. The researchers collected a dataset of over 60,000 audio samples from online repositories. They extracted acoustic features from the recordings like mean, median and standard deviation of dominant frequencies. These features were used to train four machine learning algorithms - Support Vector Machine, Decision Tree, Gradient Boosted Trees and Random Forest. The Random Forest model achieved the highest testing accuracy of 89.3% for gender recognition. Feature importance analysis showed that standard deviation, mean and kurtosis were the most important discriminative features for the tree-based models. The research demonstrates that machine learning can effectively perform voice-based gender recognition using acoustic features extracted from speech.
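The summary statistics named above (mean, median, standard deviation, kurtosis of dominant frequencies) are simple to compute once per-frame frequency estimates exist. The sketch below uses made-up frequency values; a real pipeline would estimate the dominant frequency per analysis frame first.

```python
# Sketch of the per-recording summary features used for gender recognition.
# The frequency list is illustrative, not real data.
import statistics

def acoustic_features(freqs_hz):
    n = len(freqs_hz)
    mean = statistics.fmean(freqs_hz)
    sd = statistics.pstdev(freqs_hz)
    # Excess kurtosis (population form), one of the top-ranked features.
    kurt = sum((f - mean) ** 4 for f in freqs_hz) / (n * sd ** 4) - 3
    return {
        "mean": mean,
        "median": statistics.median(freqs_hz),
        "std": sd,
        "kurtosis": kurt,
    }

# Toy dominant-frequency estimates (Hz) from two clusters of frames.
feats = acoustic_features([110.0, 120.0, 118.0, 230.0, 225.0, 210.0])
print(feats)
```

Each recording is thus reduced to a fixed-length vector of such statistics, which is what the SVM, decision tree, and forest models consume.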
IRJET- Analysis of Music Recommendation System using Machine Learning Alg...IRJET Journal
This document analyzes different machine learning algorithms that can be used to build a music recommendation system. It first discusses how machine learning and data mining are used to extract patterns from large music datasets. It then analyzes different classification, clustering, and association algorithms that are suitable for a music recommendation system. Specifically, it applies two algorithms (Random Forest and XGBClassifier) to a music dataset and compares their performance at different training/test data splits. It finds that Random Forest achieved the highest accuracy of 75% when the split was 75% training and 25% testing data. In conclusion, ensemble techniques like Random Forest can improve the accuracy of music recommendation over single algorithms.
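The split comparison described above (accuracy measured at several train/test ratios) can be illustrated without the real models. The sketch below substitutes a nearest-centroid classifier on synthetic one-dimensional data for Random Forest and XGBClassifier; all numbers are illustrative.

```python
# Toy illustration of comparing accuracy at different train/test splits.
# A nearest-centroid classifier on synthetic data stands in for the real models.
import random

random.seed(1)

def make_data(n=400):
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        x = random.gauss(label * 2.0, 1.0)  # class means at 0 and 2
        data.append((x, label))
    return data

def accuracy(train, test):
    # Per-class mean of the training feature; predict the nearer class mean.
    means = {c: statistics_mean([x for x, l in train if l == c]) for c in (0, 1)}
    hits = sum(1 for x, l in test if min(means, key=lambda c: abs(x - means[c])) == l)
    return hits / len(test)

def statistics_mean(vals):
    return sum(vals) / len(vals)

data = make_data()
for frac in (0.6, 0.75, 0.9):
    cut = int(len(data) * frac)
    print(f"{int(frac * 100)}/{100 - int(frac * 100)} split: "
          f"acc={accuracy(data[:cut], data[cut:]):.2f}")
```

As in the paper, accuracy varies with the split because smaller training sets estimate the model less well while smaller test sets measure it more noisily.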
This document proposes an automatic emotion recognition system that analyzes audio information to classify human emotions. It uses spectral features and MFCC coefficients for feature extraction from voice signals. Then, a deep learning-based LSTM algorithm is used for classification. The system is evaluated on three audio datasets. Recurrent convolutional neural networks are proposed to capture temporal and frequency dependencies in speech spectrograms. The system aims to improve on existing methods which have lower accuracy and require more computational resources for implementation.
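The MFCC feature-extraction step that feeds such an LSTM can be sketched end to end with NumPy alone: framing and windowing, power spectrum, triangular mel filterbank, log compression, and a DCT. The frame sizes and filter counts below are typical defaults, not the paper's exact settings.

```python
# Minimal MFCC computation sketch (NumPy only), illustrating the
# feature-extraction stage of a speech emotion pipeline.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_mels=26, n_ceps=13):
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum.
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # Triangular mel filterbank.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, nfft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II to decorrelate, keeping the first n_ceps coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone
feats = mfcc(sig)
print(feats.shape)  # (frames, coefficients)
```

The resulting (frames x coefficients) matrix is exactly the kind of variable-length sequence an LSTM consumes one frame at a time; production systems typically use a library such as librosa rather than hand-rolled code.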
A survey on Enhancements in Speech RecognitionIRJET Journal
This document discusses enhancements in speech recognition and provides an overview of the history and basic model of speech recognition. It summarizes key enhancements researchers have made to improve speech recognition, especially in noisy environments. The basic model of speech recognition involves speech input, preprocessing using techniques like MFCCs, classification models like RNNs and HMMs, and output of a transcript. Researchers are working to develop robust speech recognition that can understand speech in any environment.
International journal of signal and image processing issues vol 2015 - no 1...sophiabelthome
This document presents a study on automatic speech recognition (ASR). It discusses the different types of ASR systems including speaker-dependent, speaker-independent, and speaker-adaptive systems. It also covers the different types of utterances that can be recognized, such as isolated words, connected words, continuous speech, and spontaneous speech. The document then describes the basic phases involved in ASR, including front-end analysis using techniques like pre-emphasis, framing, windowing and feature extraction. It also discusses back-end processing using acoustic and language models to map features to words. Hidden Markov models are presented as a commonly used acoustic modeling technique in ASR systems.
IRJET- Techniques for Analyzing Job Satisfaction in Working Employees – A...IRJET Journal
The document discusses techniques for analyzing job satisfaction in employees using deep learning algorithms. It proposes applying convolutional neural networks (CNNs) to facial images taken at regular intervals to classify emotions and determine if an employee is happy or stressed. CNNs would be trained to recognize emotions like surprise, fear, disgust, anger, happiness and sadness from facial expressions. The percentage of positive versus negative emotions detected over multiple images could indicate an employee's level of satisfaction. A literature review discusses limitations of existing approaches and supports using CNNs on facial expressions for more accurate analysis of employee mental health and work satisfaction.
Voice Recognition Based Automation System for Medical Applications and for Ph...IRJET Journal
This document describes a voice recognition-based automation system for medical applications and physically challenged patients. The system uses a voice recognition model, an Arduino microcontroller, relays, LEDs, buzzers, and a motor to control an adjustable bed. Voice commands are recognized using techniques like MFCC and HMM and used to control devices via the Arduino. The system is intended to allow paralyzed patients to control devices such as lights, alarms, and their bed using only voice commands, for increased independence. Testing showed the system can recognize commands and control devices with 99% accuracy under suitable conditions.
SPEECH CLASSIFICATION USING ZERNIKE MOMENTScscpconf
Speech recognition is a very popular field of research, and speech classification improves its performance. Different patterns are identified using various characteristics, or features, of speech for classification. A typical speech feature set consists of many parameters representing the speech signal, such as standard deviation, magnitude, and zero crossings. Considering all these parameters greatly increases the system's computational load and time, so the parameter set must be reduced by selecting only the important features. Feature selection aims to find an optimal subset of features from the given space, leading to high classification performance; feature selection methods should therefore derive features that reduce the amount of data used for classification, since high recognition accuracy is in demand for speech recognition systems. In this paper, Zernike moments of the speech signal are extracted and used as its features. Zernike moments are shape descriptors generally used to describe the shape of a region; to extract them, the one-dimensional audio signal is converted into a two-dimensional image file. Various feature selection and ranking algorithms, including t-Test, Chi Square, Fisher Score, ReliefF, Gini Index, and Information Gain, are then used to select the important features of the speech signal. The performance of the algorithms is evaluated using classifier accuracy, with a Support Vector Machine (SVM) as the learning algorithm, and accuracy is observed to improve considerably after removing unwanted features.
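One of the ranking criteria named above, the Fisher score, rates each feature by its between-class spread relative to its within-class spread. The sketch below implements that score on a tiny synthetic stand-in for the Zernike-moment features; the data and class labels are illustrative.

```python
# Illustrative Fisher-score feature ranking: higher score means a feature
# separates the classes more cleanly relative to its within-class variance.
from statistics import fmean, pvariance

def fisher_scores(X, y):
    """X: list of feature vectors, y: class labels."""
    classes = sorted(set(y))
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        overall = fmean(col)
        between = within = 0.0
        for c in classes:
            vals = [x[j] for x, lbl in zip(X, y) if lbl == c]
            between += len(vals) * (fmean(vals) - overall) ** 2
            within += len(vals) * pvariance(vals)
        scores.append(between / (within + 1e-12))
    return scores

# Feature 0 separates the classes cleanly; feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.9], [1.0, 1.1]]
y = ["a", "a", "b", "b"]
s = fisher_scores(X, y)
print(s.index(max(s)))  # index of the most discriminative feature
```

Ranking features this way, then keeping only the top-scoring ones before training the SVM, is the pattern the paper evaluates across all six criteria.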
This document describes a student project on speech-based emotion recognition. The project uses convolutional neural networks (CNN) and mel-frequency cepstral coefficients (MFCC) to classify emotions in speech into categories like happy, sad, fearful, calm and angry. The proposed system provides advantages over existing systems by allowing variable length audio inputs, faster processing, and real-time classification of more emotion categories. It achieves a test accuracy of 91.04% according to the document.
IRJET - Threat Prediction using Speech AnalysisIRJET Journal
This document describes a proposed system to analyze audio files and detect potential threats by performing speech recognition and sentiment analysis. The system would have three main steps: 1) Convert speech to text using machine learning algorithms like recurrent neural networks, 2) Perform sentiment analysis on the text using natural language processing techniques like naïve Bayes classification to determine if the text is positive, negative, or neutral, 3) Calculate an overall threat percentage and issue a warning if it exceeds a threshold. The goal is to automate audio surveillance and analysis to reduce time and human error compared to manual processes. Artificial neural networks would be used for both speech recognition and sentiment classification.
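The last two stages of that pipeline (sentiment scoring, then thresholding into a warning) can be sketched with a tiny lexicon standing in for the naive Bayes classifier. The word lists, weights, and threshold below are all illustrative, not from the proposal.

```python
# Toy sketch of threat scoring: lexicon-based sentiment stands in for the
# naive Bayes stage; a threshold turns the percentage into a warning.

NEGATIVE = {"attack": 3, "bomb": 5, "kill": 5, "threat": 2}
POSITIVE = {"thanks": 1, "help": 1, "safe": 2}

def threat_percentage(transcript):
    words = transcript.lower().split()
    neg = sum(NEGATIVE.get(w, 0) for w in words)
    pos = sum(POSITIVE.get(w, 0) for w in words)
    total = neg + pos
    return 100.0 * neg / total if total else 0.0

def analyze(transcript, threshold=60.0):
    pct = threat_percentage(transcript)
    return pct, ("WARNING" if pct > threshold else "ok")

print(analyze("they plan to attack the station"))
print(analyze("thanks for the help everyone is safe"))
```

In the proposed system the transcript would come from the RNN-based speech-to-text stage, and the classifier would be trained rather than hand-weighted, but the threshold-and-warn logic is the same shape.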
IRJET- Voice Command Execution with Speech Recognition and SynthesizerIRJET Journal
The document describes a voice command execution system using speech recognition and text-to-speech synthesis. The proposed system allows users to complete tasks using only voice commands, reducing time delays compared to traditional systems requiring mouse/keyboard input. It recognizes three types of voice commands - social commands for question answering, web commands to access URLs, and shell commands involving file/application directories. A speech synthesizer converts text to speech to provide output to the user. The system aims to enable hands-free computing for disabled users by executing commands with only voice.
IRJET- Factoid Question and Answering SystemIRJET Journal
This document describes a factoid question answering system that uses neural networks and the Tensorflow framework. The system takes in a text document and question as input. It then processes the input using techniques like gated recurrent units and support vector machines to classify the question. The system calculates attention between facts and the question, modifies its memory, and identifies the word closest to the answer to output as the response. Key aspects of the system include training a question answering engine with Tensorflow, storing and retrieving data, and generating the final answer.
SPEECH EMOTION RECOGNITION SYSTEM USING RNNIRJET Journal
This document discusses a speech emotion recognition system using recurrent neural networks (RNNs). It begins with an abstract describing speech emotion recognition and its importance. Then it provides background on speech emotion databases, feature extraction using MFCC, and classification approaches like RNNs. It reviews related work on speech emotion recognition using various methods. Finally, it concludes that MFCC feature extraction and RNN classification was used in the proposed system to take advantage of their performance in machine learning applications. The system aims to help machines understand human interaction and respond based on the user's emotion.
Similar to Speech Emotion Recognition Using Machine Learning (20)
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with trend slopes of -0.0341 for maximum and -0.0152 for minimum temperatures.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridgesIRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React based fullstack edtech web applicationIRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and response spectrum method, and its response in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure according to IS 800:2007. The software STAAD Pro was used for the analysis and design.
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA) pavement, RCA pavement has received fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper functioning of integrated circuits (ICs) in a hostile electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices compute incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automobiles. In this paper, the authors non-exhaustively review research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 (Sinan KOZAK)
Sinan, from the Delivery Hero mobile infrastructure engineering team, shares a deep dive into performance acceleration through Gradle build-cache optimization, recounting the team's journey into solving complex build-cache problems that affect Gradle builds. By walking through the challenges and solutions found along the way, the talk demonstrates what is possible for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimizing water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, enables the system to monitor and control irrigation processes accurately based on real-time environmental conditions. The main objective of a smart irrigation system is to maximize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets of the smart irrigation system and their functionalities. It emphasizes the crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being; these sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threats and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and limited computational capability, and calculates the resulting security risks. The paper suggests possible risk treatment methods for secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Based on the security analysis conducted, it recommends countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
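One common way to make the kind of risk calculation described above concrete is a simple likelihood-times-impact scoring matrix. The sketch below is a hedged illustration of that technique only; the asset names, 1-5 scores, and level thresholds are assumptions for illustration, not values from the paper:

```python
# Minimal likelihood x impact risk matrix for hypothetical smart-irrigation assets.
# Scores use a 1-5 scale; both the assets and the numbers are illustrative.
assets = {
    "soil moisture sensor": {"likelihood": 4, "impact": 3},
    "irrigation actuator":  {"likelihood": 3, "impact": 5},
    "gateway/controller":   {"likelihood": 2, "impact": 5},
}

def risk_score(likelihood, impact):
    # Classic qualitative risk: likelihood multiplied by impact
    return likelihood * impact

def risk_level(score):
    # Hypothetical banding thresholds for a 1-25 score range
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Rank assets from highest to lowest risk to prioritize treatment
for name, a in sorted(assets.items(), key=lambda kv: -risk_score(**kv[1])):
    s = risk_score(**a)
    print(f"{name}: score={s} level={risk_level(s)}")
```

Ranking assets this way lets risk treatment (e.g. authentication for actuator commands) be applied first where the combined likelihood and impact are worst.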
Introduction – e-waste: definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control scheme for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a doubly fed induction generator model is constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional-integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results are compared in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter variations. MATLAB/Simulink was used to conduct the simulations. Multiple simulations show very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the proposed control system.
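As a hedged sketch of the simplest of the three controllers compared above, a discrete-time PI loop tracking a power (or speed) reference might look like the following. The first-order plant, gains, and time step are illustrative assumptions, not the paper's DFIG model or its MATLAB/Simulink setup:

```python
# Minimal discrete-time PI controller tracking a reference on a first-order plant.
# Gains (kp, ki), the time step, and the plant itself are illustrative assumptions.
def simulate_pi(ref=1.0, kp=2.0, ki=5.0, dt=0.01, steps=500):
    y = 0.0          # plant output (e.g. stator power, normalized)
    integral = 0.0   # accumulated tracking error
    for _ in range(steps):
        error = ref - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (-y + u)               # Euler step of first-order plant dy/dt = -y + u
    return y

final = simulate_pi()   # output settles at the reference value
```

The integral term is what drives the steady-state tracking error to zero; the SMC and SOSMC variants compared in the paper trade this linear law for switching laws that are more robust to parameter variations.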
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application-layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) layers. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. Our experiments show that the CNN-LSTM method detects smart grid intrusions considerably better than other deep learning classification algorithms, improving accuracy, precision, recall, and F1 score and achieving a high detection accuracy of 99.50%.
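The abstract does not give the exact architecture, but the general shape of a hybrid CNN-LSTM classifier can be sketched as a toy NumPy forward pass: a 1D convolution extracts local patterns from a window of traffic features, an LSTM summarizes them over time, and a dense softmax head scores the classes. Every layer size and all random weights below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid 1D convolution with ReLU: x is (T, C_in), kernels is (K, C_in, C_out)."""
    K, _, c_out = kernels.shape
    T = x.shape[0] - K + 1
    out = np.zeros((T, c_out))
    for t in range(T):
        out[t] = np.einsum("kc,kco->o", x[t:t + K], kernels)
    return np.maximum(out, 0.0)

def lstm_last_state(x, W, U, b):
    """Run a single-layer LSTM over x (T, C); return the final hidden state."""
    H = U.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ W + h @ U + b                     # all four gates at once
        i, f = sig(z[:H]), sig(z[H:2 * H])           # input and forget gates
        g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])  # candidate and output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Hypothetical sizes: 20 time steps, 8 traffic features, 2 classes (normal/attack)
T, C_in, C_conv, H, n_classes, K = 20, 8, 16, 32, 2, 3
x = rng.normal(size=(T, C_in))                                  # one traffic window
feats = conv1d_relu(x, rng.normal(size=(K, C_in, C_conv)) * 0.1)  # CNN: local patterns
h = lstm_last_state(feats,
                    rng.normal(size=(C_conv, 4 * H)) * 0.1,
                    rng.normal(size=(H, 4 * H)) * 0.1,
                    np.zeros(4 * H))                            # LSTM: temporal context
logits = h @ (rng.normal(size=(H, n_classes)) * 0.1)            # dense head
probs = np.exp(logits) / np.exp(logits).sum()                   # softmax scores
```

The division of labor is the point of the hybrid: the convolution is cheap and position-invariant over short feature windows, while the LSTM carries longer-range ordering information that distinguishes, say, a burst of unauthorized DNP3 commands from isolated ones.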
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines (Christina Lin)
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
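As a hedged, broker-agnostic illustration of the kind of stateless per-record transform such a pipeline runs, the sketch below masks a PII field in a JSON record. The field names and masking rule are assumptions, and a production Redpanda data transform would be compiled to WASM (typically from Go or Rust) rather than written in Python; only the shape of the logic carries over:

```python
import json

def mask_email(email):
    """Keep the first character and the domain; mask the rest of the local part."""
    local, _, domain = email.partition("@")
    if not domain:
        return email  # not an email-shaped value; pass it through unchanged
    return local[0] + "*" * (len(local) - 1) + "@" + domain

def transform(record_bytes):
    """Stateless per-record transform: parse, mask the PII field, re-serialize."""
    record = json.loads(record_bytes)
    if "email" in record:
        record["email"] = mask_email(record["email"])
    return json.dumps(record).encode()

out = transform(b'{"user": 42, "email": "alice@example.com"}')
```

Because the transform holds no state between records, the broker can run it inline on each message with predictable latency and scale it by simply running more instances, which is exactly the low-latency, high-volume scenario the talk targets.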