This document describes a vision-based hand gesture recognition system using convolutional neural networks. The system captures images of hand gestures using a camera, pre-processes the images, and classifies the gestures using a CNN model. The CNN architecture includes convolutional layers, max pooling layers, dropout layers, and fully connected layers. The system was trained on a dataset of images representing 7 different hand gestures. Testing achieved over 90% accuracy in recognizing the gestures. This vision-based approach allows for natural human-computer interaction without physical devices.
1) The document presents a real-time static hand gesture recognition system for the Devanagari number system using two feature extraction techniques: Discrete Cosine Transform (DCT) and Edge Oriented Histogram (EOH).
2) The system captures an image using a webcam, performs pre-processing, extracts the region of interest, then extracts features using DCT or EOH before matching against a training database to recognize the gesture.
3) An experiment tested 20 images and found DCT achieved a higher recognition accuracy of 18 gestures compared to 15 for EOH.
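The DCT step above can be sketched in a few lines. This is a minimal, framework-free illustration rather than the paper's implementation: a separable 2-D DCT-II applied to a small grayscale block, keeping only the low-frequency top-left coefficients as the feature vector, which is the usual way DCT features are formed.

```python
import math

def dct_1d(x):
    """DCT-II of a 1-D sequence (unnormalised)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_2d(block):
    """Separable 2-D DCT: transform rows, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]
    # cols[j][i] holds coefficient (i, j); transpose back
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(cols[0]))]

def dct_features(block, k=2):
    """Keep the top-left k x k low-frequency coefficients as the feature vector."""
    coeffs = dct_2d(block)
    return [coeffs[i][j] for i in range(k) for j in range(k)]
```

A constant block concentrates all its energy in the DC coefficient, which is why a handful of low-frequency terms summarise a hand shape compactly.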
This document summarizes a survey paper on hand gesture recognition using color hex matrices and hidden Markov models. It discusses limitations of current vision-based and data glove-based recognition methods and proposes a solution using a webcam to capture hand images, convert them to RGB matrices, and recognize gestures by comparing changes in the matrices over time using hidden Markov models. The method aims to provide low-cost real-time hand gesture recognition using commonly available hardware and software.
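The HMM matching idea can be illustrated with the standard forward algorithm, which scores an observation sequence against a model; the gesture whose HMM assigns the highest likelihood wins. This is a generic textbook sketch, not the survey's code; the observation symbols would come from quantized changes in the color matrices.

```python
def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence | HMM).
    pi: initial state probs, A[i][j]: transition i->j, B[i][o]: emission prob."""
    n_states = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n_states)) * B[i][o]
                 for i in range(n_states)]
    return sum(alpha)
```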
Our paper on a homogeneous motion discovery oriented reference frame for High Efficiency Video Coding presents the idea of segmenting the current frame into cohesive motion regions made of blocks and then using these regions to form a motion-compensated prediction. This prediction, when used as an additional reference frame for the current frame, shows encouraging bit-rate savings over the standalone HEVC reference coder.
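Motion-compensated prediction of this kind rests on block matching. As an illustrative sketch (not the paper's region-based method, which groups blocks into cohesive motion regions), a full-search matcher that finds the motion vector minimising the sum of absolute differences (SAD) looks like this:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(cur, ref, y, x, size, search=2):
    """Full-search block matching: the (dy, dx) minimising SAD in ref."""
    target = block(cur, y, x, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - size and 0 <= rx <= len(ref[0]) - size:
                cost = sad(target, block(ref, ry, rx, size))
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]
```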
TRANSFER LEARNING WITH CONVOLUTIONAL NEURAL NETWORKS FOR IRIS RECOGNITION - ijaia
Iris is one of the common biometrics used for identity authentication. It has the potential to recognize persons with a high degree of assurance. Extracting effective features is the most important stage in the iris recognition system. Different features have been used to implement iris recognition systems, many of them based on hand-crafted features designed by biometrics experts. Following the success of deep learning in object recognition problems, the features learned by the Convolutional Neural Network (CNN) have gained great attention for use in the iris recognition system. In this paper, we propose an effective iris recognition system using transfer learning with Convolutional Neural Networks. The proposed system is implemented by fine-tuning a pre-trained convolutional neural network (VGG-16) for feature extraction and classification. The performance of the iris recognition system is tested on four public databases: IITD, CASIA-Iris-V1, CASIA-Iris-Thousand, and CASIA-Iris-Interval. The results show that the proposed system achieves a very high accuracy rate.
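The transfer-learning recipe (freeze a pretrained backbone, train only a new classifier head) can be illustrated without any deep-learning framework. In the sketch below, `frozen_backbone` is a hypothetical stand-in for the fixed VGG-16 convolutional layers, and a simple perceptron plays the role of the trainable head; only the head's weights are updated.

```python
def frozen_backbone(x):
    """Stand-in for a pretrained feature extractor (e.g. VGG-16 conv layers):
    its parameters are fixed; only the head below is trained."""
    return [x[0] + x[1], x[0] - x[1]]  # hypothetical fixed features

def train_head(samples, labels, epochs=20, lr=0.1):
    """Train a linear classification head on frozen features (perceptron rule)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y in {-1, +1}
            f = frozen_backbone(x)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1
            if pred != y:                        # update only the head
                w = [w[0] + lr * y * f[0], w[1] + lr * y * f[1]]
                b += lr * y
    return w, b

def predict(w, b, x):
    f = frozen_backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else -1
```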
Hand gesture recognition using support vector machine - theijes
1) The document describes a system for hand gesture recognition using support vector machines. It uses Canny's edge detection algorithm and histogram of gradients (HOG) for feature extraction from input images of hand gestures.
2) The system is trained using a dataset of predefined hand gestures. During testing, it compares the features extracted from new input images to those in the training dataset and classifies the gesture using an SVM classifier.
3) Experimental results found the system could accurately recognize 20 different static hand gestures in complex backgrounds. However, the authors note that future work could focus on real-time gesture recognition and reducing complexity for faster processing.
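The HOG feature used above can be sketched in simplified form: a single histogram of gradient orientations over the whole image, rather than the full block-normalised descriptor. This is an illustrative approximation, not the authors' implementation.

```python
import math

def orientation_histogram(img, bins=9):
    """Simplified HOG-style descriptor: histogram of gradient orientations
    (0-180 degrees), weighted by gradient magnitude, over the whole image."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[min(int(ang // (180.0 / bins)), bins - 1)] += mag
    return h
```

A horizontal intensity ramp produces purely horizontal gradients, so all the mass lands in the first orientation bin; the resulting vector is what an SVM would classify.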
IEEE EED2021 AI use cases in Computer Vision - SAMeh Zaghloul
AI Use Cases in Computer Vision
An introduction and overview of AI use cases in Computer Vision, answering a basic question: “How Machines See?”. It covers Neural Networks, object detection and recognition, content-based image retrieval, object tracking, image restoration, scene reconstruction, and Computer Vision tools, frameworks, pretrained models, and public train/test datasets.
With real-project examples of using Computer Vision for Egyptian Hieroglyph Alphabet recognition and Face Recognition/Matching, in addition to a hands-on interactive session on object/image tagging/annotation of videos/images to prepare a model-training dataset.
Final Year IEEE Project 2013-2014 - Digital Image Processing Project Title a... - elysiumtechnologies
This document provides information about Elysium Technologies Private Limited, an Indian technology company with over 13 years of experience. It has branches across multiple cities in India and provides services such as automated services, 24/7 help desk support, and ticketing and appointment systems. The company has over 250 developers and 20 researchers on staff.
Yoga has seen a marked rise in popularity over the past few years. Much literature has been published claiming that yoga is beneficial in improving overall lifestyle and health, especially in rehabilitation and mental health. Given the fast-paced lives that individuals lead, people often prefer to exercise or work out from the comfort of their homes, and with that a need for an instructor arises. We have therefore developed a self-assisted system that can detect and classify yoga asanas, which is discussed in depth in this paper. Especially now that the pandemic has taken over the world, it is not feasible to attend physical classes or have an instructor over; using Computer Vision, a computer-assisted system such as the one discussed comes in very handy. Technologies such as ml5.js, PoseNet and Neural Networks are used for human pose estimation and classification. The proposed system takes in a real-time video input, analyzes the pose of an individual, and classifies the poses into yoga asanas. It also displays the name of the detected yoga asana along with the confidence score.
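Pose classifiers of this kind typically reduce the estimated keypoints to joint angles before classification. A minimal sketch, assuming 2-D keypoints like those PoseNet returns (the classification step itself is a neural network in the paper):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. the elbow angle
    from the shoulder, elbow and wrist keypoints of a pose estimator."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
```

A simple asana matcher can threshold a handful of such angles against a template pose before the confidence score is reported.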
This document proposes an e-learning application called ELGR that uses gesture recognition to control a computer interface. Specifically, it aims to recognize finger movements and patterns to perform mouse operations like clicking, dragging, etc. The application would use color tracking rather than complex RGB-to-YCbCr conversion to identify gestures in real time. The document reviews literature on gesture recognition techniques, discusses relevant concepts in image processing and computer vision, and outlines the proposed seven-step algorithm for ELGR to provide a more natural user experience for e-learning.
Computer Based Human Gesture Recognition With Study Of Algorithms - IOSR Journals
This document discusses computer-based human gesture recognition algorithms. It begins with an introduction to gesture recognition and its uses in human-computer interaction. It then describes two main approaches to gesture recognition: appearance-based and 3D model-based. For appearance-based recognition, it discusses active appearance models and histogram-of-motion words. For 3D model-based recognition, it discusses using 3D image data to achieve invariance to viewpoint. It also discusses representing gestures as sequences of motion primitives to achieve viewpoint independence. Finally, it discusses skeletal algorithms that represent body pose as joint configurations and angles.
Vision Based Gesture Recognition Using Neural Networks Approaches: A Review - Waqas Tariq
The aim of gesture recognition research is to create systems that can easily identify gestures and use them for device control or to convey information. In this paper we discuss research done in the area of hand gesture recognition based on Artificial Neural Network approaches. Several hand gesture recognition studies that use Neural Networks are discussed, comparisons between these methods are presented, the advantages and drawbacks of the discussed methods are included, and the implementation tools for each method are presented as well.
Efficient mobilenet architecture_as_image_recognit - EL Mehdi RAOUHI
1. The document discusses the MobileNet architecture for image recognition on mobile and embedded devices with limited computing resources. MobileNet uses depthwise separable convolutions to reduce computational costs compared to traditional convolutional neural networks.
2. MobileNet splits regular convolutions into depthwise convolutions followed by 1x1 pointwise convolutions. This factorization significantly reduces computations and model size while maintaining accuracy.
3. The document evaluates MobileNet on the Caltech101 dataset using a mobile device. MobileNet achieved 92.4% accuracy while drawing only 2.1 Watts of power, demonstrating its efficiency for resource-constrained environments.
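The cost saving from depthwise separable convolutions can be checked directly from the multiplication counts. The layer sizes below are illustrative, not taken from the paper:

```python
def standard_conv_mults(dk, m, n, df):
    """Multiplications for a standard conv layer: dk x dk kernel,
    m input channels, n output channels, df x df output feature map."""
    return dk * dk * m * n * df * df

def separable_conv_mults(dk, m, n, df):
    """Depthwise conv (one dk x dk filter per input channel) + 1x1 pointwise."""
    return dk * dk * m * df * df + m * n * df * df

# Example layer: 3x3 kernels, 32 -> 64 channels, 56x56 feature map
std = standard_conv_mults(3, 32, 64, 56)
sep = separable_conv_mults(3, 32, 64, 56)
ratio = sep / std   # theoretical reduction: 1/n + 1/dk^2
```

For this layer the factorization needs roughly one eighth of the multiplications, matching the well-known 1/n + 1/dk² ratio from the MobileNet analysis.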
IRJET- Smart Ship Detection using Transfer Learning with ResNet - IRJET Journal
This document discusses using a convolutional neural network called ResNet with transfer learning to automatically detect ships in satellite images. ResNet is a deep residual network that can increase performance efficiency and reduce overfitting. The objective is to detect ships with high accuracy. Transfer learning is used to take a pre-trained ResNet-50 model and reuse it to classify satellite images into ship and non-ship categories, improving accuracy over traditional methods. Convolutional neural networks like ResNet are well-suited for this task as they can learn patterns directly from images without manual feature extraction.
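The residual connection that gives ResNet its name can be sketched in one function. This is a conceptual illustration, not the ResNet-50 implementation; `transform` stands in for the block's convolutional layers:

```python
def residual_block(x, transform):
    """Core ResNet idea: the block learns a residual F(x) and outputs F(x) + x
    through an identity shortcut, so a 'do nothing' layer is easy to learn."""
    fx = transform(x)
    return [xi + fi for xi, fi in zip(x, fx)]
```

If `transform` outputs zeros the block is an exact identity, which is why very deep stacks of such blocks remain trainable and why the pretrained weights transfer well.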
Human motion is fundamental to understanding behaviour. In spite of advances in single-image 3-dimensional pose and shape estimation, current video-based state-of-the-art methods fail to produce precise and natural motion sequences due to the lack of ground-truth 3-dimensional motion data for training. Recognition of human actions for automated video surveillance applications is an interesting but forbidding task, especially if the videos are captured in a poor lighting environment. This work presents a spatial-temporal feature-based correlation filter for the concurrent detection and identification of numerous human actions in a low-light environment. The performance of the proposed filter is evaluated through extensive experimentation on night-time action datasets. Experimental results demonstrate the effectiveness of the merging schemes for robust action recognition in a significantly low-light environment.
A presentation on Image Recognition, the basic definition and working of Image Recognition, Edge Detection, Neural Networks, use of Convolutional Neural Network in Image Recognition, Applications, Future Scope and Conclusion
This document summarizes a presentation on image processing. It introduces image processing and discusses acquiring images in digital formats. It covers various aspects of image processing like enhancement, restoration, and geometry transformations. Image processing techniques discussed include histograms, compression, analysis, and computer-aided detection. Color imaging and different image types are also introduced. The document concludes with mentioning some common image processing software.
CONVOLUTIONAL NEURAL NETWORK BASED FEATURE EXTRACTION FOR IRIS RECOGNITION - ijcsit
Iris is a powerful tool for reliable human identification. It has the potential to identify individuals with a high degree of assurance. Extracting good features is the most significant step in the iris recognition system. In the past, different features have been used to implement iris recognition systems. Most of them depend on hand-crafted features designed by biometrics specialists. Due to the success of deep learning in computer vision problems, the features learned by the Convolutional Neural Network (CNN) have gained much attention to be applied for iris recognition systems. In this paper, we evaluate the learned features extracted from a pre-trained Convolutional Neural Network (Alex-Net model), followed by a multi-class Support Vector Machine (SVM) algorithm to perform classification. The performance of the proposed system is investigated when extracting features from the segmented iris image and from the normalized iris image. The proposed iris recognition system is tested on four public datasets: IITD, CASIA-Iris-V1, CASIA-Iris-Thousand, and CASIA-Iris-V3 Interval. The system achieved excellent results with a very high accuracy rate.
IRJET- Hand Sign Recognition using Convolutional Neural Network - IRJET Journal
1) The document presents a study on using a convolutional neural network (CNN) to recognize American Sign Language (ASL) alphabets captured in real-time via a webcam.
2) The researchers trained a CNN model on 1600 images of 5 ASL alphabets (E, F, I, L, V) and tested it on 320 unlabeled images, achieving a validation accuracy of 74.8%.
3) While the model showed potential, the researchers acknowledged limitations like overfitting due to the small dataset and noted areas for improvement like recognizing a broader range of ASL letters and full sentences.
International Journal of Computational Engineering Research (IJCER) - ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
An Approach for Object and Scene Detection for Blind Peoples Using Vocal Vision. - IJERA Editor
This system helps blind people navigate without the help of a third person, so a blind person can work independently. The system is implemented on an Android device with both object detection and scene detection; after detection, text-to-speech conversion is performed so the user receives a spoken message from the device through connected headphones. The project helps blind people understand images, which are converted to sound with the help of a webcam. Images are captured in front of the blind person and processed by algorithms that enhance the image data. The hardware component has its own database; the processed image is compared against this database, and the result of processing and comparison is converted into speech signals. The headphones then guide the blind person.
This document provides an overview of the syllabus for the course ECS-702 Digital Image Processing. It covers 5 units: Introduction and Fundamentals, Image Enhancement in Spatial and Frequency Domains, Image Restoration, Morphological Image Processing, and Image Segmentation. The introduction discusses key concepts like the components of an image processing system, elements of visual perception, and the fundamental steps of image acquisition, enhancement, and restoration. The syllabus then delves into specific techniques in each unit such as spatial filters, Fourier transforms, noise models, morphological operations, and segmentation approaches.
IRJET- Heuristic Approach for Low Light Image Enhancement using Deep Learning - IRJET Journal
This document discusses a deep learning approach for enhancing low light images. It begins by describing the challenges of low light imaging such as low signal-to-noise ratio and increased noise. It then reviews existing image enhancement and denoising techniques that have limitations under extreme low light conditions. The proposed approach uses a convolutional neural network trained on a dataset of low and high exposure image pairs to learn an end-to-end image processing pipeline directly from raw sensor data. This aims to better handle noise and color biases compared to traditional pipelines. The goals are to enhance short exposure images while suppressing noise and applying proper color transformations.
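For contrast, a classical non-learned enhancement step such as gamma correction (one of the traditional-pipeline operations the learned approach aims to improve on) is only a few lines:

```python
def gamma_correct(pixel, gamma=2.2):
    """Classical gamma correction for an 8-bit pixel value:
    gamma > 1 brightens shadows (out = 255 * (in/255)^(1/gamma))."""
    return round(255.0 * (pixel / 255.0) ** (1.0 / gamma))

def enhance(img, gamma=2.2):
    """Apply the same curve to every pixel of a grayscale image."""
    return [[gamma_correct(p, gamma) for p in row] for row in img]
```

Such a fixed curve lifts dark pixels but also amplifies noise uniformly, which is precisely the limitation that motivates learning the pipeline end to end from raw sensor data.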
Device for text to speech production and to braille script - IAEME Publication
The document describes a proposed system to convert text to both speech and Braille script for blind or deaf individuals. The system would take an image of text as input, perform image processing techniques like enhancement, filtering, and edge detection, then segment and recognize characters. The recognized text would be converted to speech output using text-to-speech synthesis or to Braille script by mapping characters to Braille codes and outputting to a tactile display. The goal is to make learning materials more accessible for blind or deaf individuals by converting textbook images to audio or Braille formats.
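The character-to-Braille mapping step can be sketched with Unicode Braille patterns, where the base cell U+2800 plus the dot bits gives the glyph. This covers only the letters a-j for illustration and is not the paper's mapping table:

```python
# Braille letters a-j use dots 1-5; in the Unicode Braille Patterns block
# (starting at U+2800), dot n sets bit (n - 1) of the code point offset.
_DOTS = {'a': [1], 'b': [1, 2], 'c': [1, 4], 'd': [1, 4, 5], 'e': [1, 5],
         'f': [1, 2, 4], 'g': [1, 2, 4, 5], 'h': [1, 2, 5],
         'i': [2, 4], 'j': [2, 4, 5]}

def to_braille(text):
    """Map characters to Unicode Braille cells (subset a-j for illustration)."""
    out = []
    for ch in text.lower():
        dots = _DOTS.get(ch)
        if dots is not None:
            out.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
        else:
            out.append(ch)  # pass through anything unmapped
    return ''.join(out)
```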
Development of 3D convolutional neural network to recognize human activities ... - journalBEEI
This document describes the development of a 3D convolutional neural network (CNN) model to recognize human activities using moderate computation capabilities. The model is trained on the KTH dataset, which contains activities like walking, running, jogging, handwaving, handclapping, and boxing. The proposed model uses 3D CNN layers and max pooling layers to extract both spatial and temporal features from video frames. Testing achieved an accuracy of 93.33% for activity recognition. The number of model parameters and operations are also calculated to show the model can perform human activity recognition with reasonable computational requirements suitable for devices with moderate capabilities.
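The core operation, a 3-D convolution that slides a kernel over time as well as space, can be written out directly. A naive pure-Python sketch (real models use optimized library kernels):

```python
def conv3d(volume, kernel):
    """Valid-mode 3-D convolution over a (time, height, width) volume,
    capturing spatial and temporal structure in one operation."""
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for t in range(T - kt + 1):
        plane = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                acc = 0.0
                for dt in range(kt):
                    for dy in range(kh):
                        for dx in range(kw):
                            acc += volume[t + dt][y + dy][x + dx] * kernel[dt][dy][dx]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out
```

Because the kernel spans several frames, the learned filters respond to motion patterns (walking vs. boxing), not just static appearance.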
Comprehensive Study of the Work Done In Image Processing and Compression Tech... - IRJET Journal
This document summarizes research on image processing techniques to address redundancy. It discusses how overlapping pixels when merging images can cause redundancy, taking up extra space. It reviews papers analyzing redundancy problems from compression techniques. Lossy techniques like discrete cosine transform and lossless techniques like run length encoding and Huffman encoding are described for compressing images to reduce redundancy. The document also discusses using compression to eliminate irrelevant information from images.
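Run-length encoding, the lossless technique mentioned above, is simple enough to show in full: runs of identical pixel values collapse to (value, count) pairs, which is exactly how redundant flat regions stop taking up extra space.

```python
def rle_encode(data):
    """Run-length encoding: collapse runs of equal values into [value, count]."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return runs

def rle_decode(runs):
    """Exact inverse: expand each [value, count] back into a run."""
    return [v for v, count in runs for _ in range(count)]
```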
Gesture Recognition System using Computer Vision - IRJET Journal
This document presents a gesture recognition system using computer vision and convolutional neural networks. It discusses developing classifiers to recognize hand gestures and facial expressions. A dataset of 87,000 images is used to train models to classify 26 letters of the American Sign Language alphabet, as well as additional classes for space, delete and nothing. The models are trained using transfer learning with MobileNet, achieving validation accuracies of over 90% for hand gesture classification and implementing a system that recognizes and translates gestures in real-time. It concludes the paper developed robust models for American Sign Language translation and facial expression recognition using CNNs.
IRJET- Survey Paper on Vision based Hand Gesture Recognition - IRJET Journal
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
Sign Language Recognition using Machine Learning - IRJET Journal
This document describes a study on sign language recognition using machine learning. The researchers developed a convolutional neural network model to detect hand movements and classify them as letters of the alphabet from sign language. They used a dataset of images of American Sign Language letters and trained their CNN model on this data. Their model was able to accurately recognize the letters in real-time using input from a webcam. The document also discusses using background subtraction and other techniques to improve the model's performance at sign language recognition.
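The background subtraction mentioned above can be sketched as simple frame differencing against a background model; real systems typically use adaptive models, but the thresholding idea is the same:

```python
def foreground_mask(frame, background, threshold=25):
    """Simple background subtraction: flag pixels whose absolute difference
    from the background model exceeds a threshold."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

The binary mask isolates the signing hand from a static scene before the cropped region is passed to the CNN.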
IRJET- Sign Language and Gesture Recognition for Deaf and Dumb People - IRJET Journal
This document describes a system for sign language and gesture recognition to help deaf and dumb people communicate. The proposed system uses image processing techniques like Histogram of Oriented Gradients (HOG) and an Artificial Neural Network (ANN) to recognize hand gestures from images taken by a webcam without the need for sensors. The system is trained on a dataset of sign language images and can recognize gestures and output corresponding voice or text. This allows for two-way communication between deaf/mute and normal individuals by converting signs to speech and text. The key advantages over previous sensor-based systems are that it does not require any hardware to be worn and can recognize a larger vocabulary of signs and words.
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
This document proposes an e-learning application called ELGR that uses gesture recognition to control a computer interface. Specifically, it aims to recognize finger movements and patterns to perform mouse operations like clicking, dragging, etc. The application would use color tracking rather than complex RGB-to-YCbCr conversion to identify gestures in real time. The document reviews literature on gesture recognition techniques, discusses relevant concepts in image processing and computer vision, and outlines the proposed seven-step algorithm for ELGR to provide a more natural user experience for e-learning.
Computer Based Human Gesture Recognition With Study Of AlgorithmsIOSR Journals
This document discusses computer-based human gesture recognition algorithms. It begins with an introduction to gesture recognition and its uses in human-computer interaction. It then describes two main approaches to gesture recognition: appearance-based and 3D model-based. For appearance-based recognition, it discusses active appearance models and histogram-of-motion words. For 3D model-based recognition, it discusses using 3D image data to achieve invariance to viewpoint. It also discusses representing gestures as sequences of motion primitives to achieve viewpoint independence. Finally, it discusses skeletal algorithms that represent body pose as joint configurations and angles.
Vision Based Gesture Recognition Using Neural Networks Approaches: A ReviewWaqas Tariq
The aim of gesture recognition researches is to create system that easily identifies gestures, and use them for device control, or convey in formations. In this paper we are discussing researches done in the area of hand gesture recognition based on Artificial Neural Networks approaches. Several hand gesture recognition researches that use Neural Networks are discussed in this paper, comparisons between these methods were presented, advantages and drawbacks of the discussed methods also included, and implementation tools for each method were presented as well.
Efficient mobilenet architecture_as_image_recognitEL Mehdi RAOUHI
1. The document discusses the MobileNet architecture for image recognition on mobile and embedded devices with limited computing resources. MobileNet uses depthwise separable convolutions to reduce computational costs compared to traditional convolutional neural networks.
2. MobileNet splits regular convolutions into depthwise convolutions followed by 1x1 pointwise convolutions. This factorization significantly reduces computations and model size while maintaining accuracy.
3. The document evaluates MobileNet on the Caltech101 dataset using a mobile device. MobileNet achieved 92.4% accuracy while drawing only 2.1 Watts of power, demonstrating its efficiency for resource-constrained environments.
IRJET- Smart Ship Detection using Transfer Learning with ResNetIRJET Journal
This document discusses using a convolutional neural network called ResNet with transfer learning to automatically detect ships in satellite images. ResNet is a deep residual network that can increase performance efficiency and reduce overfitting. The objective is to detect ships with high accuracy. Transfer learning is used to take a pre-trained ResNet-50 model and reuse it to classify satellite images into ship and non-ship categories, improving accuracy over traditional methods. Convolutional neural networks like ResNet are well-suited for this task as they can learn patterns directly from images without manual feature extraction.
Human motion is fundamental to understanding behaviour. In spite of advancement on single image 3 Dimensional pose and estimation of shapes, current video-based state of the art methods unsuccessful to produce precise and motion of natural sequences due to inefficiency of ground-truth 3 Dimensional motion data for training. Recognition of Human action for programmed video surveillance applications is an interesting but forbidding task especially if the videos are captured in an unpleasant lighting environment. It is a Spatial-temporal feature-based correlation filter, for concurrent observation and identification of numerous human actions in a little-light environment. Estimated the presentation of a proposed filter with immense experimentation on night-time action datasets. Tentative results demonstrate the potency of the merging schemes for vigorous action recognition in a significantly low light environment.
A presentation on Image Recognition, the basic definition and working of Image Recognition, Edge Detection, Neural Networks, use of Convolutional Neural Network in Image Recognition, Applications, Future Scope and Conclusion
This document summarizes a presentation on image processing. It introduces image processing and discusses acquiring images in digital formats. It covers various aspects of image processing like enhancement, restoration, and geometry transformations. Image processing techniques discussed include histograms, compression, analysis, and computer-aided detection. Color imaging and different image types are also introduced. The document concludes with mentioning some common image processing software.
CONVOLUTIONAL NEURAL NETWORK BASED FEATURE EXTRACTION FOR IRIS RECOGNITION (ijcsit)
Iris is a powerful tool for reliable human identification, with the potential to identify individuals with a high degree of assurance. Extracting good features is the most significant step in an iris recognition system. In the past, different features have been used to implement iris recognition systems; most of them depend on hand-crafted features designed by biometrics specialists. Owing to the success of deep learning in computer vision problems, features learned by Convolutional Neural Networks (CNNs) have gained much attention for iris recognition. In this paper, we evaluate the features extracted from a pre-trained Convolutional Neural Network (the Alex-Net model), followed by a multi-class Support Vector Machine (SVM) for classification. The performance of the proposed system is investigated when extracting features from the segmented iris image and from the normalized iris image. The proposed iris recognition system is tested on four public datasets: IITD, CASIA-Iris-V1, CASIA-Iris-Thousand, and CASIA-Iris-V3 Interval. The system achieved excellent results with a very high accuracy rate.
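The two-stage pipeline described above (a fixed feature extractor followed by a separate multi-class classifier) can be sketched on synthetic data. For brevity, a random frozen projection stands in for the Alex-Net features and a nearest-centroid classifier stands in for the multi-class SVM; all names and data here are illustrative assumptions:

```python
import numpy as np

def extract_features(image, projection):
    """Stand-in for the frozen Alex-Net extractor: a fixed linear map
    followed by a ReLU. `projection` plays the role of pre-trained weights."""
    return np.maximum(projection @ image.ravel(), 0.0)

def fit_centroids(features, labels):
    """'Training' the second stage: one centroid per subject
    (a nearest-centroid substitute for the paper's multi-class SVM)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feature, centroids):
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

rng = np.random.default_rng(0)
projection = rng.normal(size=(16, 64))            # frozen "network" weights
offsets = np.repeat(np.arange(4) * 5, 5)          # 4 subjects, 5 images each
images = rng.normal(size=(20, 8, 8)) + offsets[:, None, None]
labels = np.repeat(np.arange(4), 5)

feats = np.array([extract_features(im, projection) for im in images])
centroids = fit_centroids(feats, labels)
preds = np.array([predict(f, centroids) for f in feats])
print((preds == labels).mean())
```

The point of the design is that the expensive part (the CNN) is trained once on a large generic dataset, while only the lightweight classifier is fit to the iris data.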
IRJET- Hand Sign Recognition using Convolutional Neural Network (IRJET Journal)
1) The document presents a study on using a convolutional neural network (CNN) to recognize American Sign Language (ASL) alphabets captured in real-time via a webcam.
2) The researchers trained a CNN model on 1600 images of 5 ASL alphabets (E, F, I, L, V) and tested it on 320 unlabeled images, achieving a validation accuracy of 74.8%.
3) While the model showed potential, the researchers acknowledged limitations like overfitting due to the small dataset and noted areas for improvement like recognizing a broader range of ASL letters and full sentences.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international, English-language online journal published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
An Approach for Object and Scene Detection for Blind Peoples Using Vocal Vision (IJERA Editor)
This system helps blind people navigate without the assistance of a third person, so that a blind person can carry out tasks independently. The system is implemented on an Android device and performs both object detection and scene detection; after detection, text-to-speech conversion delivers a spoken message to the user through headphones connected to the device. The project helps blind people understand images by converting them to sound with the help of a webcam. Images are captured in front of the blind person and processed by algorithms that enhance the image data. The hardware component maintains its own database, and each processed image is compared against it. The result of processing and comparison is converted into speech signals, and the headphones guide the blind person.
This document provides an overview of the syllabus for the course ECS-702 Digital Image Processing. It covers 5 units: Introduction and Fundamentals, Image Enhancement in Spatial and Frequency Domains, Image Restoration, Morphological Image Processing, and Image Segmentation. The introduction discusses key concepts like the components of an image processing system, elements of visual perception, and the fundamental steps of image acquisition, enhancement, and restoration. The syllabus then delves into specific techniques in each unit such as spatial filters, Fourier transforms, noise models, morphological operations, and segmentation approaches.
IRJET- Heuristic Approach for Low Light Image Enhancement using Deep Learning (IRJET Journal)
This document discusses a deep learning approach for enhancing low light images. It begins by describing the challenges of low light imaging such as low signal-to-noise ratio and increased noise. It then reviews existing image enhancement and denoising techniques that have limitations under extreme low light conditions. The proposed approach uses a convolutional neural network trained on a dataset of low and high exposure image pairs to learn an end-to-end image processing pipeline directly from raw sensor data. This aims to better handle noise and color biases compared to traditional pipelines. The goals are to enhance short exposure images while suppressing noise and applying proper color transformations.
Device for text to speech production and to braille script (IAEME Publication)
The document describes a proposed system to convert text to both speech and Braille script for blind or deaf individuals. The system would take an image of text as input, perform image processing techniques like enhancement, filtering, and edge detection, then segment and recognize characters. The recognized text would be converted to speech output using text-to-speech synthesis or to Braille script by mapping characters to Braille codes and outputting to a tactile display. The goal is to make learning materials more accessible for blind or deaf individuals by converting textbook images to audio or Braille formats.
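The character-to-Braille mapping stage can be sketched using the Unicode Braille block, which encodes each 6-dot cell as a bit pattern offset from U+2800. The dot patterns below are standard Grade-1 English Braille (letters a-j only), but the function itself is an illustrative assumption, not the paper's implementation:

```python
# Dot numbers raised for each letter (Grade-1 English Braille, a-j shown).
BRAILLE_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4), "j": (2, 4, 5),
}

def to_braille(text):
    """Map recognized characters to Unicode Braille cells: dot n sets
    bit (n - 1) of the offset from U+2800."""
    cells = []
    for ch in text.lower():
        if ch == " ":
            cells.append("\u2800")  # blank cell
        else:
            bits = sum(1 << (d - 1) for d in BRAILLE_DOTS[ch])
            cells.append(chr(0x2800 + bits))
    return "".join(cells)

print(to_braille("bad"))  # ⠃⠁⠙
```

A physical tactile display would consume these cell codes instead of rendering Unicode glyphs, but the dot bookkeeping is the same.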
Development of 3D convolutional neural network to recognize human activities ... (journalBEEI)
This document describes the development of a 3D convolutional neural network (CNN) model to recognize human activities using moderate computation capabilities. The model is trained on the KTH dataset, which contains activities like walking, running, jogging, handwaving, handclapping, and boxing. The proposed model uses 3D CNN layers and max pooling layers to extract both spatial and temporal features from video frames. Testing achieved an accuracy of 93.33% for activity recognition. The number of model parameters and operations are also calculated to show the model can perform human activity recognition with reasonable computational requirements suitable for devices with moderate capabilities.
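The parameter and shape bookkeeping behind the paper's computational-cost claim can be sketched with two small helper functions. The layer sizes below are illustrative, not the paper's exact configuration:

```python
def conv3d_output_shape(frames, h, w, kt, kh, kw, stride=1):
    """Output shape of a valid (no-padding) 3D convolution over a
    (frames, h, w) video clip, sliding in time as well as space."""
    return ((frames - kt) // stride + 1,
            (h - kh) // stride + 1,
            (w - kw) // stride + 1)

def conv3d_params(in_ch, out_ch, kt, kh, kw):
    """Weight count plus one bias per output channel for a 3D conv layer."""
    return out_ch * (in_ch * kt * kh * kw + 1)

# e.g. a 3x3x3 kernel over a 16-frame 60x80 grayscale clip
print(conv3d_output_shape(16, 60, 80, 3, 3, 3))  # (14, 58, 78)
print(conv3d_params(1, 32, 3, 3, 3))             # 896
```

Counting parameters and operations this way is how one verifies that a model fits devices with only moderate compute.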
Comprehensive Study of the Work Done In Image Processing and Compression Techniques (IRJET Journal)
This document summarizes research on image processing techniques to address redundancy. It discusses how overlapping pixels when merging images can cause redundancy, taking up extra space. It reviews papers analyzing redundancy problems from compression techniques. Lossy techniques like discrete cosine transform and lossless techniques like run length encoding and Huffman encoding are described for compressing images to reduce redundancy. The document also discusses using compression to eliminate irrelevant information from images.
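Run-length encoding, one of the lossless techniques reviewed, can be shown in a few lines. This sketch operates on a single row of pixel values:

```python
def rle_encode(data):
    """Run-length encoding: collapse runs of identical values into
    (value, count) pairs, a simple lossless way to remove redundancy."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for value in data[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode, demonstrating losslessness."""
    return [value for value, count in runs for _ in range(count)]

row = [255, 255, 255, 0, 0, 255]
encoded = rle_encode(row)
print(encoded)                      # [(255, 3), (0, 2), (255, 1)]
assert rle_decode(encoded) == row   # lossless round trip
```

RLE only pays off when long runs exist (e.g. binary or scanned document images); for natural images, transform-based lossy methods like the DCT give far higher ratios.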
Gesture Recognition System using Computer Vision (IRJET Journal)
This document presents a gesture recognition system using computer vision and convolutional neural networks. It discusses developing classifiers to recognize hand gestures and facial expressions. A dataset of 87,000 images is used to train models to classify the 26 letters of the American Sign Language alphabet, plus additional classes for space, delete, and nothing. The models are trained using transfer learning with MobileNet, achieving validation accuracies of over 90% for hand gesture classification, and a system is implemented that recognizes and translates gestures in real time. The paper concludes that it developed robust models for American Sign Language translation and facial expression recognition using CNNs.
IRJET- Survey Paper on Vision based Hand Gesture Recognition (IRJET Journal)
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
Sign Language Recognition using Machine Learning (IRJET Journal)
This document describes a study on sign language recognition using machine learning. The researchers developed a convolutional neural network model to detect hand movements and classify them as letters of the alphabet from sign language. They used a dataset of images of American Sign Language letters and trained their CNN model on this data. Their model was able to accurately recognize the letters in real-time using input from a webcam. The document also discusses using background subtraction and other techniques to improve the model's performance at sign language recognition.
IRJET- Sign Language and Gesture Recognition for Deaf and Dumb People (IRJET Journal)
This document describes a system for sign language and gesture recognition to help deaf and dumb people communicate. The proposed system uses image processing techniques like Histogram of Oriented Gradients (HOG) and an Artificial Neural Network (ANN) to recognize hand gestures from images taken by a webcam without the need for sensors. The system is trained on a dataset of sign language images and can recognize gestures and output corresponding voice or text. This allows for two-way communication between deaf/mute and normal individuals by converting signs to speech and text. The key advantages over previous sensor-based systems are that it does not require any hardware to be worn and can recognize a larger vocabulary of signs and words.
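The Histogram of Oriented Gradients idea used above can be illustrated with a simplified whole-image orientation histogram. Real HOG adds cell/block structure and contrast normalization on top of this, so the sketch below shows only the core computation:

```python
import math

def orientation_histogram(image, bins=9):
    """Simplified HOG-style descriptor: a histogram of gradient
    orientations over the whole image, weighted by gradient magnitude.
    `image` is a 2D list of intensities."""
    height, width = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            gx = image[y][x + 1] - image[y][x - 1]   # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    return hist

# A vertical edge produces purely horizontal gradients (orientation ~0°),
# so all the histogram mass lands in the first bin.
img = [[0, 0, 9, 9]] * 4
h = orientation_histogram(img)
print(h.index(max(h)))  # 0
```

The resulting fixed-length histogram is what gets fed to the ANN classifier, making the recognition robust to small changes in hand appearance.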
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
IRJET- Object Detection and Recognition for Blind Assistance (IRJET Journal)
1. The document proposes a system using object and color recognition and convolutional neural networks to enhance the capabilities of visually impaired people.
2. The system uses a camera mounted on glasses to capture images which are then preprocessed, compressed, and used to train a classifier model to recognize common objects.
3. The proposed hardware implementation uses a Raspberry Pi for its small size and open source software support, including TensorFlow for training convolutional neural network models.
VIDEO BASED SIGN LANGUAGE RECOGNITION USING CNN-LSTM (IRJET Journal)
This document presents a proposed method for video-based sign language recognition using convolutional neural networks (CNN) and long short-term memory (LSTM). The method uses CNN to extract spatial features from video frames of sign language and LSTM to analyze the temporal characteristics of the frames to recognize the sign. Color segmentation is used to isolate the hands from video frames by detecting colored gloves worn by the signer. CNN is trained on spatial features from frames to classify signs, and LSTM is used to analyze the sequential features from CNN to recognize signs in full videos. The proposed method achieved 94% accuracy on sign recognition in testing.
SIGN LANGUAGE INTERFACE SYSTEM FOR HEARING IMPAIRED PEOPLE (IRJET Journal)
The document describes a proposed sign language interface system for hearing impaired people. The system aims to use machine learning algorithms like convolutional neural networks to classify hand gestures captured by a webcam into corresponding letters or words. The system would preprocess the images, extract features, then use a trained CNN model to predict the sign and output it as text and speech for better understanding by users. The goal is to help bridge communication between deaf/mute and normal people without requiring specialized gloves or sensors.
IRJET- Convenience Improvement for Graphical Interface using Gesture Detection (IRJET Journal)
This document discusses a proposed system for improving graphical user interfaces using hand gesture detection. The system aims to allow users to access information from the internet without using input devices like a mouse or keyboard. It uses a webcam to capture images of hand gestures, which are then processed using techniques like skin color segmentation, principal component analysis, and template matching to recognize the gestures. The recognized gestures can then be linked to retrieving specific data from pre-defined URLs. An evaluation of the system found it had an accuracy rate of 90% in real-time testing for retrieving data from 10 different URLs using 10 unique hand gestures. The proposed system provides a more convenient interface compared to traditional mouse and keyboard methods.
Human Action Recognition using Contour History Images and Neural Networks Classifier (IRJET Journal)
This document proposes a new method for human action recognition using contour history images extracted from silhouettes, tracking of the body's center movement, and the relative dimensions of the bounding box containing each contour history image. Features are extracted and reduced using three different methods: dividing the contour history images into rectangles, a shallow autoencoder neural network, and a deep autoencoder neural network. The reduced features are classified using a neural network classifier. The proposed method achieved a recognition rate of 98.9% on a standard human action dataset, demonstrating its potential for real-time human action recognition applications.
Object and Currency Detection for the Visually Impaired (IRJET Journal)
The document describes a proposed system to detect objects and currency using computer vision and deep learning to help visually impaired people. The system uses two neural networks - one based on MobileNet trained on COCO dataset for object and obstacle detection, and another MobileNet trained on a currency dataset using transfer learning for currency detection. When the mobile app is opened, it will use the camera to detect objects and currency in real-time, and provide voice feedback to the user. The goal is to help visually impaired people navigate surroundings and identify currency independently.
IRJET- Car Defect Detection using Machine Learning for Insurance (IRJET Journal)
This document discusses using machine learning and convolutional neural networks to detect defects in cars from images for insurance purposes. The proposed system would use transfer learning with pre-trained models to classify car damage in images. A larger dataset of car damage images with detailed labels is needed to train more accurate models. The system architecture includes preprocessing techniques like color conversion, feature extraction using CNN models, and classifying damage types. Preliminary results show 99% accuracy can be achieved through transfer learning, but a larger dataset is required to develop more robust models for car defect detection.
MOUSE SIMULATION USING NON MAXIMUM SUPPRESSION (IRJET Journal)
This document describes a system to simulate mouse functions using hand gestures detected by a webcam. The system uses Single Shot Multi-box Detection (SSD) and Non-Maximum Suppression (NMS) algorithms to accurately detect hand positions from images. SSD uses anchor boxes to detect objects within a divided image grid, while NMS eliminates overlapping bounding boxes to identify distinct objects. The system maps detected hand landmarks to mouse cursor positions and can perform functions like dragging and zooming. It aims to enable human-computer interaction without additional hardware by interpreting gestures captured from a webcam.
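The NMS step described above (keep the best box, discard heavily overlapping ones, repeat) is straightforward to sketch in plain Python. The box coordinates and scores below are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop any remaining box
    overlapping it beyond the threshold, then repeat on the rest."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

# Two near-duplicate detections of one hand plus a distinct detection:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

This is exactly why SSD's many anchor-box detections collapse to one bounding box per hand before the landmark-to-cursor mapping runs.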
Sign Language Detection using Action Recognition (IRJET Journal)
This document presents a sign language detection system using action recognition. It aims to enhance current systems' performance in terms of response time and accuracy. The proposed system uses machine learning algorithms like LSTM neural networks trained on data sets to classify sign language gestures in real-time video. It segments hand regions, extracts features, and recognizes signs with 98% accuracy for 26 gestures. The system is intended to help deaf individuals communicate through translating signs to text in real-world applications.
IRJET- Automatic Data Collection from Forms using Optical Character Recognition (IRJET Journal)
1) The document presents an automated system for collecting user data from paper forms using optical character recognition (OCR).
2) It involves scanning paper forms, segmenting the user input fields, performing OCR on the input text using a convolutional recurrent neural network model, and updating the data to a database.
3) This system aims to reduce the time and effort required to manually collect and process form data compared to current methods.
From Pixels to Understanding: Deep Learning's Impact on Image Classification ... (IRJET Journal)
This document discusses how deep learning has significantly improved image classification and recognition abilities compared to traditional machine learning methods. It provides an overview of different deep learning network structures used for these tasks, including deep belief networks, convolutional neural networks, and recurrent neural networks. Deep learning algorithms are able to extract abstract feature representations from unlabeled image data using multi-layer neural networks, leading to more accurate image categorization than earlier approaches.
IRJET- Face Recognition using Machine Learning (IRJET Journal)
This document presents a modified CNN architecture for face recognition that adds two batch normalization operations to improve performance. The CNN extracts facial features using convolutional layers and max pooling, and classifies faces using a softmax classifier. The proposed approach was tested on a face database containing images of 4 individuals with varying lighting conditions. Experimental results showed the modified CNN with batch normalization achieved better recognition results than traditional methods.
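The batch normalization operation the paper inserts can be sketched in NumPy; gamma and beta are the learnable scale and shift, left at their defaults here:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over one batch: standardize each feature
    across the batch dimension, then apply a learnable scale (gamma)
    and shift (beta). eps guards against division by zero."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Features on wildly different scales are brought to zero mean, unit
# variance, which stabilizes and speeds up training.
x = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 600.0]])
y = batch_norm(x)
print(y.mean(axis=0).round(6))
```

Normalizing activations this way reduces sensitivity to lighting-induced scale differences, which is consistent with the improvement the paper reports under varying lighting conditions.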
IRJET- Automated Student’s Attendance Management using Convolutional Neural Networks (IRJET Journal)
This document describes a proposed system to automate student attendance management using convolutional neural networks and face recognition. The system would take attendance automatically by detecting faces in the classroom and comparing them to a database of student faces. This would make the attendance process more efficient than current manual methods like calling roll numbers or paper sign-ins. The system would use a CNN algorithm and face detection/recognition techniques like PCA to detect and identify student faces during lectures and automatically update attendance records.
This document discusses various techniques for image segmentation. It begins with an abstract discussing image segmentation and its importance in image processing. It then discusses different types of image segmentation like semantic and instance segmentation.
The document then discusses the implementation of different image segmentation techniques. It implements region-based segmentation using Mask R-CNN, and thresholding-based segmentation using simple thresholding and Otsu's automatic thresholding. It also implements clustering-based segmentation using K-means and Fuzzy C-means, and edge-based segmentation using gradient-based operators such as Sobel and Prewitt and Gaussian-based operators such as the Laplacian and Canny edge detectors. Code snippets and output images are provided.
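Otsu's automatic thresholding, one of the techniques implemented, can be sketched in plain Python; the pixel data below is illustrative:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold that maximizes between-class
    variance (equivalently, minimizes within-class variance)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(levels):
        w_b += hist[t]                              # background pixel count
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b                           # background mean
        m_f = (total_sum - sum_b) / (total - w_b)   # foreground mean
        var_between = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: the threshold falls between them.
pixels = [10] * 50 + [12] * 50 + [200] * 50 + [210] * 50
t = otsu_threshold(pixels)
print(10 <= t < 200)  # True
```

Because it needs only the histogram, Otsu's method is automatic and cheap, which is why it is the standard baseline before heavier approaches like clustering or Mask R-CNN.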
IRJET- Significant Neural Networks for Classification of Product Images (IRJET Journal)
This document presents research on using neural networks for product image classification. Specifically, it proposes, implements, and evaluates a deep neural network architecture for classifying non-food e-commerce items into one of 5270 classes. The neural network architecture achieves a top-1 accuracy of 0.61061 on the classification task. The research finds that networks trained on specific domains, such as books, can be effectively transferred to similar datasets in that domain and perform better than networks pre-trained on a more general dataset like ImageNet.
Similar to IRJET- A Vision based Hand Gesture Recognition System using Convolutional Neural Networks (20)
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
1) The document discusses the Sungal Tunnel project in Jammu and Kashmir, India, which is being constructed using the New Austrian Tunneling Method (NATM).
2) NATM involves continuous monitoring during construction to adapt to changing ground conditions, and makes extensive use of shotcrete for temporary tunnel support.
3) The methodology section outlines the systematic geotechnical design process for tunnels according to Austrian guidelines, and describes the various steps of NATM tunnel construction including initial and secondary tunnel support.
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
This study examines the effect of response reduction factors (R factors) on reinforced concrete (RC) framed structures through nonlinear dynamic analysis. Three RC frame models with varying heights (4, 8, and 12 stories) were analyzed in ETABS software under different R factors ranging from 1 to 5. The results showed that displacement increased as the R factor decreased, indicating less linear behavior for lower R factors. Drift also decreased proportionally with increasing R factors from 1 to 5. Shear forces in the frames decreased with higher R factors. In general, R factors of 3 to 5 produced more satisfactory performance with less displacement and drift. The displacement variations between different building heights were consistent at different R factors. This study evaluated how R factors influence
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
This study compares the use of Stark Steel and TMT Steel as reinforcement materials in a two-way reinforced concrete slab. Mechanical testing is conducted to determine the tensile strength, yield strength, and other properties of each material. A two-way slab design adhering to codes and standards is executed with both materials. The performance is analyzed in terms of deflection, stability under loads, and displacement. Cost analyses accounting for material, durability, maintenance, and life cycle costs are also conducted. The findings provide insights into the economic and structural implications of each material for reinforcement selection and recommendations on the most suitable material based on the analysis.
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
This document discusses a study analyzing the effect of camber, position of camber, and angle of attack on the aerodynamic characteristics of airfoils. Sixteen modified asymmetric NACA airfoils were analyzed using computational fluid dynamics (CFD) by varying the camber, camber position, and angle of attack. The results showed the relationship between these parameters and the lift coefficient, drag coefficient, and lift to drag ratio. This provides insight into how changes in airfoil geometry impact aerodynamic performance.
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
This document reviews the progress and challenges of aluminum-based metal matrix composites (MMCs), focusing on their fabrication processes and applications. It discusses how various aluminum MMCs have been developed using reinforcements like borides, carbides, oxides, and nitrides to improve mechanical and wear properties. These composites have gained prominence for their lightweight, high-strength and corrosion resistance properties. The document also examines recent advancements in fabrication techniques for aluminum MMCs and their growing applications in industries such as aerospace and automotive. However, it notes that challenges remain around issues like improper mixing of reinforcements and reducing reinforcement agglomeration.
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
This document discusses research on using graph neural networks (GNNs) for dynamic optimization of public transportation networks in real-time. GNNs represent transit networks as graphs with nodes as stops and edges as connections. The GNN model aims to optimize networks using real-time data on vehicle locations, arrival times, and passenger loads. This helps increase mobility, decrease traffic, and improve efficiency. The system continuously trains and infers to adapt to changing transit conditions, providing decision support tools. While research has focused on performance, more work is needed on security, socio-economic impacts, contextual generalization of models, continuous learning approaches, and effective real-time visualization.
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
This document summarizes a research project that aims to compare the structural performance of conventional slab and grid slab systems in multi-story buildings using ETABS software. The study will analyze both symmetric and asymmetric building models under various loading conditions. Parameters like deflections, moments, shears, and stresses will be examined to evaluate the structural effectiveness of each slab type. The results will provide insights into the comparative behavior of conventional and grid slabs to help engineers and architects select appropriate slab systems based on building layouts and design requirements.
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
This document summarizes and reviews a research paper on the seismic response of reinforced concrete (RC) structures with plan and vertical irregularities, with and without infill walls. It discusses how infill walls can improve or reduce the seismic performance of RC buildings, depending on factors like wall layout, height distribution, connection to the frame, and relative stiffness of walls and frames. The reviewed research paper analyzes the behavior of infill walls, effects of vertical irregularities, and seismic performance of high-rise structures under linear static and dynamic analysis. It studies response characteristics like story drift, deflection and shear. The document also provides literature on similar research investigating the effects of infill walls, soft stories, plan irregularities, and different
This document provides a review of machine learning techniques used in Advanced Driver Assistance Systems (ADAS). It begins with an abstract that summarizes key applications of machine learning in ADAS, including object detection, recognition, and decision-making. The introduction discusses the integration of machine learning in ADAS and how it is transforming vehicle safety. The literature review then examines several research papers on topics like lightweight deep learning models for object detection and lane detection models using image processing. It concludes by discussing challenges and opportunities in the field, such as improving algorithm robustness and adaptability.
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
The document analyzes temperature and precipitation trends in Asosa District, Benishangul Gumuz Region, Ethiopia from 1993 to 2022 based on data from the local meteorological station. The results show:
1) The average maximum and minimum annual temperatures have generally decreased over time, with maximum temperatures decreasing by a factor of -0.0341 and minimum by -0.0152.
2) Mann-Kendall tests found the decreasing temperature trends to be statistically significant for annual maximum temperatures but not for annual minimum temperatures.
3) Annual precipitation in Asosa District showed a statistically significant increasing trend.
The conclusions recommend development planners account for rising summer precipitation and declining temperatures in
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
This document discusses the design and analysis of pre-engineered building (PEB) framed structures using STAAD Pro software. It provides an overview of PEBs, including that they are designed off-site with building trusses and beams produced in a factory. STAAD Pro is identified as a key tool for modeling, analyzing, and designing PEBs to ensure their performance and safety under various load scenarios. The document outlines modeling structural parts in STAAD Pro, evaluating structural reactions, assigning loads, and following international design codes and standards. In summary, STAAD Pro is used to design and analyze PEB framed structures to ensure safety and code compliance.
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
This document provides a review of research on innovative fiber integration methods for reinforcing concrete structures. It discusses studies that have explored using carbon fiber reinforced polymer (CFRP) composites with recycled plastic aggregates to develop more sustainable strengthening techniques. It also examines using ultra-high performance fiber reinforced concrete to improve shear strength in beams. Additional topics covered include the dynamic responses of FRP-strengthened beams under static and impact loads, and the performance of preloaded CFRP-strengthened fiber reinforced concrete beams. The review highlights the potential of fiber composites to enable more sustainable and resilient construction practices.
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
This document summarizes a survey on securing patient healthcare data in cloud-based systems. It discusses using technologies like facial recognition, smart cards, and cloud computing combined with strong encryption to securely store patient data. The survey found that healthcare professionals believe digitizing patient records and storing them in a centralized cloud system would improve access during emergencies and enable more efficient care compared to paper-based systems. However, ensuring privacy and security of patient data is paramount as healthcare incorporates these digital technologies.
Review on studies and research on widening of existing concrete bridgesIRJET Journal
This document summarizes several studies that have been conducted on widening existing concrete bridges. It describes a study from China that examined load distribution factors for a bridge widened with composite steel-concrete girders. It also outlines challenges and solutions for widening a bridge in the UAE, including replacing bearings and stitching the new and existing structures. Additionally, it discusses two bridge widening projects in New Zealand that involved adding precast beams and stitching to connect structures. Finally, safety measures and challenges for strengthening a historic bridge in Switzerland under live traffic are presented.
React-based fullstack edtech web application - IRJET Journal
The document describes the architecture of an educational technology web application built using the MERN stack. It discusses the frontend developed with ReactJS, backend with NodeJS and ExpressJS, and MongoDB database. The frontend provides dynamic user interfaces, while the backend offers APIs for authentication, course management, and other functions. MongoDB enables flexible data storage. The architecture aims to provide a scalable, responsive platform for online learning.
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ... - IRJET Journal
This paper proposes integrating Internet of Things (IoT) and blockchain technologies to help implement objectives of India's National Education Policy (NEP) in the education sector. The paper discusses how blockchain could be used for secure student data management, credential verification, and decentralized learning platforms. IoT devices could create smart classrooms, automate attendance tracking, and enable real-time monitoring. Blockchain would ensure integrity of exam processes and resource allocation, while smart contracts automate agreements. The paper argues this integration has potential to revolutionize education by making it more secure, transparent and efficient, in alignment with NEP goals. However, challenges like infrastructure needs, data privacy, and collaborative efforts are also discussed.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE - IRJET Journal
This document provides a review of research on the performance of coconut fibre reinforced concrete. It summarizes several studies that tested different volume fractions and lengths of coconut fibres in concrete mixtures with varying compressive strengths. The studies found that coconut fibre improved properties like tensile strength, toughness, crack resistance, and spalling resistance compared to plain concrete. Volume fractions of 2-5% and fibre lengths of 20-50mm produced the best results. The document concludes that using a 4-5% volume fraction of coconut fibres 30-40mm in length with M30-M60 grade concrete would provide benefits based on previous research.
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi... - IRJET Journal
The document discusses optimizing business management processes through automation using Microsoft Power Automate and artificial intelligence. It provides an overview of Power Automate's key components and features for automating workflows across various apps and services. The document then presents several scenarios applying automation solutions to common business processes like data entry, monitoring, HR, finance, customer support, and more. It estimates the potential time and cost savings from implementing automation for each scenario. Finally, the conclusion emphasizes the transformative impact of AI and automation tools on business processes and the need for ongoing optimization.
Multistoried and Multi Bay Steel Building Frame by using Seismic Design - IRJET Journal
The document describes the seismic design of a G+5 steel building frame located in Roorkee, India, according to Indian codes IS 1893-2002 and IS 800. The frame was analyzed using the equivalent static load method and the response spectrum method, and its responses in terms of displacements and shear forces were compared. Based on the analysis, the frame was designed as a seismic-resistant steel structure per IS 800:2007. STAAD Pro software was used for the analysis and design.
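The equivalent static load method the document applies reduces, at its core, to the IS 1893 (Part 1):2002 base-shear calculation. A minimal sketch with illustrative inputs (these are assumptions, not the paper's actual values):

```python
def design_base_shear(Z, I, R, Sa_over_g, W):
    """IS 1893 (Part 1):2002 equivalent static method:
    Ah = (Z/2) * (I/R) * (Sa/g);  Vb = Ah * W."""
    Ah = (Z / 2.0) * (I / R) * Sa_over_g
    return Ah * W

# Illustrative inputs, not the paper's actual data: Roorkee lies in
# seismic Zone IV, so Z = 0.24; I = 1.0 for an ordinary building;
# R = 5 for a special steel moment-resisting frame; Sa/g = 2.5 on the
# flat portion of the design response spectrum; W = seismic weight (kN).
Vb = design_base_shear(Z=0.24, I=1.0, R=5.0, Sa_over_g=2.5, W=12000.0)
print(round(Vb, 1))  # design base shear in kN
```

The base shear is then distributed over the storeys in proportion to W_i * h_i^2, which is what STAAD Pro automates in the analysis the document describes.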
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr... - IRJET Journal
This research paper explores using plastic waste as a sustainable and cost-effective construction material. The study focuses on manufacturing pavers and bricks using recycled plastic and partially replacing concrete with plastic alternatives. Initial results found that pavers and bricks made from recycled plastic demonstrate comparable strength and durability to traditional materials while providing environmental and cost benefits. Additionally, preliminary research indicates incorporating plastic waste as a partial concrete replacement significantly reduces construction costs without compromising structural integrity. The outcomes suggest adopting plastic waste in construction can address plastic pollution while optimizing costs, promoting more sustainable building practices.
Design and optimization of ion propulsion drone - bjmsejournal
Electric propulsion has been widely used in many kinds of vehicles in recent years, and aircraft are no exception. UAVs are electrically propelled but tend to produce a significant amount of noise and vibration. Ion propulsion is a potential solution to this problem and has been shown to be feasible in the earth's atmosphere. The study presented in this article covers the design of EHD thrusters and the power supply for ion propulsion drones, along with performance optimization of the high-voltage power supply for endurance in the earth's atmosphere.
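A common first-order model for the EHD thrusters the article designs is T = I*d/mu, relating thrust to corona current I, electrode gap d, and ion mobility mu. The sketch below uses illustrative numbers that are assumptions, not the article's measured values:

```python
def ehd_thrust(current_A, gap_m, mobility_m2_per_Vs=2e-4):
    """One-dimensional EHD thrust model: T = I * d / mu.
    mu ~ 2e-4 m^2/(V*s) is a typical value for positive ions in air."""
    return current_A * gap_m / mobility_m2_per_Vs

# Hypothetical operating point: 0.5 mA corona current across a 5 cm gap.
T = ehd_thrust(current_A=0.5e-3, gap_m=0.05)
print(f"{T * 1000:.1f} mN")  # thrust in millinewtons
```

The model makes the design trade-off visible: for a fixed current, thrust scales with gap distance, but a larger gap demands a higher voltage from the power supply, which is why the article pairs thruster design with high-voltage supply optimization.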
Digital Twins Computer Networking Paper Presentation.pptx - aryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Software Engineering and Project Management - Software Testing + Agile Method... - Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object-Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that guides you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the various tools and technologies Salesforce offers to bring you the full benefits of AI.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL - ijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this, the paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) layers. A recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, was used to train and test the model. The experiments show that the CNN-LSTM method substantially outperforms other deep learning classification algorithms at detecting smart grid intrusions, improving accuracy, precision, recall, and F1 score and achieving a high detection accuracy of 99.50%.
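The reported figures (accuracy, precision, recall, F1) all derive from the confusion matrix of the classifier. A small stdlib sketch of those definitions, using toy predictions rather than the paper's results:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary intrusion labels
    (1 = attack, 0 = benign), computed from the confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy labels for illustration only, not the paper's data:
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

Reporting all four matters for IDS work: on heavily imbalanced attack traffic, accuracy alone can look excellent while recall (the fraction of real attacks caught) is poor.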
Use PyCharm for remote debugging of WSL on a Windows machine - shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure the connection is ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Null Bangalore | Pentester's Approach to AWS IAM - Divyanshu
# Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester: a brief discussion of IAM, followed by typical misconfigurations and their potential exploits, to reinforce IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenarios Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service access delegation; a PassRole misconfiguration can then be exploited to grant unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
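The first two scenarios above can be sketched as IAM policy documents. The bucket name is hypothetical, and these are illustrative documents in the standard IAM policy JSON format, not the exact policies from the lab:

```python
import json

BUCKET = "demo-pentest-bucket"  # hypothetical bucket name

# Scenario 1 - least privilege: only object read/write on one bucket,
# no wildcard actions, no iam:PassRole.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

# Scenario 2 - PassRole misconfiguration: a wildcard iam:PassRole lets
# the user hand ANY role, including an admin role, to an EC2 instance
# they control, which is the privilege-escalation path demonstrated above.
overly_permissive_passrole = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iam:PassRole", "ec2:RunInstances"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

When auditing, the `Resource: "*"` on `iam:PassRole` is the red flag to grep for; the fix is to scope it to the specific role ARN the service legitimately needs.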
Advanced control scheme of doubly fed induction generator for wind turbine us... - IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations. The simulations show very satisfying results and demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
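The simplest of the three controllers compared, the PI controller, can be sketched as a discrete loop. This is a generic illustration of PI reference tracking on a first-order plant, not the paper's DFIG model, and all gains and time constants are assumed values:

```python
def simulate_pi(setpoint=1.0, kp=2.0, ki=5.0, dt=1e-3, tau=0.05, steps=5000):
    """Discrete PI controller driving a first-order plant
    dy/dt = (u - y) / tau toward a power reference (per-unit)."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt            # accumulate integral of error
        u = kp * error + ki * integral    # PI control law
        y += (u - y) / tau * dt           # forward-Euler plant update
    return y

final = simulate_pi()  # converges close to the 1.0 p.u. reference
```

The integral term is what drives the steady-state tracking error to zero; the sliding mode variants in the paper trade this simplicity for robustness against the parameter alterations and perturbations the comparison examines.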
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024 - Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations, recounting their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found along the way, the talk demonstrates the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Embedded machine learning-based road conditions and driving behavior monitoring - IJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection involved three key road events (normal street and normal drive, speed bumps, and circular yellow speed bumps) and three aggressive driving actions (sudden start, sudden stop, and sudden entry). The gathered data is processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, requiring 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
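Embedded pipelines of this kind typically segment the raw accelerometer stream into short windows and compute compact per-window statistics before any neural network runs. A minimal stdlib sketch of that preprocessing step, under the assumption (not stated in the abstract) that simple sliding-window features are used; the window sizes and the synthetic trace are illustrative:

```python
import math

def window_features(samples, window=50, step=25):
    """Segment a 1-D accelerometer stream into overlapping windows and
    compute mean and standard deviation per window - the kind of compact
    features that fit a microcontroller's RAM budget."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        w = samples[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        feats.append((mean, math.sqrt(var)))
    return feats

# Synthetic trace: smooth driving followed by a sharp oscillation
# such as a speed bump would produce.
trace = [0.0] * 100 + [2.0, -2.0] * 25
feats = window_features(trace)
```

Per-window statistics like these keep peak RAM in the low-kilobyte range cited for the Arduino Nano 33 BLE Sense, since only a handful of floats per window survive the raw sample buffer.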