Advanced man-machine interfaces may be built using gestural interfaces based on vision
technology, but size of pictures rather than specialized acquisition equipment. Segmentation of
the hand, tracking, and identification of the hand position are the three key issues (feature
extraction and classification). Since the first computer was invented in the modern age,
technology has impacted every aspect of our social and personal life, revolutionizing how we
live. A few examples are browsing the web, writing a message, playing a video game, or saving
and retrieving personal or business data.
The technique of turning raw data into numerical features that can be handled while keeping
the information in the original data set is known as feature extraction. Compared to using
machine learning on the raw data directly, it produces superior outcomes. It is possible to
extract features manually or automatically. Identification and description of the characteristics
that are pertinent to a particular situation are necessary for manual feature extraction, as is the
implementation of a method to extract those features. Having a solid grasp of the context or
domain may often aid in making judgements about which characteristics could be helpful.
Engineers and scientists have created feature extraction techniques for pictures, signals, and
text through many years of study. The mean of a signal's window is an illustration of a
straightforward characteristic. Automated feature extraction eliminates the need for human
involvement by automatically extracting features from signals or pictures using specialized
algorithms or deep networks. When you need to go from collecting raw data to creating
machine learning algorithms rapidly, this method may be quite helpful. An example of
automated feature extraction is wavelet scattering.
The initial layers of deep networks have essentially taken the position of feature extraction with
the rise of deep learning, albeit primarily for picture data. Prior to developing powerful
prediction models for signal and time-series applications, feature extraction continues to be the
first hurdle that demands a high level of knowledge. For this reason, among others, humancomputer interaction (HCI) has been regarded as a vibrant area of study in recent years. The
most popular input devices haven't changed much since they were first introduced, perhaps
because the current devices are still useful and efficient enough. However, it is also generally
known that with the steady release of new software and hardware in recent years, computers
have become more pervasive in daily life. The bulk of human-computer interaction (HCI) today
is based on mechanical devices such a keyboard, mouse, joystick, or game-pad, however due
to their capacity to perform a variety of tasks, a class of computational vision-based approaches
is gaining popularity in natural recognition of human motions.
A Vision based Driver Support System for Road Sign Detectionidescitation
Â
In this paper, we proposed a replacement hybrid multipath routing protocol for
MANET known as Hybrid Multipath Progressive Routing Protocol for MANET (HMPRP),
during this work we improve the performance of accepted MANET routing protocols,
namely, the Ad-hoc On-demand Distance Vector routing protocol and use of their most
popular properties to formulate a replacement Hybrid routing protocol using the received
signal strength. The proposed routing protocol optimizes the information measure usage of
MANETs by reducing the routing overload and overhead. This proposed routing protocol
additionally extends the battery lifetime of the mobile devices by reducing the specified
variety of operations for (i) Route determination (ii) for packet forwarding. Simulation
results are used to draw conclusions regarding the proposed routing algorithm and
compared it with the AODV, OLSR, and ZRP protocol. Experiments carried out based on
this proposed algorithm, shows that better performance are achieved with regard to AODV,
OLSR, and ZRP routing algorithm in terms of packet delivery ratio, throughput, energy
consumed and end-to-end packet delay.
This document discusses lane and object detection using computer vision techniques. It begins with an abstract discussing increasing safety in advanced driver assistance systems through tasks like lane and obstacle detection. It then reviews related literature on lane and object detection algorithms. The document outlines the software requirements for a lane and object detection system, including functional requirements like providing an easy user interface and non-functional requirements like performance, safety, and quality attributes. It presents the system design with architectures, UML diagrams, and discusses using a waterfall model for development. Finally, it provides a project plan with estimates for cost and effort using the COCOMO model.
IRJET- Traffic Sign Recognition for Autonomous CarsIRJET Journal
Â
The document discusses a traffic sign recognition system for autonomous vehicles using a convolutional neural network. It begins with an introduction to traffic sign recognition and its importance for autonomous driving systems. It then discusses related works on traffic sign detection and classification, as well as machine learning approaches. The document outlines the proposed methodology, which involves using a CNN model to detect and classify traffic signs from image data. It then discusses the CNN architecture and training approach in more detail. Finally, it concludes that the system can help autonomous vehicles safely navigate by recognizing traffic signs and calculating the distance to the signs in real-time.
IRJET- Image Processing based Intelligent Traffic Control and Monitoring ...IRJET Journal
Â
This document summarizes a research paper on an intelligent traffic control and monitoring system using image processing and the Internet of Things. The system aims to reduce traffic congestion by controlling traffic lights based on real-time traffic density detected through image processing of vehicle images. It consists of hardware and software modules. The hardware uses cameras to capture vehicle images and the software uses image processing techniques like object detection and classification to detect and count vehicles in real-time and estimate traffic density. This information is then used to dynamically adjust traffic light timings with the goal of optimizing traffic flow and reducing waiting times at signals. The system is meant to provide a more efficient solution to traffic management than conventional fixed-time traffic light control systems.
This document provides a comprehensive literature review and analysis of various traffic prediction techniques. It begins with an abstract that outlines the need for accurate traffic forecasting to address issues caused by increased road traffic. The document then reviews several existing traffic prediction methods and technologies, including fuzzy logic-based systems, intelligent traffic signal controllers, dynamic traffic information systems, and frameworks that utilize IoT, cloud computing, and machine learning. It identifies gaps in current literature, such as a lack of sensor data and advanced application frameworks for prediction. Finally, the document presents several comparison tables analyzing traffic prediction techniques based on the datasets, parameters, merits and demerits of each approach. The overall purpose is to conduct a systematic analysis of past work and identify future research
TRAFFIC SIGN BOARD RECOGNITION AND VOICE ALERT SYSTEM USING CNNIRJET Journal
Â
This document describes a proposed traffic sign recognition and voice alert system using convolutional neural networks. The system is designed to identify traffic signs using a CNN model trained on image datasets, then provide an audible alert to notify drivers. It aims to improve safety by ensuring drivers are aware of traffic signs. The system design involves collecting image data, training the CNN model to classify sign types, detecting signs in new images using the model, and triggering voice alerts for recognized signs. Testing showed the system could accurately perform registration, login, image uploading, model training, and sign detection/alerts as intended. The system is intended to help drivers follow traffic rules and make informed decisions to increase safety.
This document describes a proposed 3M secure transportation system that provides security for people (drivers), vehicles, and cargo. The system uses several technologies including face recognition, fingerprint verification, vehicle tracking via GPS and GSM, and QR scanning of cargo. An Android application would be developed to integrate these security features and monitor them. The system is intended for mid-sized transportation businesses to help prevent theft of vehicles and cargo.
A Vision based Driver Support System for Road Sign Detectionidescitation
Â
In this paper, we proposed a replacement hybrid multipath routing protocol for
MANET known as Hybrid Multipath Progressive Routing Protocol for MANET (HMPRP),
during this work we improve the performance of accepted MANET routing protocols,
namely, the Ad-hoc On-demand Distance Vector routing protocol and use of their most
popular properties to formulate a replacement Hybrid routing protocol using the received
signal strength. The proposed routing protocol optimizes the information measure usage of
MANETs by reducing the routing overload and overhead. This proposed routing protocol
additionally extends the battery lifetime of the mobile devices by reducing the specified
variety of operations for (i) Route determination (ii) for packet forwarding. Simulation
results are used to draw conclusions regarding the proposed routing algorithm and
compared it with the AODV, OLSR, and ZRP protocol. Experiments carried out based on
this proposed algorithm, shows that better performance are achieved with regard to AODV,
OLSR, and ZRP routing algorithm in terms of packet delivery ratio, throughput, energy
consumed and end-to-end packet delay.
This document discusses lane and object detection using computer vision techniques. It begins with an abstract discussing increasing safety in advanced driver assistance systems through tasks like lane and obstacle detection. It then reviews related literature on lane and object detection algorithms. The document outlines the software requirements for a lane and object detection system, including functional requirements like providing an easy user interface and non-functional requirements like performance, safety, and quality attributes. It presents the system design with architectures, UML diagrams, and discusses using a waterfall model for development. Finally, it provides a project plan with estimates for cost and effort using the COCOMO model.
IRJET- Traffic Sign Recognition for Autonomous CarsIRJET Journal
Â
The document discusses a traffic sign recognition system for autonomous vehicles using a convolutional neural network. It begins with an introduction to traffic sign recognition and its importance for autonomous driving systems. It then discusses related works on traffic sign detection and classification, as well as machine learning approaches. The document outlines the proposed methodology, which involves using a CNN model to detect and classify traffic signs from image data. It then discusses the CNN architecture and training approach in more detail. Finally, it concludes that the system can help autonomous vehicles safely navigate by recognizing traffic signs and calculating the distance to the signs in real-time.
IRJET- Image Processing based Intelligent Traffic Control and Monitoring ...IRJET Journal
Â
This document summarizes a research paper on an intelligent traffic control and monitoring system using image processing and the Internet of Things. The system aims to reduce traffic congestion by controlling traffic lights based on real-time traffic density detected through image processing of vehicle images. It consists of hardware and software modules. The hardware uses cameras to capture vehicle images and the software uses image processing techniques like object detection and classification to detect and count vehicles in real-time and estimate traffic density. This information is then used to dynamically adjust traffic light timings with the goal of optimizing traffic flow and reducing waiting times at signals. The system is meant to provide a more efficient solution to traffic management than conventional fixed-time traffic light control systems.
This document provides a comprehensive literature review and analysis of various traffic prediction techniques. It begins with an abstract that outlines the need for accurate traffic forecasting to address issues caused by increased road traffic. The document then reviews several existing traffic prediction methods and technologies, including fuzzy logic-based systems, intelligent traffic signal controllers, dynamic traffic information systems, and frameworks that utilize IoT, cloud computing, and machine learning. It identifies gaps in current literature, such as a lack of sensor data and advanced application frameworks for prediction. Finally, the document presents several comparison tables analyzing traffic prediction techniques based on the datasets, parameters, merits and demerits of each approach. The overall purpose is to conduct a systematic analysis of past work and identify future research
TRAFFIC SIGN BOARD RECOGNITION AND VOICE ALERT SYSTEM USING CNNIRJET Journal
Â
This document describes a proposed traffic sign recognition and voice alert system using convolutional neural networks. The system is designed to identify traffic signs using a CNN model trained on image datasets, then provide an audible alert to notify drivers. It aims to improve safety by ensuring drivers are aware of traffic signs. The system design involves collecting image data, training the CNN model to classify sign types, detecting signs in new images using the model, and triggering voice alerts for recognized signs. Testing showed the system could accurately perform registration, login, image uploading, model training, and sign detection/alerts as intended. The system is intended to help drivers follow traffic rules and make informed decisions to increase safety.
This document describes a proposed 3M secure transportation system that provides security for people (drivers), vehicles, and cargo. The system uses several technologies including face recognition, fingerprint verification, vehicle tracking via GPS and GSM, and QR scanning of cargo. An Android application would be developed to integrate these security features and monitor them. The system is intended for mid-sized transportation businesses to help prevent theft of vehicles and cargo.
IRJET- Traffic Sign Detection, Recognition and Notification System using ...IRJET Journal
Â
This document presents a traffic sign detection, recognition, and notification system using Faster R-CNN. The system takes video input containing traffic signs and converts it to frames. Faster R-CNN with ROI pooling and a classifier is used to detect traffic signs. Color and shape information are then used to refine detections. A CNN classifier recognizes the signs. The system notifies drivers of detected signs via audio messages, helping drivers comply with signs even if ignored visually. The proposed detector detects all sign categories, and recognition accuracy on the German Traffic Sign Detection Benchmark dataset exceeds 90% for 42 sign classes.
Taking into consideration the driversâ state might be a serious challenge for designing new advanced driver
assistance systems. During this paper we present a driver assistance system strongly coupled to the user. Driver
Assistance by Augmented Reality for Intelligent Automotive is an augmented reality interface informed by a several
sensors. Communicating the presence of pedestrians or bicyclists to vehicle drivers may end up in safer interactions
with these vulnerable road users. Advanced knowledge about the presence of these users on the roadway is
particularly important when their presence isn't expected or when these users are out of range of the advanced safety
systems that are becoming a daily feature in vehicles today. For example, having advanced knowledge of a pedestrian
walking along a rural roadway is important to increasing driver awareness through in-vehicle warning messages that
provide an augmented version of the roadway ahead. Voice recognition system through an android platform adds
some good flavour during this project. The strategy of voice recognition through this platform is achieved by
converting the input voice signal into text of string and subsequently it's transmitted to embedded system which
contains an arduino atmega328 microcontroller through Bluetooth as a technique of serial communication between an
android application and a control system. The received text string on an arduino is also displayed on the AR Glass. As
connected vehicles start to enter the market, it's conceivable that when the vehicle sensors detect a pedestrian on a
rural roadway, the pedestrian presence is also communicated to vehicles upstream of the pedestrian location that
haven't reached the destination. This paper presents a survey of studies related to perception and cognitive attention
of drivers when this information is presented on Augmented Reality
IRJET- Smart Parking System using Facial Recognition, Optical Character Recog...IRJET Journal
Â
The document proposes a smart parking system using facial recognition, optical character recognition (OCR), and the Internet of Things (IoT). The system uses facial recognition to identify drivers and OCR to extract text from vehicle license plates to verify if the driver and vehicle match database records. An IoT device is integrated with parking gates to automatically open them if verification is successful. The system aims to optimize parking space usage, enhance security, reduce congestion and emissions, and make the parking process more convenient. It analyzes previous research on smart parking and aims to implement an accurate and cost-effective solution.
The number of automobiles on the road will only increase due to the ever increasing population development in urban areas across the world and manufacturers ongoing manufacturing of all types of vehicles. This inevitably causes additional traffic congestion, which is exacerbated during rush hour and especially in big urban areas. Urban planners, local officials, and academics are continuously under pressure from this phenomenon to find methods to make traffic management systems safer and more cost effective. Numerous researches have been carried out to solve this ongoing issue, leading to some major advancements like specific lanes for emergency vehicles in metropolitan areas. Even with these lanes, it is sometimes challenging to meet the appropriate goal timeframes for emergency vehicles to arrive at their locations. Intelligent Transportation System is a fresh approach that aims to solve this problem ITS . By fusing present infrastructure with existing technologies, this approach can aid in problem solving. In this paper, three alternative traffic management techniques RFID, Internet of Things, and Traffic Light Systems TLS , which can be static or dynamic IoT . In the latter approach, traffic data are swiftly gathered and delivered to Big Data for analysis, while mobile applications, also known as User Interface UI , are used to evaluate traffic density in various places and offer alternate traffic relief suggestions. Bharti Kumari | Vinod Mahor "Review: Smart Traffic Management System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-5 , August 2022, URL: https://www.ijtsrd.com/papers/ijtsrd50595.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/50595/review-smart-traffic-management-system/bharti-kumari
This document discusses research on smart cars and intelligent transportation systems. It describes how current vehicles use sensors and computer systems to detect surroundings and operate safely. The researchers expect that within several years, advanced automation and smart driving assistance will become standard in vehicles. The document also reviews literature on implementing safety and technology systems in smart vehicles and infrastructure to communicate and make decisions to increase safety and efficiency.
Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba...IRJET Journal
Â
This document proposes a smart traffic congestion control system that leverages machine learning technologies like CNNs, YOLOv4, LSTM, and PPO to optimize traffic flow in urban environments. The system aims to dynamically adjust signal timings in real-time using data analysis and predictive modeling from cameras and sensors. Convolutional neural networks are used for congestion detection from camera images, while YOLOv4 performs object detection to ensure safety. LSTM networks capture temporal traffic data for predictions, and PPO optimizes signal timings based on current conditions. The system has potential to revolutionize traffic management by intelligently reducing congestion through data-driven decision making.
1. The document discusses a method for detecting distracted drivers using computer vision and machine learning techniques. It proposes using a convolutional neural network (CNN), specifically modifying the VGG-16 architecture, to classify images and identify different types of driver distractions or safe driving behaviors.
2. The CNN would take images of the driver as input to extract features, which would then be classified by the network to determine if the driver is distracted or driving safely. The researchers evaluated their proposed system using the StateFarm distracted driver detection dataset.
3. Previous work on detecting distracted driving is discussed, including using features like hands, face, and mouth to identify cell phone use, as well as developing datasets and classifiers to detect other dist
Research on object detection and recognition using machine learning algorithm...YousefElbayomi
Â
This document discusses research on using machine learning and deep learning for object detection. It examines applications in autonomous vehicles, image detection for agriculture, and credit card fraud detection. For autonomous vehicles, deep learning is discussed for object identification and perception, though speed and real-world performance need improvement. Image detection for agriculture uses feature extraction and machine learning for automatic fruit identification. Credit card fraud detection uses ensemble methods like LightGBM, XGBoost and CatBoost on preprocessed transaction data to identify fraudulent transactions. The document evaluates different approaches and their challenges for these applications of object detection.
A Review: Machine vision and its ApplicationsIOSR Journals
Â
Abstract:The machine vision has been used in the industrial machine designing by using the intelligent character recognition. Due to its increased use, it makes the significant contribution to ensure the competitiveness in modern development. The state of art in machine vision inspection and a critical overview of applications in various industries are presented in this paper. In its restricted sense it is also known as the computer vision or the robot vision. This paper gives the overview of Machine Vision Technology in the first section, followed by various industrial application and thefuture trends in Machine Vision. Keywords:CCD- charged coupled devices, Fruit harvesting system, HIS- Hue Saturation Intensity, Image analysis, Image enhancement, Image feature extraction, Image feature classification processing, Intelligent Vehicle tracking , Isodiscriminationn Contour, Machine Vision
Intelligent Transportation System Based On Machine Learning For Vehicle Perce...IRJET Journal
Â
The document discusses using machine learning for intelligent transportation systems based on vehicle perception. It provides an overview of how machine learning has been increasingly used for tasks like vehicle detection, counting, classification, identification and speed detection. Specifically, it describes approaches like background subtraction, optical flow and deep learning models that have been applied to problems in traffic monitoring, management and road safety. The document also reviews common computer vision algorithms in OpenCV that can be utilized for these tasks and considers factors like speed, accuracy and complexity in choosing the appropriate methods.
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
The document discusses autonomous driving scene parsing through semantic segmentation. It begins with an introduction to autonomous vehicles and how they use sensors like cameras, radar and LiDAR to detect objects. It then reviews previous work on datasets for autonomous driving, semantic segmentation techniques like U-Net, and the need to study unconstrained environments. The paper proposes using the Indian Driving Dataset and a U-Net model with adjustments to perform semantic segmentation on Indian road scenes.
A Model for Car Plate Recognition & Speed Tracking (CPR-STS) using Machine Le...CSCJournals
Â
The transportation challenges experienced in major cities as a result of influx of people in search of greener pastures is increasing on a daily basis. This results in an increase in the number of cars plying and competing for driving space on narrow roads. Many drivers violate traffic laws as a result of this and how to prosecute them without chasing them remains an issue to be addressed. Therefore, this research presents a model that can be used to solve this challenge using machine learning algorithms. The model consists of recognition modules such as image acquisition, Gaussian blur, localization of car plate, character segmentation and optical character recognition of car plate. K-NN Algorithm was used for training licensed plate font type spanning A-Z and 0-9 while the speed tracking module used a camera which is automatically self-initiated to track the speed of any moving object within its range of focus. The performance of the model was evaluated using metrics such as recognition accuracy, positive prediction value, negative prediction value, specificity and sensitivity. A tracking accuracy of 82% was achieved.
A Survey Paper On Non Intrusive Texting While Driving Detection Using Smart P...Kim Daniels
Â
This document summarizes research on detecting texting while driving using smartphones without user intervention. It proposes using sensors like gyroscopes, accelerometers and GPS to collect touch patterns, device orientation, and vehicle speed. Three patterns are identified to distinguish drivers from passengers: 1) editing messages after speed decreases, 2) stopping edits during turns, and 3) holding the phone upright while editing. The system would identify texting while driving patterns using sensor data without reading message content, preserving privacy. Existing approaches require user input or additional devices to detect location, while this proposed method requires only the user's smartphone.
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...IRJET Journal
Â
This document discusses implementing various machine learning algorithms for traffic sign detection and recognition. It compares the accuracies of KNN, multinomial logistic regression, CNN, and random forest algorithms on a German traffic sign dataset. For real-time traffic sign detection, it uses the YOLO v4 model. The document reviews several papers on traffic sign recognition using techniques like SVM, CNN, Capsule Networks and analyzes their reported accuracies. It then describes the proposed system for traffic sign recognition using two datasets and data preprocessing steps before applying the algorithms and evaluating their performance.
Automated Identification of Road Identifications using CNN and KerasIRJET Journal
Â
The document proposes a model to automatically detect traffic signs using convolutional neural networks (CNN) and the Keras library, even if the signs are unclear or damaged. It aims to help autonomous vehicles properly identify different types of traffic signs. The methodology involves collecting a dataset of traffic sign images, training a CNN model using Keras, testing the model on new images, and using the trained model to recognize signs from user-provided inputs in real-time. Evaluation metrics like accuracy and loss are plotted to analyze the model's performance. The system is meant to achieve over 95% accuracy in identifying various traffic sign types to assist self-driving cars in safely following traffic rules.
Indian Sign Language Recognition using Vision Transformer based Convolutional...IRJET Journal
Â
This document proposes a vision transformer-based convolutional neural network approach for Indian sign language recognition using hand gestures. It aims to improve on traditional machine learning and CNN techniques. The proposed method achieves 99.88% accuracy on a test image database, outperforming state-of-the-art methods. An ablation study also supports that convolutional encoding increases accuracy for hand gesture recognition. The document discusses the challenges of existing data glove and vision-based techniques for hand gesture recognition and human-computer interaction. It aims to develop a more natural and accessible method using computer vision and deep learning.
Traffic Sign Board Detection and Recognition using Convolutional Neural Netwo...IRJET Journal
Â
This document discusses a proposed traffic sign detection and recognition system using convolutional neural networks with voice alerts. The system is trained on the German Traffic Sign Recognition Benchmark dataset containing over 40,000 images across 43 categories. It uses a CNN to detect and classify traffic signs with 97.9% accuracy. When a sign is detected, a voice alert is played through a speaker to notify the driver. The goal is to improve road safety by ensuring drivers are aware of traffic signs and regulations.
APPLICATION OF VARIOUS DEEP LEARNING MODELS FOR AUTOMATIC TRAFFIC VIOLATION D...ijitcs
Â
A rapid growth in the population and economic growth has resulted in an increasing number of vehicles on
road every year. Traffic congestion is a big problem in every metropolitan city. To reach their destination
faster and to avoid traffic, some people are violating traffic rules and regulations. Violation of traffic rules
puts everyone in danger. Maintaining traffic rules manually has become difficult over the time due to the
rapid increase in the population. This alarming situation has be taken care of at the earliest. To overcome
this, we need a real-time violation detection system to help maintain the traffic rules. The approach is to
detect traffic violations in real-time using edge computing, which reduces the time to detect. Different
machine learning models and algorithms were applied to detect traffic violations like traveling without a
helmet, line crossing, parking violation detection, violating the one-way rule etc. The model implemented
gave an accuracy of around 85%, due to memory constraints of the edge device in this case NVIDIA Jetson
Nano, as the fps is quite low.
Vehicle detection by using rear parts and tracking systemeSAT Journals
Â
Abstract Vision of Indian government; of making 100 smart cities, attracts our attention to intelligent transport system. Traffic flow analysis is a part of intelligent transport system. It mainly contains three parts: vehicle detection, classification and vehicle tracking par t. Recently, there are different detection and tracking methods like computer vision based, magnetic frequency wave based etc. With the rapid development of computer vision techniques, visual detection has become increasingly popular in the transportation field. In urban traffic video monitoring systems, traffic congestion is a common scene that causes vehicle occlusion and is a challenge for current vehicle detection methods.In practical traffic scenarios, occlusion between vehicles often occurs; therefore, it is unreasonable to treat the vehicle as a whole. To overcome this problem we can use part based detection model. In our system the vehicle is treated as an object composed of multiple salient parts, including the license plate and rear lamps. These parts are localized using their distinctive color, texture, and region feature. Furthermore, the detected parts are treated as graph nodes to construct a probabilistic graph using a Markov random field model. After that, the marginal posterior of each part is inferred using loopy belief propagation to get final vehicle detection. Finally, the vehiclesâ trajectories are estimated using a Kalman filter and a tracking-based detection technique is realized. This method we can use in daytime as well as night time and in any bad weather condition. Keywords vehicle detection, kalman filter, Markov model, tracking, rear lamps
IRJET- Traffic Sign Detection, Recognition and Notification System using ...IRJET Journal
Â
This document presents a traffic sign detection, recognition, and notification system using Faster R-CNN. The system takes video input containing traffic signs and converts it to frames. Faster R-CNN with ROI pooling and a classifier is used to detect traffic signs. Color and shape information are then used to refine detections. A CNN classifier recognizes the signs. The system notifies drivers of detected signs via audio messages, helping drivers comply with signs even if ignored visually. The proposed detector detects all sign categories, and recognition accuracy on the German Traffic Sign Detection Benchmark dataset exceeds 90% for 42 sign classes.
Taking into consideration the driversâ state might be a serious challenge for designing new advanced driver
assistance systems. During this paper we present a driver assistance system strongly coupled to the user. Driver
Assistance by Augmented Reality for Intelligent Automotive is an augmented reality interface informed by a several
sensors. Communicating the presence of pedestrians or bicyclists to vehicle drivers may end up in safer interactions
with these vulnerable road users. Advanced knowledge about the presence of these users on the roadway is
particularly important when their presence isn't expected or when these users are out of range of the advanced safety
systems that are becoming a daily feature in vehicles today. For example, having advanced knowledge of a pedestrian
walking along a rural roadway is important to increasing driver awareness through in-vehicle warning messages that
provide an augmented version of the roadway ahead. Voice recognition system through an android platform adds
some good flavour during this project. The strategy of voice recognition through this platform is achieved by
converting the input voice signal into text of string and subsequently it's transmitted to embedded system which
contains an arduino atmega328 microcontroller through Bluetooth as a technique of serial communication between an
android application and a control system. The received text string on an arduino is also displayed on the AR Glass. As
connected vehicles start to enter the market, it's conceivable that when the vehicle sensors detect a pedestrian on a
rural roadway, the pedestrian presence is also communicated to vehicles upstream of the pedestrian location that
haven't reached the destination. This paper presents a survey of studies related to perception and cognitive attention
of drivers when this information is presented on Augmented Reality
IRJET- Smart Parking System using Facial Recognition, Optical Character Recog...IRJET Journal
Â
The document proposes a smart parking system using facial recognition, optical character recognition (OCR), and the Internet of Things (IoT). The system uses facial recognition to identify drivers and OCR to extract text from vehicle license plates to verify if the driver and vehicle match database records. An IoT device is integrated with parking gates to automatically open them if verification is successful. The system aims to optimize parking space usage, enhance security, reduce congestion and emissions, and make the parking process more convenient. It analyzes previous research on smart parking and aims to implement an accurate and cost-effective solution.
The number of automobiles on the road will only increase due to the ever increasing population development in urban areas across the world and manufacturers ongoing manufacturing of all types of vehicles. This inevitably causes additional traffic congestion, which is exacerbated during rush hour and especially in big urban areas. Urban planners, local officials, and academics are continuously under pressure from this phenomenon to find methods to make traffic management systems safer and more cost effective. Numerous researches have been carried out to solve this ongoing issue, leading to some major advancements like specific lanes for emergency vehicles in metropolitan areas. Even with these lanes, it is sometimes challenging to meet the appropriate goal timeframes for emergency vehicles to arrive at their locations. Intelligent Transportation System is a fresh approach that aims to solve this problem ITS . By fusing present infrastructure with existing technologies, this approach can aid in problem solving. In this paper, three alternative traffic management techniques RFID, Internet of Things, and Traffic Light Systems TLS , which can be static or dynamic IoT . In the latter approach, traffic data are swiftly gathered and delivered to Big Data for analysis, while mobile applications, also known as User Interface UI , are used to evaluate traffic density in various places and offer alternate traffic relief suggestions. Bharti Kumari | Vinod Mahor "Review: Smart Traffic Management System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-5 , August 2022, URL: https://www.ijtsrd.com/papers/ijtsrd50595.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/50595/review-smart-traffic-management-system/bharti-kumari
This document discusses research on smart cars and intelligent transportation systems. It describes how current vehicles use sensors and computer systems to detect surroundings and operate safely. The researchers expect that within several years, advanced automation and smart driving assistance will become standard in vehicles. The document also reviews literature on implementing safety and technology systems in smart vehicles and infrastructure to communicate and make decisions to increase safety and efficiency.
Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba...IRJET Journal
Â
This document proposes a smart traffic congestion control system that leverages machine learning technologies like CNNs, YOLOv4, LSTM, and PPO to optimize traffic flow in urban environments. The system aims to dynamically adjust signal timings in real-time using data analysis and predictive modeling from cameras and sensors. Convolutional neural networks are used for congestion detection from camera images, while YOLOv4 performs object detection to ensure safety. LSTM networks capture temporal traffic data for predictions, and PPO optimizes signal timings based on current conditions. The system has potential to revolutionize traffic management by intelligently reducing congestion through data-driven decision making.
1. The document discusses a method for detecting distracted drivers using computer vision and machine learning techniques. It proposes using a convolutional neural network (CNN), specifically modifying the VGG-16 architecture, to classify images and identify different types of driver distractions or safe driving behaviors.
2. The CNN would take images of the driver as input to extract features, which would then be classified by the network to determine if the driver is distracted or driving safely. The researchers evaluated their proposed system using the StateFarm distracted driver detection dataset.
3. Previous work on detecting distracted driving is discussed, including using features like hands, face, and mouth to identify cell phone use, as well as developing datasets and classifiers to detect other dist
Research on object detection and recognition using machine learning algorithm...YousefElbayomi
Â
This document discusses research on using machine learning and deep learning for object detection. It examines applications in autonomous vehicles, image detection for agriculture, and credit card fraud detection. For autonomous vehicles, deep learning is discussed for object identification and perception, though speed and real-world performance need improvement. Image detection for agriculture uses feature extraction and machine learning for automatic fruit identification. Credit card fraud detection uses ensemble methods like LightGBM, XGBoost and CatBoost on preprocessed transaction data to identify fraudulent transactions. The document evaluates different approaches and their challenges for these applications of object detection.
A Review: Machine vision and its ApplicationsIOSR Journals
Â
Abstract:The machine vision has been used in the industrial machine designing by using the intelligent character recognition. Due to its increased use, it makes the significant contribution to ensure the competitiveness in modern development. The state of art in machine vision inspection and a critical overview of applications in various industries are presented in this paper. In its restricted sense it is also known as the computer vision or the robot vision. This paper gives the overview of Machine Vision Technology in the first section, followed by various industrial application and thefuture trends in Machine Vision. Keywords:CCD- charged coupled devices, Fruit harvesting system, HIS- Hue Saturation Intensity, Image analysis, Image enhancement, Image feature extraction, Image feature classification processing, Intelligent Vehicle tracking , Isodiscriminationn Contour, Machine Vision
Intelligent Transportation System Based On Machine Learning For Vehicle Perce...IRJET Journal
Â
The document discusses using machine learning for intelligent transportation systems based on vehicle perception. It provides an overview of how machine learning has been increasingly used for tasks like vehicle detection, counting, classification, identification and speed detection. Specifically, it describes approaches like background subtraction, optical flow and deep learning models that have been applied to problems in traffic monitoring, management and road safety. The document also reviews common computer vision algorithms in OpenCV that can be utilized for these tasks and considers factors like speed, accuracy and complexity in choosing the appropriate methods.
This document describes a proposed sign language interpreter system that uses machine learning and computer vision techniques. It aims to enable deaf and mute users to communicate through computers and the internet by recognizing static hand gestures from camera input and translating them to text. The proposed system extracts features from captured images of signs and uses a support vector machine model to classify the gestures by comparing to a dataset of labeled images. If implemented, this system could help overcome communication barriers for deaf users in an increasingly digital world.
The document discusses autonomous driving scene parsing through semantic segmentation. It begins with an introduction to autonomous vehicles and how they use sensors like cameras, radar and LiDAR to detect objects. It then reviews previous work on datasets for autonomous driving, semantic segmentation techniques like U-Net, and the need to study unconstrained environments. The paper proposes using the Indian Driving Dataset and a U-Net model with adjustments to perform semantic segmentation on Indian road scenes.
A Model for Car Plate Recognition & Speed Tracking (CPR-STS) using Machine Le...CSCJournals
Â
The transportation challenges experienced in major cities as a result of influx of people in search of greener pastures is increasing on a daily basis. This results in an increase in the number of cars plying and competing for driving space on narrow roads. Many drivers violate traffic laws as a result of this and how to prosecute them without chasing them remains an issue to be addressed. Therefore, this research presents a model that can be used to solve this challenge using machine learning algorithms. The model consists of recognition modules such as image acquisition, Gaussian blur, localization of car plate, character segmentation and optical character recognition of car plate. K-NN Algorithm was used for training licensed plate font type spanning A-Z and 0-9 while the speed tracking module used a camera which is automatically self-initiated to track the speed of any moving object within its range of focus. The performance of the model was evaluated using metrics such as recognition accuracy, positive prediction value, negative prediction value, specificity and sensitivity. A tracking accuracy of 82% was achieved.
A Survey Paper On Non Intrusive Texting While Driving Detection Using Smart P...Kim Daniels
Â
This document summarizes research on detecting texting while driving using smartphones without user intervention. It proposes using sensors like gyroscopes, accelerometers and GPS to collect touch patterns, device orientation, and vehicle speed. Three patterns are identified to distinguish drivers from passengers: 1) editing messages after speed decreases, 2) stopping edits during turns, and 3) holding the phone upright while editing. The system would identify texting while driving patterns using sensor data without reading message content, preserving privacy. Existing approaches require user input or additional devices to detect location, while this proposed method requires only the user's smartphone.
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...IRJET Journal
Â
This document discusses implementing various machine learning algorithms for traffic sign detection and recognition. It compares the accuracies of KNN, multinomial logistic regression, CNN, and random forest algorithms on a German traffic sign dataset. For real-time traffic sign detection, it uses the YOLO v4 model. The document reviews several papers on traffic sign recognition using techniques like SVM, CNN, Capsule Networks and analyzes their reported accuracies. It then describes the proposed system for traffic sign recognition using two datasets and data preprocessing steps before applying the algorithms and evaluating their performance.
Automated Identification of Road Identifications using CNN and KerasIRJET Journal
Â
The document proposes a model to automatically detect traffic signs using convolutional neural networks (CNN) and the Keras library, even if the signs are unclear or damaged. It aims to help autonomous vehicles properly identify different types of traffic signs. The methodology involves collecting a dataset of traffic sign images, training a CNN model using Keras, testing the model on new images, and using the trained model to recognize signs from user-provided inputs in real-time. Evaluation metrics like accuracy and loss are plotted to analyze the model's performance. The system is meant to achieve over 95% accuracy in identifying various traffic sign types to assist self-driving cars in safely following traffic rules.
Indian Sign Language Recognition using Vision Transformer based Convolutional...IRJET Journal
Â
This document proposes a vision transformer-based convolutional neural network approach for Indian sign language recognition using hand gestures. It aims to improve on traditional machine learning and CNN techniques. The proposed method achieves 99.88% accuracy on a test image database, outperforming state-of-the-art methods. An ablation study also supports that convolutional encoding increases accuracy for hand gesture recognition. The document discusses the challenges of existing data glove and vision-based techniques for hand gesture recognition and human-computer interaction. It aims to develop a more natural and accessible method using computer vision and deep learning.
Traffic Sign Board Detection and Recognition using Convolutional Neural Netwo...IRJET Journal
Â
This document discusses a proposed traffic sign detection and recognition system using convolutional neural networks with voice alerts. The system is trained on the German Traffic Sign Recognition Benchmark dataset containing over 40,000 images across 43 categories. It uses a CNN to detect and classify traffic signs with 97.9% accuracy. When a sign is detected, a voice alert is played through a speaker to notify the driver. The goal is to improve road safety by ensuring drivers are aware of traffic signs and regulations.
APPLICATION OF VARIOUS DEEP LEARNING MODELS FOR AUTOMATIC TRAFFIC VIOLATION D...ijitcs
Â
A rapid growth in the population and economic growth has resulted in an increasing number of vehicles on
road every year. Traffic congestion is a big problem in every metropolitan city. To reach their destination
faster and to avoid traffic, some people are violating traffic rules and regulations. Violation of traffic rules
puts everyone in danger. Maintaining traffic rules manually has become difficult over the time due to the
rapid increase in the population. This alarming situation has be taken care of at the earliest. To overcome
this, we need a real-time violation detection system to help maintain the traffic rules. The approach is to
detect traffic violations in real-time using edge computing, which reduces the time to detect. Different
machine learning models and algorithms were applied to detect traffic violations like traveling without a
helmet, line crossing, parking violation detection, violating the one-way rule etc. The model implemented
gave an accuracy of around 85%, due to memory constraints of the edge device in this case NVIDIA Jetson
Nano, as the fps is quite low.
Vehicle detection by using rear parts and tracking systemeSAT Journals
Â
Abstract Vision of Indian government; of making 100 smart cities, attracts our attention to intelligent transport system. Traffic flow analysis is a part of intelligent transport system. It mainly contains three parts: vehicle detection, classification and vehicle tracking par t. Recently, there are different detection and tracking methods like computer vision based, magnetic frequency wave based etc. With the rapid development of computer vision techniques, visual detection has become increasingly popular in the transportation field. In urban traffic video monitoring systems, traffic congestion is a common scene that causes vehicle occlusion and is a challenge for current vehicle detection methods.In practical traffic scenarios, occlusion between vehicles often occurs; therefore, it is unreasonable to treat the vehicle as a whole. To overcome this problem we can use part based detection model. In our system the vehicle is treated as an object composed of multiple salient parts, including the license plate and rear lamps. These parts are localized using their distinctive color, texture, and region feature. Furthermore, the detected parts are treated as graph nodes to construct a probabilistic graph using a Markov random field model. After that, the marginal posterior of each part is inferred using loopy belief propagation to get final vehicle detection. Finally, the vehiclesâ trajectories are estimated using a Kalman filter and a tracking-based detection technique is realized. This method we can use in daytime as well as night time and in any bad weather condition. Keywords vehicle detection, kalman filter, Markov model, tracking, rear lamps
Similar to Feature Extraction and Gesture Recognition Book.pdf (20)
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
Â
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Understanding Inductive Bias in Machine LearningSUTEJAS
Â
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Â
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
Â
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
Â
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
Â
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
Â
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to todayâs integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
Â
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
2. Contents
Title of Chapters Page (s)
Chapter 1 INTRODUCTION TO SIGN BOARD
DETECTION
1
Mr. Tirumala Vasu G.
Chapter 2 PROBLEM RECOGNITION 4
Ms. Samreen Fiza
Chapter 3 CLASSIFICATIONS OF IMAGE CONVERSION 7
Mr. Tirumala Vasu G.
Chapter 4 ALGORITHMIC DEVELOPMENT AND DESIGN 10
Ms. Samreen Fiza
Chapter 5 DIFFERENT FEATURES EXTRACTION 13
Mr. Tirumala Vasu G.
Chapter 6 DIFFERENT ATTRIBUTES OF FEATURES
EXTRACTION
16
Ms. Ashwini B.
Chapter 7 INTRODUCTION TO GESTURE RECOGNITION 19
Mrs. Varalakshmi K. R.
Chapter 8 SYSTEM REQUIREMENTS FOR GESTURE
RECOGNITION
22
Dr. Rajiv Ranjan Singh
Chapter 9 ARDUINO UNO BOARD FEATURES 25
Mr. Kiran Kale
Chapter 10 ULTRASONIC SENSOR (HC â SR04) 28
Dr. Rajiv Ranjan Singh
Chapter 11 SYSTEM DESIGN FOR VIRTUAL HAND
GESTURE DETECTION
31
Mr. Kiran Kale
Chapter 12 SEGMENTING AND DETECTING HANDS 34
Dr. Rajiv Ranjan Singh
Chapter 13 TESTING, VIRTUAL KEYBOARD, AND TEST
TYPES
37
Mr. Kiran Kale
3. Preface
Advanced man-machine interfaces may be built using gestural interfaces based on vision
technology, but size of pictures rather than specialized acquisition equipment. Segmentation of
the hand, tracking, and identification of the hand position are the three key issues (feature
extraction and classification). Since the first computer was invented in the modern age,
technology has impacted every aspect of our social and personal life, revolutionizing how we
live. A few examples are browsing the web, writing a message, playing a video game, or saving
and retrieving personal or business data.
The technique of turning raw data into numerical features that can be handled while keeping
the information in the original data set is known as feature extraction. Compared to using
machine learning on the raw data directly, it produces superior outcomes. It is possible to
extract features manually or automatically. Identification and description of the characteristics
that are pertinent to a particular situation are necessary for manual feature extraction, as is the
implementation of a method to extract those features. Having a solid grasp of the context or
domain may often aid in making judgements about which characteristics could be helpful.
Engineers and scientists have created feature extraction techniques for pictures, signals, and
text through many years of study. The mean of a signal's window is an illustration of a
straightforward characteristic. Automated feature extraction eliminates the need for human
involvement by automatically extracting features from signals or pictures using specialized
algorithms or deep networks. When you need to go from collecting raw data to creating
machine learning algorithms rapidly, this method may be quite helpful. An example of
automated feature extraction is wavelet scattering.
The initial layers of deep networks have essentially taken the position of feature extraction with
the rise of deep learning, albeit primarily for picture data. Prior to developing powerful
prediction models for signal and time-series applications, feature extraction continues to be the
first hurdle that demands a high level of knowledge. For this reason, among others, human-
computer interaction (HCI) has been regarded as a vibrant area of study in recent years. The
most popular input devices haven't changed much since they were first introduced, perhaps
because the current devices are still useful and efficient enough. However, it is also generally
known that with the steady release of new software and hardware in recent years, computers
have become more pervasive in daily life. The bulk of human-computer interaction (HCI) today
is based on mechanical devices such a keyboard, mouse, joystick, or game-pad, however due
to their capacity to perform a variety of tasks, a class of computational vision-based approaches
is gaining popularity in natural recognition of human motions.
Use of human motions, particularly hand gestures, has increased recently. an essential
component of human-computer intelligent interaction (HCII), which drives research into
modeling, analyzing, and hand gesture recognition. Numerous methods created for HCII may
also be used to teleconferencing, robot control and surveillance. By combining natural motions
with verbal and non-verbal communication, the gravity of the issue may be clearly shown.
Dr. Rajiv Ranjan Singh
Editor
4. ISBN 978-81-962236-3-2 March 2023
1
Feature Extraction and Gesture Recognition
CHAPTER 1
INTRODUCTION TO SIGN BOARD DETECTION
Mr. Tirumala Vasu G.
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- tirumala.vasu@presidencyuniversity.in
Applications for traffic sign board detection and identification systems include automated cars, driver
assistance, and safety information. Due to the intricate surroundings of the routes and surrounding
scenery, as well as the possibility of road signs being present under various situations, there are several
possible obstacles concerning the identification process. Implementation of a processing board and a
camera installed on the cars for the detecting of traffic sign boards [1]. The identification of sign boards
has been done using a variety of techniques. Several gadgets can detect traffic sign boards to provide
drivers with driving information. However, under environmental stresses, these devices malfunction. It
might be because the road sign boards are faded, or it could change depending on the external lighting
conditions. Weather problems and air pollution have an impact on driving as well. To protect drivers,
it is desirable to have a traffic sign board detection and identification system that can deliver safety
information in a variety of driving situations before anything potentially dangerous happens. There is
currently no technology available that provides the driver with this kind of information under any
circumstances. In the age of automation, this technology will be useful for providing the driver with
some safety information in real-time [2]. Additionally, it may be used for roadside upkeep. At the
moment, transportation officials physically inspect traffic sign boards. This technology allows for the
automation of the process and Figure 1 displays a schematic depiction of this study.
Figure 1: Display the Schematic Representation of Sign Board Detection [3].
By offering a quick means to locate and recognize sign boards, automatic methods of doing so
may make a significant impact. Every person ought to have noticed a range of road signs that
have vital functions, whether they were drivers, pedestrians, or travellers. Our ability to inform
5. ISBN 978-81-962236-3-2 March 2023
2
Feature Extraction and Gesture Recognition
drivers and control traffic is made possible by such crucial route directives. The motorist must
pay careful attention to the sign boards to react correctly as traffic control devices [4].
The road signs we've been following have been there for a very long time. Road signs from the
past were landmarks that indicated a route or distance. Multi-directional signage at the
intersection was common during the mediaeval era which gave instructions to both towns and
communities. Many people have used graphic signs in response to the introduction of
motorized cars and the growing demand on the roadways. Additionally, it aims to improve
traffic safety by using efficient warning, control, and observations were done [5]. Standardized
sign boards with a thorough description of their dimensions and form are suggested by the
Motor Vehicle Act of 1988. Road sign types that have recently been widely characterized
include:
⢠Mandatory signs
⢠Cautionary signs,
⢠Informatory signs
Nothing more than a sign's colour serves as another example of how it functions. Blue circles
provide essential guidance, such as compulsory turn left and other phrases. Descriptive
indications are in blue rectangles. Red dominates the color scheme of the triangle-shaped
signage. The kid's "STOP" and "Provide WAY" are the only exceptions to the principles of
form and color that give more value to any sign.
The quality of a person's life and the development of society are two factors that are strongly
correlated with mobility. This is the foundation of international trade and services, and as a
result, the backbone of the world economy. The level of human mobility has increased in
industrialized nations as a result of the mass production of automobiles and the growth of
transportation infrastructure. However, increased mobility also has important unfavourable
effects. For the creation, acquisition, and maintenance of vehicles during mass mobilization, a
significant sum of money is needed. Vehicle exhaust pollution and noise levels are now
reaching severe levels. In actuality, too congested roads reduce travel efficiency and increase
the risk of traffic-related injuries to individuals [6]. The Intelligent Transportation System (ITS)
for the automobile sector should be made available to maximize productivity, comfort, and
accessible safety for the community, and enhanced the management of traffic networks [4].
It is necessary to implement Advanced Driver Assistance Systems (ADAS) for drivers and
more efficient automobile infrastructure. Future research on autonomous cars should be more
focused on advanced human automobile systems. One of the most crucial elements of any
cutting-edge automobile application is camera-based software and computer vision. The use of
computer vision may be dated back to the early 1990s when researchers worked to create
camera-based systems for mobile autonomous robots and self-driving vehicles. There are many
thorough analyses of computer vision applications in connected cars. Applications for detecting
obstacles, pedestrians, and roads are discovered and assessed. According to the WHO (World
Health Organization), traffic accidents result in more than 1.2 million fatalities yearly and a far
greater number of non-fatal occurrences, all of which have a serious impact on the safety and
well-being of victims and their families. Research on the avoidable causes of accidents on the
road is necessary to improve traffic safety. The main source of data used to guide research and
policy on the causes of road accidents is police reports on collisions. The driver's carelessness
and lack of attention resulted in several injuries. Either they disregard traffic signals or they
6. ISBN 978-81-962236-3-2 March 2023
3
Feature Extraction and Gesture Recognition
run over signboards. Most crashes are caused by disregarding a one-way sign board or the
speed restriction sign board. The notion of signboard identification and recognition, which
warns the motorist when a sign board is spotted, was developed to avert these incidents [7].
Bibliography
[1] Y. D. and A. Kumar, âA New Area based Algorithm for Traffic Sign Board Detection
and Classification by SVM Approach,â Int. J. Comput. Appl., 2018, doi:
10.5120/ijca2018917945.
[2] R. Nayak and D. Chandra, âRF Based Sign Board Detection and Collision Avoidance
System,â J. Adv. Res. Electr. Electron. Eng. (ISSN 2208-2395), 2017, doi:
10.53555/nneee.v4i7.165.
[3] Y. D. Chincholkar and A. Kumar, âTRAFFIC SIGN BOARD DETECTION AND
RECOGNITION FOR AUTONOMOUS VEHICLES AND DRIVER ASSISTANCE
SYSTEMS,â ICTACT J. Image Video Process., 2019, doi: 10.21917/ijivp.2019.0277.
[4] G. Loy and N. Barnes, âFast shape-based road sign detection for a driver assistance
system,â 2004. doi: 10.1109/iros.2004.1389331.
[5] K. Kousalya, R. S. Mohana, E. Kavitha, T. Kavya, and S. Keerthana, âInvestigation of
applying various filters for traffic sign board detection using convolution neural
network,â 2021. doi: 10.1063/5.0068623.
[6] K. Ashok Kumar, C. Venkata Deepak, and D. Vamsi Rattaiah Chowdary, âSign Board
Monitoring and Vehicle Accident Detection System Using IoT,â 2019. doi:
10.1088/1757-899X/590/1/012015.
[7] E. O. Polat, G. Mercier, I. Nikitskiy, E. Puma, T. Galan, S. Gupta, M. Montagut, J. J.
Piqueras, M. Bouwens, T. Durduran, G. Konstantatos, S. Goossens, and F. Koppens,
âFlexible graphene photodetectors for wearable fitness monitoring,â Sci. Adv., 2019,
doi: 10.1126/sciadv.aaw7846.
7. ISBN 978-81-962236-3-2 March 2023
4
Feature Extraction and Gesture Recognition
CHAPTER 2
PROBLEM RECOGNITION
Ms. Samreen Fiza
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- samreenfiza@presidencyuniversity.in
Both automated and manual vehicles need effective automatic signal detection and recognition
systems. The protection gap caused by mistakes in human interpretation must be closed by
manual autos. Given the speed and amount of the data, there is a higher need for effective
feature recognition and extraction algorithms [1]. Some of the issues are listed below:
⢠The significance of sign boards for segmentation techniques and identification tasks.
⢠Recognize colour spaces and how to convert between them.
⢠Determining the most straightforward method to extract traits from sign boards.
⢠An effective algorithm for real-time classification and identification of sign boards.
⢠To put certain algorithms into use on hardware.
Driver Support System (DSS):
It continuously monitors and interprets signboards. It works to alleviate road congestion,
increase security, and prevent risky circumstances like collisions. One of the subjects that hasn't
attracted much attention is the capacity to comprehend and comprehend sign boards. When a
motorist reaches the speed limit, certain gadgets can alert individuals. Future intelligent smart
cars will be able to make an informed about their direction, route, etc. depending on the
indicators they have detected. While it may potentially be a component of a completely
autonomous system, help may very well be provided right now to allow transport to manage
its speed, send a message alerting the driver when it exceeds the legal limit, or even reveal the
availability of a signal sooner [2]. The main idea is to help the driver with a few tasks so they
can concentrate on driving. Highway upkeep: Used for testing, installation, and sign board
status. Due to the sign plywood sheets' constant change in appearance with distance and the
official's intense concentration required to find the cracked boards, operating and monitoring a
camera is problematic. When a sign board meets the necessary conditions, the Road Sign
Detection and Identification System will automatically carry out the function and notify the
operator when a sign is discovered but not tagged [3].
Figure 1: Illustrated the Block Diagram of the Driver Support System [4].
8. ISBN 978-81-962236-3-2 March 2023
5
Feature Extraction and Gesture Recognition
Using a car security camera, pictures are captured. Such files are optimally handled for picture
scaling. An image's symbol is located via color detection. You may use the image as a warning
to find objects that are near the red or blue hue of a notice board. Shape detection is utilised to
distinguish between the genuine sign board and these undesired images. The output from the
detection stage is sent into the neural networks at the recognition step. The edge detection
pictures and the CNN output images are evaluated against the training set of images, and the
results are compared [5].
The main steps are:
⢠Resizing images.
⢠Selecting potential colour-dependent artefacts.
⢠Sorting potential items based on form.
⢠Employing a neural network to recognize the filtered object.
⢠Display the identified sign on LCD as the suggested system consists of several sign
board recognition devices cooperating.
i. Image Acquisition:
A picture is acquired by a camera that is placed on a moving vehicle. More than 3400 photo
samples are gathered under different lighting circumstances to design and test the algorithm
[6].
ii. Segmentation of colour:
The sign board has to be isolated from the rest of the image depending on the use throughout
the sign boards. All the tiny blobs and abnormalities that are too small to see sign boards are
removed from the previous stage's output. A pattern tries to match algorithm labels and verifies
the remaining elements [7].
iii. Pattern Matching Algorithm:
Calculating the Euclidean distance amongst patterns in two images and then trying to match
them is the basic notion behind pattern matching [8].
Bibliography
[1] S. K. and A. Kumar, âDetection and Recognition of Road Traffic Signs - A Survey,â
Int. J. Comput. Appl., 2017, doi: 10.5120/ijca2017913038.
[2] J. E. Glass, J. D. Grant, H. Y. Yoon, and K. K. Bucholz, âAlcohol problem recognition
and help seeking in adolescents and young adults at varying genetic and environmental
risk,â Drug Alcohol Depend., 2015, doi: 10.1016/j.drugalcdep.2015.05.006.
[3] P. Fastrez and J. B. HauĂŠ, âDesigning and evaluating driver support systems with the
user in mind,â Int. J. Hum. Comput. Stud., 2008, doi: 10.1016/j.ijhcs.2008.01.001.
[4] P. PaclĂk, J. NovoviÄovĂĄ, P. Pudil, and P. Somol, âRoad sign classification using Laplace
kernel classifier,â Pattern Recognit. Lett., 2000, doi: 10.1016/S0167-8655(00)00078-7.
[5] E. T. Stanforth, B. E. McCabe, M. P. Mena, and D. A. Santisteban, âStructure of
Problem Recognition Questionnaire with Hispanic/Latino Adolescents,â J. Subst. Abuse
Treat., 2016, doi: 10.1016/j.jsat.2016.08.005.
[6] A. Morales, P. Horstrand, R. Guerra, R. Leon, S. Ortega, M. DĂaz, J. M. MeliĂĄn, S.
LĂłpez, J. F. LĂłpez, G. M. Callico, E. Martel, and R. Sarmiento, âLaboratory
Hyperspectral Image Acquisition System Setup and Validation,â Sensors, 2022, doi:
9. ISBN 978-81-962236-3-2 March 2023
6
Feature Extraction and Gesture Recognition
10.3390/s22062159.
[7] D. Hema and D. S. Kannan, âInteractive Color Image Segmentation using HSV Color
Space,â Sci. Technol. J., 2019, doi: 10.22232/stj.2019.07.01.05.
[8] P. Suresh, R. Sukumar, and S. Ayyasamy, âEfficient pattern matching algorithm for
security and Binary Search Tree (BST) based memory system in Wireless Intrusion
Detection System (WIDS),â Comput. Commun., 2020, doi: 10.1016/j.comcom.2019.11.
035.
10. ISBN 978-81-962236-3-2 March 2023
7
Feature Extraction and Gesture Recognition
CHAPTER 3
CLASSIFICATIONS OF IMAGE CONVERSION
Mr. Tirumala Vasu G.
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- tirumala.vasu@presidencyuniversity.in
Gray Scale Conversion:
The major purpose of the processing procedure is to transform photographs to grayscale images
for smoother operation since the road sign imagery is coloured RGB images. Therefore, a 24-
bit RGB picture is transformed into an 8-bit grayscale image for quicker real-time
interpretation. The RGB picture of a driving process with a traffic signature and the equivalent
grey scale image is included in the testing findings below. This result is produced as the first
stage of preprocessing, which involves preprocessing a grayscale picture frame for background
subtraction [1].
Figure 1: Illustrated the Color and Gray Scale Image.
Edge Detection:
Background subtraction is a kind of image processing that evaluates the proximity of a line or
an edge in a picture generating appropriate plans for them. The main driving force behind edge
segmentation is to condense the image information to reduce the amount of information that
has to be produced. The limit pixels that connect two separate locations with shifting image
adequate attributes frequently constitute define an edge [2]. There are several methodologies
and calculations to find the edge in image preparation, but careful administration performs
better than those of other calculations due to its high precision and minimal handling volume.
For increased accuracy and performance, the edge detector by Canny is employed. The
experiment is therefore carried out to extract the edges from a grayscale display case. Their
input grayscale imagery is shown alongside the measured image frame [3].
Figure 2: Illustrated the Grayscale image with Edge.
11. ISBN 978-81-962236-3-2 March 2023
8
Feature Extraction and Gesture Recognition
Extraction Of Traffic Sign Board
The shape retrieval for the traffic sign board is conducted using the Hough Transform
technique. This technique simultaneously detects traffic sign boards in a variety of traffic
circumstances with high accuracy. The findings below were achieved using a documentary
series of road scenes and a Windows 7 operating environment with 4GB RAM [4].
Figure 3: Illustrated the Traffic sign board Detection by Hough Transform.
Feature Classification
SVM classifier is used to categorize the traffic display board into classes to help the driver.
The SVM classifier was chosen for its speedy real-time response. Using SVM, the pro-per-
response is determined as Support vector machines categorize binary depending on whether a
match was discovered or not; there will be no ambiguity in the classification. They are also
substantially speedier. Each dataset class for something like the training set has a MAT file
ready. One per lesson, a training set has been provided. Then, using a machine learning
approach called the SVM classifier, we recover the class name from the observed traffic
signature. The class name may be anything like informative, turns, stop, or no turns. Once the
kind of traffic signature has been identified, it may be shown on the computer for autonomous
or driving assistance cars. With the aid of a processing board and a camera installed on a
vehicle, the traffic sign board is shown together with the name of the class after first being
compared with the database. The findings shown below were acquired using MATLABâs
dataset class name. To help with information about just the detected traffic sign board, the class
name of the sign board is shown [5].
Figure 4: Illustrated the Results Obtained after SVM Classification.
Bibliography
[1] N. Jothiaruna, K. Joseph Abraham Sundar, and M. Ifjaz Ahmed, âA disease spot
segmentation method using comprehensive color feature with multi-resolution channel
12. ISBN 978-81-962236-3-2 March 2023
9
Feature Extraction and Gesture Recognition
and region growing,â Multimed. Tools Appl., 2021, doi: 10.1007/s11042-020-09882-7.
[2] J. Konrad, M. Wang, P. Ishwar, C. Wu, and D. Mukherjee, âLearning-based, automatic
2D-to-3D image and video conversion,â IEEE Trans. Image Process., 2013, doi:
10.1109/TIP.2013.2270375.
[3] S. Xie and Z. Tu, âHolistically-Nested Edge Detection,â Int. J. Comput. Vis., 2017, doi:
10.1007/s11263-017-1004-z.
[4] J. A and C. K. Selva Vijila, âDetection and Recognition of Traffic Sign using FCM with
SVM,â J. Adv. Chem., 2017, doi: 10.24297/jac.v13i6.5773.
[5] K. Kim, N. T. Duc, M. Choi, and B. Lee, âEEG microstate features for schizophrenia
classification,â PLoS One, 2021, doi: 10.1371/journal.pone.0251842.
13. ISBN 978-81-962236-3-2 March 2023
10
Feature Extraction and Gesture Recognition
CHAPTER 4
ALGORITHMIC DEVELOPMENT AND DESIGN
Ms. Samreen Fiza
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- samreenfiza@presidencyuniversity.in
Block Diagram Description:
Figure 1 shows the block diagram of the suggested remedy. The target image is first split and
freed from attention and other unneeded artefacts. By comparing the regions of all red
remnants, it is possible to determine whenever an item is necessary. Items with a small surface
area are not eligible. A clever method is used to find edges on the intended artefacts. This
describes the objects and facilitates our understanding of their outside structure. Such property
documents may be tagged so that we can search for associated component objects. The labelled
objects are sent via the Hough Transform [1]. We establish if our artefact is cylindrical using
the Hough Transform. The target circle is discovered by studying circular artefacts, and once
detected, the target circle should be destroyed from the picture. We unfortunately only have a
picture of numerals that have been labelled as black. Then, both noise and undesired items will
be blocked. After removing the noise, we split individual pictures to separate the values. Then,
the numerals would be detected by the optical character detection application [2].
Figure 1: Illustrated the Algorithmic Report [3].
i. Image Pre-processing
⢠Resizing
The camera is processing the information that was supplied. The resizing is complete after the
photograph has been edited. Pixels have been either added or subtracted during resizing. It's
also known as resampling. The pixels' width and height impulse response functions on how
much the picture is magnified. A picture may be resized in several ways, such as by cropping
it to a smaller size. Here, we're compressing the picture since it's a lot more straightforward
[4].
14. ISBN 978-81-962236-3-2 March 2023
11
Feature Extraction and Gesture Recognition
⢠Gray Scale Conversion
The RGB Color System is a combination in which Red (R), Green (G), and Blue (B) colours
are integrated to recreate various hues, and is used to represent the input picture. The RGB
graphic from the road sign photograph is converted to a greyscale picture. Images in grey scale
provide characteristics about the light. Each pixel's value corresponds to the number of light.
A word or byte ought to be employed to represent each pixel. The brightness of the 8-bit
representation ranges from 0 to 255, with 0 being black and 1 denoting white. This translation
minimizes the complexities of the mathematical procedures done in each picture and makes
them accessible. To extract information, the RGB picture of something like the road sign that
was captured is processed to a greyscale image [5].
Figure 2: Illustrated the Image as Input.
It improves the image given that there is a significant probability of poor picture quality as we
are analyzing photographs of various locations as well as situations. Because of the poor
resolution, the picture is blurry, noisy, hazy, curved, and many other things. This case's circles
are likewise particularly challenging to trace. To get rid of it, we need to improve the picture
quality. We should carry out some relationship between performances. Making the image
appear as a binary image might perhaps be done first [6].
Bibliography
[1] C. L. Chen and M. C. Tsai, âKinematic and Dynamic Analysis of Magnetic Gear with
Dual-Mechanical Port Using Block Diagrams,â IEEE Trans. Magn., 2018, doi:
10.1109/TMAG.2018.2846794.
[2] N. I. Pak, T. A. Stepanova, I. V. Bazhenova, and I. V. Gavrilova, âMultidimensional
algorithmic thinking development on mental learning platform,â J. Sib. Fed. Univ. -
Humanit. Soc. Sci., 2019, doi: 10.17516/1997â1370â0410.
[3] Executive Office of the President, âBig Data: A Report on Algorithmic Systems,
Opportunity , and Civil Rights,â White House Tech. Rep., 2016.
[4] C. Gillmann, P. Arbelaez, J. T. Hernandez, H. Hagen, and T. Wischgoll, âAn
uncertainty-aware visual system for image pre-processing,â J. Imaging, 2018, doi:
10.3390/jimaging4090109.
[5] N. Goyal, N. Kumar, and Kapil, âOn solving leaf classification using linear regression,â
Multimed. Tools Appl., 2020, doi: 10.1007/s11042-020-09899-y.
15. ISBN 978-81-962236-3-2 March 2023
12
Feature Extraction and Gesture Recognition
[6] S. M. K. Chaitanya and P. Rajesh Kumar, âKidney images classification using cuckoo
search algorithm and artificial neural network,â Int. J. Sci. Technol. Res., 2020.
16. ISBN 978-81-962236-3-2 March 2023
13
Feature Extraction and Gesture Recognition
CHAPTER 5
DIFFERENT FEATURES EXTRACTION
Mr. Tirumala Vasu G.
Assistant Professor, Department of Electronics and Communication Engineering, Presidency
University, Bangalore, India
Email Id- tirumala.vasu@presidencyuniversity.in
Color Detection
The sign board has to be distinguishable from the rest of the collection based on its colour
details. The colour and shape of the road signs are taken into consideration for categorization
in a thorough way to increase the segmentation's accuracy. Through feature extraction, it is
important to keep out a large portion of the non-target zone. The default representation for
recording by photographers is RGB space, which is the accepted standard. The RGB space
concept could be employed to calculate any value. Therefore, it is difficult to obtain good
segmentation when the light environment changes. That is to say, the value of the r-Picture
only reflects the red component of the visual, but the g-Image shows the average value of the
underlying image. When we subtract, the red valuation artifact remains and there is zero in the
remaining amount, indicating that some of the sections are null, which represents the absence
of an object in each of those areas [1].
Figure 1: Illustrated the Detection of Red Color [2].
Edge Detection
Edge includes data on the shape of the picture. One of the most used tools for processing images
is this one. This is done to recognize the locations where the picture intensity abruptly changes
or disappears [3]. Edges are well recognized to contain significant local changes in the picture's
strength. The area outside of the two main areas of the photograph is often where the edges are
seen. According to the object border, surface limitation, and peculiarity, the intensity of
something like the object changes. By eliminating unnecessary data, processing an image using
a method of edge detection requires much fewer data. There are many different kinds of edges,
including step edges, ramp edges, slope edges, and roof edges. Four steps must be followed to
locate each of these edges [4]. Below are a few of them:
⢠Smoothening,
⢠Enhancement,
⢠Detection,
17. ISBN 978-81-962236-3-2 March 2023
14
Feature Extraction and Gesture Recognition
⢠Localization
Derivatives will be used to explore edges. Through: It is possible to distinguish the spots on
the edge.
⢠Finding the first component minimum or local maxima.
⢠The zero crossing of the following equation.
The following are some factors for the best edge identification: Good Detection: The optimal
detecting will reduce the probability of a set of circumstances detected by noise, false edges
detected by noise, and false borders detected by noise.
Good Localization
The measured edges should be similar to the real edges. Just one single-edged response: caught
in the first criterion. If there are two reactions on the very same edge, one of the edges has to
be incorrect. The first criteria, however, did not include the multiple response criteria by the
mathematical form and it had to be made explicit.
Canny Edge Detection
The measured edges need to approach the actual edges. It consisted of only one answer with
only one edge, and it answered the first requirement. One of the edges must be erroneous if
you see two responses on the same edge. The multiple answer qualification by the mathematical
form, however, was not a part of the first criteria and therefore needed to be stated explicitly
[5].
This method greatly minimizes the probability to be processed while keeping valuable
contextual information on object borders, simplifying picture evaluation. Although every edge
detector has verified this concept, the clever detection phase is the best in terms of lower
mistake rates. Canny incorporated certain essential ideas to improve the existing edge detector.
The first and most conspicuous is the low mistake rate. The margins of pictures must not be
skipped, and there must be minimal responses to blank edges. The following consideration is
where the edge markings are located. It is well known that there is little to no difference seen
between the edge pixels the detector detects and or the actual edge. A single response to a
single edge makes up the third criterion.
It was accomplished because none of the inclusion conditions was broad enough to completely
preclude numerous responses at the margins. First, the clever edge detector smoothest the
picture to eliminate noise on the strength of these parameters. Then, high spatial derivative
areas are illuminated using the image gradient. The program then surrounds such areas and
shrinks any pixels that do not exceed the limit. The gradient has now become even flatter due
to hysteresis. Two forms of hysteresis exist. A non-edge has now been formed if the magnitude
is 0 unless it is below the original point. An edge is recognized when the intensity is greater
than the strict limit [6].
Bibliography
[1] K. Y. Kim, S. H. Yoon, I. K. Kim, H. G. Kim, D. K. Kim, E. Le Shim, and Y. J. Choi,
âFlexible narrowband organic photodiode with high selectivity in color detection,â
Nanotechnology, 2019, doi: 10.1088/1361-6528/ab35ff.
[2] F. Su, G. Fang, and J. J. Zou, âA novel colour model for colour detection,â J. Mod. Opt.,
2017, doi: 10.1080/09500340.2016.1261194.
[3] A. Munjiza and J. P. Latham, âSome computational and algorithmic developments in
computational mechanics of discontinua,â Philos. Trans. R. Soc. A Math. Phys. Eng.
18. ISBN 978-81-962236-3-2 March 2023
15
Feature Extraction and Gesture Recognition
Sci., 2004, doi: 10.1098/rsta.2004.1418.
[4] M. Z. Osman, M. A. Maarof, and M. F. Rohani, âImproved dynamic threshold method
for skin colour detection using multi-colour space,â Am. J. Appl. Sci., 2015, doi:
10.3844/ajassp.2016.135.144.
[5] R. Pradeep Kumar Reddy and C. Nagaraju, âImproved canny edge detection technique
using S-membership function,â Int. J. Eng. Adv. Technol., 2019, doi:
10.35940/ijeat.E7419.088619.
[6] B. Iqbal, W. Iqbal, N. Khan, A. Mahmood, and A. Erradi, âCanny edge detection and
Hough transform for high resolution video streams using Hadoop and Spark,â Cluster
Comput., 2020, doi: 10.1007/s10586-019-02929-x.
19. ISBN 978-81-962236-3-2 March 2023
16
Feature Extraction and Gesture Recognition
CHAPTER 6
DIFFERENT ATTRIBUTES OF FEATURES EXTRACTION
Ms. Ashwini B.
Assistant Professor, Department of Electronics and Communication Engineering, Presidency
University, Bangalore, India
Email Id- ashwini.b@presidencyuniversity.in
An image's edges, shape, color, and area of interest are derived from earlier processing. These
properties are now very important in the last stage of picture processing. They match certain
areas of the picture. This is now used in the next phase of pattern matching [1].
Classification
Road sign detection has been studied for more than 20 years, and several methods have been
developed. Given that they are more adaptable and successful overall than other tactics, CNNs.
A critical element for many applications is sign board recognition (TSR), which operation
procedures learning and computer vision methods. The driver may be brought to the attention
for inappropriate behaviours and associated dangers thanks to signing board recognition. The
identification of sign boards has markedly increased since 2011 with the release of the GTSRB
public benchmark dataset, which presents a variety of demanding difficulties, including
volatility in point of view, poor brightness, motion blur, occlusion, fading colors, and low
resolution. Convolutional neural systems have achieved the greatest results thanks to
discerning characteristics among a selection of remarkable solutions that have been published
in the literature. Later in 2012, the 8-layer Alex Nets reported that a huge amount of excitement
in machine learning and computer vision will be sparked by the Image Net Large Scale
Developed As a key Challenge [2]. After ILSVRC 2012, CNNs were expanded to perform
several tasks, including target identification and picture segmentation, and as a consequence,
more and more spectacular results have been attained according to a variety of benchmarks.
Here, pattern matching is being used. Instead of using the whole picture, the pattern-matching
technologies are based on the object's polygon. It cuts down on processing time, which is
essential for fulfilling real-time needs. The algorithm is scaling as well as transform-invariant.
Tested with 500 images and multiple movies, it demonstrates a high degree of accuracy with a
97% identification rate [3].
Pattern Matching Algorithm
This algorithm's fundamental concept is to calculate the Euclidean distances seen between
patterns of 2 images, as shown in Figure 1. Assuming that A and B are respective A and B
points, where A represents a conventional signboard and B represents a candidate entity, the
Euclidean distance between A and B's vertices is then calculated [4].
Figure 1: Illustrated the Pattern Matching [5].
The algorithm is comprised of steps given by,
20. ISBN 978-81-962236-3-2 March 2023
17
Feature Extraction and Gesture Recognition
Step 1: Isolation of Candidate Object
The candidate object shall be separated through a minimal rectangle. The outline of pattern A
is chosen as one of the sign boards and is drawn entirely throughout the region. The red triangle
alarm illustrates pattern A. The 3 vertexes of Pattern A are (xmin, ymax), (xmax, ymax) and
(xmax + xmin)/2 and (ymin).
Step 2: Pattern B is matched with the pattern A
Candidate Object, which is Pattern B, shape is examined by illustrating the contours. Then the
patterns of A and B pattern are matched to each other to find similarities between them in color
and shape. According to the Figure 2, the pattern B color was perceived throughout the process
of color segmentation. The preliminary similarity is determined by comparing the number of
vertexes of the two patterns, that is B's pattern with A's pattern. For instance, triangles have
three vertexes, and a triangle depicts a danger or an alarming sign; the rectangle consists of
four vertexes and it depicts an information sign etc. Pattern B's vertexes which are highlighted
with the help of dots as shown in figure 3.8 are then determined.
Figure 2: Illustrated the 8 Vertexes of pattern B.
Step 4: Threshold Distance Calculation
The Euclidean distances from the vertexes of pattern A to the vertexes of pattern B must be
lesser than the threshold T, i.e., D1 < T, D2 < T, Dn < T for proper matching. The greater
distance in vertical coordinates shows that the greater the matching difference in the 2 patterns,
A and B.
The formula supporting the threshold value can be interpreted in mathematical terms as
đťđ = (đąđŚđđą â đąđŚđ˘đ§) Ă (đ˛đŚđđą â đ˛đŚđ˘đ§) â đđđŹđ¤ đđŤđđ
The threshold value is adaptable to the object size, i.e., the larger the object, the larger the blob
region and the larger the threshold value, which is shown in equation 8. This technique avoids
the requirement for standardization. It means that the strategy varies depending on the size of
the signboard, thus the gap between the vehicle and the road signboard.
Bibliography
[1] K. Bian, M. Zhou, F. Hu, and W. Lai, âRF-PCA: A New Solution for Rapid
Identification of Breast Cancer Categorical Data Based on Attribute Selection and
Feature Extraction,â Front. Genet., 2020, doi: 10.3389/fgene.2020.566057.
[2] A. Kaklotar, âResearch On Different Feature Extraction And Mammogram
Classification Techniques,â INDIAN J. Appl. Res., 2019, doi: 10.36106/ijar/2612679.
21. ISBN 978-81-962236-3-2 March 2023
18
Feature Extraction and Gesture Recognition
[3] P. Ghamisi, J. A. Benediktsson, G. Cavallaro, and A. Plaza, âAutomatic framework for
spectral-spatial classification based on supervised feature extraction and morphological
attribute profiles,â IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 2014, doi:
10.1109/JSTARS.2014.2298876.
[4] N. Patel and N. Shekokar, âImplementation of pattern matching algorithm to defend
SQLIA,â 2015. doi: 10.1016/j.procs.2015.03.078.
[5] M. Tahir, M. Sardaraz, and A. A. Ikram, âEPMA: Efficient pattern matching algorithm
for DNA sequences,â Expert Syst. Appl., 2017, doi: 10.1016/j.eswa.2017.03.026.
22. ISBN 978-81-962236-3-2 March 2023
19
Feature Extraction and Gesture Recognition
CHAPTER 7
INTRODUCTION TO GESTURE RECOGNITION
Mrs. Varalakshmi K. R.
Assistant Professor, Department of Electronics and Communication Engineering, Presidency
University, Bangalore, India
Email Id- varalakshmi.kr@presidencyuniversity.in
The most recent innovation is the virtual keyboard technology. The virtual keyboard
technology enables users to type on any flat surface as if it were a keyboard by combining
artificial intelligence or sensor technologies. Almost any current platform may be used with
Virtual Keyboards to produce multilingual text content that can then be sent straight to PDAs
or even web pages [1], [2].
⢠Virtual Keyboard, a compact, useful, well-made, and simple-to-use tool, serves as the
ideal cross-platform text input solution.
⢠The primary attributes are platform-independent multilingual support for copy/paste,
built-in language options, and keyboard text input.
⢠No changes to the system's current language settings, the same operations support as in
a standard text editor, a simple and user-friendly interface and design, and a tiny file
size.
Computer scientists are increasingly interested in the new, intuitive ways that people might
communicate with computers, which is a discipline known as human-computer interaction
(HCI). Hand gesture recognition, which enables the use of hand gestures to operate computers,
is one of the most extensively investigated issues in this area. How to make hand motions
understandable to computers is the main challenge in gesture interaction. The two primary
categories of the methodologies at hand are Data-Glove based and Vision-Based approaches
[3]. The Data-Glove-based approaches digitize hand and finger movements into multi-
parametric data using sensor devices. Collecting information on hand arrangement and
movement is made simple by the additional sensors. The gadgets, however, are highly pricey
and provide customers with a very burdensome experience. The Vision Based techniques, in
contrast, merely need a camera, creating a natural contact between people and computers
without the need for any additional hardware. These systems often describe artificial vision
systems that are mostly developed in software, which supplement biological vision. This
strategy is the least expensive and bulkiest. Additionally, these systems need to be adjusted to
satisfy the demands, which include accuracy and resilience. The hand gesture approach has the
advantage of being simpler to use than other techniques now in use. The traditional manner of
interacting with a computer through a mouse, keyboard, and controllers will need to alter as a
result of this methodology since it allows for hand movements [4], [5].
In this work, we used software that handled the 2-D real-time video from a camera and
examined it frame by frame to identify the hand motion at each frame utilizing computer vision,
the second technique. By running the picture through some filters and applying our
computations to the binary image, we were able to identify the gesture from the image using
image processing methods. Real-time functionality is one of the numerous requirements that
the program must meet. This is required for the interface to be fully interactive and
23. ISBN 978-81-962236-3-2 March 2023
20
Feature Extraction and Gesture Recognition
understandable. This is quantified in terms of frames per second (fps), which effectively tells
us how often the program is refreshed. There is a lag between the actual occurrence and the
recognition if the refresh rate is slow. When movements are made quickly one after another,
the event could not even be noticed. A real-time function is essential because of this. Flexibility
and how effectively it connects with both new and old applications are other requirements. The
software must be able to handle external programs with ease to be a contender for real-world
applications. Both the user and the application developer will gain from this. The program must
also be sufficiently precise to be used in practice. To meet each of these criteria, we used a
large collection of training sets of hand gestures for various persons and settings, applying a
variety of variables to each set to identify the proper gesture. Additionally, we need to select
an appropriate programming language to handle real-time hand tracking. After conducting
extensive research on the subject, we chose to implement our software's image processing
section using the Intel Open Source Computer Vision (OpenCV) Library in Python using
Microsoft Visual Studio 2010 [6].
This choice was made due to OpenCV's advantages in real-time image processing. Our project
encountered some issues when it was being developed, including issues with accuracy, the
surroundings, light, and tracking speed. We dealt with these issues as much as we could; people
will discuss in more depth about them once our product is accurate enough to be put to use. In
this project, we've also included a straightforward Arduino-based hand gesture control that lets
you use hand gestures to operate a few web browser features like switching between tabs,
navigating up and down web pages, switching between tasks (applications), pausing or playing
videos, and adjusting the volume (in VLC Player). The Hand Gesture Control of a Computer
using Arduino truly works on a very basic concept. To measure the distance between your hand
and the ultrasonic sensor, all you need to do is utilize two ultrasonic sensors and an Arduino
board. This knowledge enables the computer to take appropriate action.
The suggested system may be implemented by employing a built-in camera or a webcam that
recognizes hand movements and hand tips and analyzes these frames to carry out certain mouse
actions. It may be utilized in circumstances when a physical mouse is not an option.
Technology reduces the need for gadgets while enhancing human-computer connection. A
novel idea for virtual keyboard adaption is put forward. Our goal is to create an interface that
is user-friendly by adjusting the key size and centroid distance between the keys. We provide
several states (keyboards) based on key size and center distance as a means of adaptation to
decipher human motions use mathematical algorithms. Simple finger movements may be used
by users to interact with the keyboard without actually touching it. Cameras and computer
vision algorithms have been used in a variety of ways throughout history to translate sign
language. Additionally, gesture recognition approaches include the detection and recognition
of posture, gait, proxemics, and human behaviors.
Bibliography
[1] Y. Hu, Y. Wong, Q. Dai, M. Kankanhalli, W. Geng, and X. Li, âSEMG-based gesture
recognition with embedded virtual hand poses and adversarial learning,â IEEE Access,
2019, doi: 10.1109/ACCESS.2019.2930005.
[2] A. Waichal, âSurvey paper on Hand Gesture Recognition Based Virtual Mouse,â
INTERANTIONAL J. Sci. Res. Eng. Manag., 2022, doi: 10.55041/ijsrem11641.
[3] W. Fang, Y. Ding, F. Zhang, and J. Sheng, âGesture recognition based on CNN and
24. ISBN 978-81-962236-3-2 March 2023
21
Feature Extraction and Gesture Recognition
DCGAN for calculation and text output,â IEEE Access, 2019, doi:
10.1109/ACCESS.2019.2901930.
[4] A. G. Sooai, Khamid, K. Yoshimoto, H. Takahashi, S. Sumpeno, and M. H. Purnomo,
âDynamic hand gesture recognition on 3D virtual cultural heritage ancient collection
objects using k-nearest neighbor,â Eng. Lett., 2018.
[5] K. M. Sagayam and D. J. Hemanth, âHand posture and gesture recognition techniques
for virtual reality applications: a survey,â Virtual Real., 2017, doi: 10.1007/s10055-016-
0301-0.
[6] A. S. Al-Shamayleh, R. Ahmad, M. A. M. Abushariah, K. A. Alam, and N. Jomhari, âA
systematic literature review on vision based gesture recognition techniques,â Multimed.
Tools Appl., 2018, doi: 10.1007/s11042-018-5971-z.
25. ISBN 978-81-962236-3-2 March 2023
22
Feature Extraction and Gesture Recognition
CHAPTER 8
SYSTEM REQUIREMENTS FOR GESTURE RECOGNITION
Dr. Rajiv Ranjan Singh
Professor and Hod, Department of Electronics and Communication Engineering, Presidency
University, Bangalore, India
Email Id- rajivranjansingh@presidencyuniversity.in
HARDWARE COMPONENTS
⢠Arduino Uno Rev3
⢠Raspberry Pico
⢠Ultrasonic Sensor (Hc â Sr04)
⢠IR Sensor
ARDUINO UNO REV3
The finest board for learning electronics and coding is the Arduino UNO. The Arduino family's
UNO board is the most popular and well-documented model. A microcontroller board called
Arduino Uno is based on the ATmega328P. (datasheet). It contains a 16 MHz ceramic
resonator (CSTCE16M0V53-R0), 6 analog inputs, 14 digital input/output pins (of which 6 may
be used as PWM outputs), a USB port, a power connector, an ICSP header, and a reset button.
As illustrated in figure 1, it comes with everything required to support the microcontroller; to
get started, just connect it to a computer using a USB connection or power it using an AC-to-
DC converter or battery [1].
Figure 1: Illustrate the ARDUINO UNO REV3.
26. ISBN 978-81-962236-3-2 March 2023
23
Feature Extraction and Gesture Recognition
DIGITAL PINS:
The board has 14 digital pins built into it. Depending on your needs, you may utilize these pins
as an input or an output. Two values are sent to these pins: HIGH or LOW. These pins are in
the HIGH state when they get 5V, and they stay in the LOW state when they receive 0V [2].
ANALOG PINS:
On the PCB, there are 6 analog pins accessible. Unlike digital pins, which can only accept
HIGH or LOW, these pins may accept any value.
PWM PINS:
Six of the board's 14 digital I/O pins are designated as PWM pins. When these pins are
triggered, they produce an analog signal using digital methods.
SPI PINS:
The board has an SPI communication protocol built in, which is mostly used to keep the
microcontroller in touch with additional peripherals like shift resistors or sensors.
TWO PINS:
For SPI communication between devices, MOSI (Master Output Slave Input) and MISO
(Master Input Slave Output) are utilized. The controller uses these pins to transmit and receive
data [3].
I2C PINS:
SDL and SCL are the two pins that make up this two-wire communication system. SCL is a
serial clock pin that is used to synchronize all data transmission across the I2C bus, whereas
SDL is a serial data pin that transports the data [4].
UART PINS:
Additionally, this board supports the UART serial connection standard. There are two pins on
it: Tx and Rx. Rx is a reception pin that is used to receive serial data, while Tx is a transmission
pin used to send serial data.
LED
On the PCB, there are four LEDs. The built-in LED on pin 13 is one example, while the power
LED is another. Additionally, two Rx and Tx LEDs light up when serial data is sent or received
to the board.
VIN, 5V, GND, RESET
Vin. The Arduino Board's input voltage is what it is. It is distinct from the 5 V that we get from
a USB port. Additionally, this pin may access a voltage that is provided by the power connector.
5V
Voltage control capabilities are included on this board. There are three methods to activate this
board: through USB, the board's Vin pin, or the DC power connector. While Vin and Power
Jack offers a voltage range of 7V to 20V, USB only supports a value of roughly 5V. Be aware
27. ISBN 978-81-962236-3-2 March 2023
24
Feature Extraction and Gesture Recognition
that supplying electricity via 3.3V or 5V pins will circumvent voltage regulation, damaging the
board if the voltage exceeds a particular threshold [5].
GND
The ground pin is this. On the board, many ground pins may be utilized depending on the
situation.
RESET
The software running on the board is reset via this pin. Through programming, IDE can reset
the board in place of a physical reset on the board.
Bibliography
[1] M. Al-Hammadi, G. Muhammad, W. Abdul, M. Alsulaiman, M. A. Bencherif, and M.
A. Mekhtiche, âHand Gesture Recognition for Sign Language Using 3DCNN,â IEEE
Access, 2020, doi: 10.1109/ACCESS.2020.2990434.
[2] A. Jaramillo-YĂĄnez, M. E. BenalcĂĄzar, and E. Mena-Maldonado, âReal-time hand
gesture recognition using surface electromyography and machine learning: A systematic
literature review,â Sensors (Switzerland). 2020. doi: 10.3390/s20092467.
[3] J. M. Maro, S. H. Ieng, and R. Benosman, âEvent-Based Gesture Recognition With
Dynamic Background Suppression Using Smartphone Computational Capabilities,â
Front. Neurosci., 2020, doi: 10.3389/fnins.2020.00275.
[4] C. Xu, Y. Jiang, J. Zhou, and Y. Liu, âSemi-supervised joint learning for hand gesture
recognition from a single color image,â Sensors (Switzerland), 2021, doi:
10.3390/s21031007.
[5] Y. Li, Z. He, X. Ye, Z. He, and K. Han, âSpatial temporal graph convolutional networks
for skeleton-based dynamic hand gesture recognition,â Eurasip J. Image Video Process.,
2019, doi: 10.1186/s13640-019-0476-x.
28. ISBN 978-81-962236-3-2 March 2023
25
Feature Extraction and Gesture Recognition
CHAPTER 9
ARDUINO UNO BOARD FEATURES
Mr. Kiran Kale
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- kirandhanaji.kale@presidencyuniversity.in
A simple USB interface. As USB is similar to a serial device, this enables interaction. The chip
on the board connects directly to your USB port and functions as a virtual serial port on your
computer. The advantage of this configuration is that serial communication is a very simple,
tried-and-true protocol, while USB enables connection with contemporary computers and
makes it pleasant [1]. The ATmega328 chip, the brain of the microcontroller, is readily
available. Numerous hardware features, including timers, internal and external interrupts,
PWM pins, and various sleep modes, are included. It is an open-source design, and one benefit
of being open-source is that it has a sizable user and support community. This makes it simple
to assist with project debugging.
The microcontroller is not sped up by the 16 MHz clock, which is fast enough for most
applications. It has a function of built-in voltage control, and it is quite straightforward to
regulate power inside of it. Without requiring any extra power, this may also be powered
straight from a USB port. This controls an external power supply connected up to 12 volts,
converting it to 5 volts and 3.3 volts. 6 analog pins and 13 digital pins. You may attach external
hardware to your Arduino Uno board using this kind of pin. These pins serve as a key to
bringing the Arduino Uno's computational power outside of the computer. You may start using
your electronics as soon as you connect them to the sockets that each of these pins corresponds
to. This features an ICSP connection for connecting the Arduino directly as a serial device
without using the USB port. If your chip becomes corrupt and can no longer be utilized with
your computer, you will need to re-bootload it via this port [2]. It features a 32 KB flash
memory that may be used to store your code. Digital pin 13 is connected to an internal LED to
facilitate code debugging and speed up the process. It also features a button to reset the chip's
software.
RASPBERRY PI PICO
The term Raspberry Pi has gained popularity in the business and among enthusiasts for a while
now. It conjures up images of a comparatively tiny circuit board with an SD card slot, a few
USB connections, maybe an ethernet port, and HDMI display connectors that can run a Linux-
based operating system. In brief, rather than microcontrollers, Raspberry Pi is well-known for
its series of microprocessors [1], [3]. This time, though, the business has created its
microcontroller for makers and created a development board to provide it to themâthe
raspberry Pi Pico! Many programmers choose to use a microcontroller for basic I/O chores and
a microprocessor for heavier computationally demanding, timing-sensitive jobs. Up until this
point, Raspberry Pis have only offered CPUs to the tech world. Now, they have completed their
line-up by including a controller as well [4], [5].
29. ISBN 978-81-962236-3-2 March 2023
26
Feature Extraction and Gesture Recognition
NUTSHELL RASPBERRY PI PICO
A microcontroller development board from Raspberry Pi called the Pico is said to cost about
$4. Since certain jobs are more suited for a microcontroller than a microprocessor, such as
essential timing and interrupt capabilities, the pico will have its purposes in the world of
electronics. The board comes in compact, breadboard-friendly packaging that nearly has an
Arduino N footprint.
Figure 1: Illustrate the RASPBERRY PI PICO
PI PICO LIMITATIONS FOR RASPBERRY
Micropython may be used to program the Raspberry Pi Pico, much as ESP32 and ESP8266.
However, it lacks inbuilt WiFi, Bluetooth, or BLE peripherals, unlike ESP32 as well as
ESP8266. Therefore, we will need to add external WiFi, Bluetooth, or BLE modules to utilize
this board for IoT applications. For inexpensive applications, it is a fantastic option.
Since a microcontroller is far less complicated than a microcomputer, it is much simpler to
program one to do simple tasks. A microcontroller may simply and competently do simple
tasks like measuring temperature data, rotating a servo, or blinking an LED. You do get certain
benefits including a speedy startup and increased battery life.
Limitations come with simplicity. For instance, using a microcomputer makes it much simpler
to link your prototype to the internet. You could, for instance, wish to link your prototype to
an email service so that it can ping you when certain criteria are reached. Instead of attempting
to make those ideas work with the Raspberry Pi Pico, I'd use a Raspberry Pi Zero.
Bibliography
[1] A. P. U. Siahaan et al., âArduino Uno-based water turbidity meter using LDR and LED
sensors,â Int. J. Eng. Technol., 2018, doi: 10.14419/ijet.v7i4.14020.
[2] Y. Irawan, R. Wahyuni, and H. Fonda, âFolding Clothes Tool Using Arduino Uno
Microcontroller And Gear Servo,â J. Robot. Control, vol. 2, no. 3, 2021, doi:
10.18196/jrc.2373.
[3] J. Susilo, A. Febriani, U. Rahmalisa, and Y. Irawan, âCar parking distance controller
using ultrasonic sensors based on arduino uno,â J. Robot. Control, 2021, doi:
10.18196/jrc.25106.
30. ISBN 978-81-962236-3-2 March 2023
27
Feature Extraction and Gesture Recognition
[4] H. Sanjaya, N. K. Daulay, and R. Andri, âLampu Otomatis Berbasic Arduino Uno
Menggunakan SmartPhone Android,â JURIKOM (Jurnal âŚ, 2022.
[5] Ridarmin, Fauzansyah, Elisawati, and P. Eko, âPrototype Robot Line Follower Arduino
Uno,â J. Inform. Manaj. dan Komput., 2019.
31. ISBN 978-81-962236-3-2 March 2023
28
Feature Extraction and Gesture Recognition
CHAPTER 10
ULTRASONIC SENSOR (HC â SR04)
Dr. Rajiv Ranjan Singh
Professor and HoD, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- rajivranjansingh@presidencyuniversity.in
This ultrasonic sensor, also known as an ultrasonic transducer, is built around a receiver and
transmitter that are primarily used to determine how far away the target object is. The time it
takes for waves to send and receive will determine how far the object is from the sensor [1]. It
mostly depends on sound waves, which are what are employed in non-contact technology. The
required distance of the target object is measured without causing any damage, giving you
accurate and dependable information. This sensor, which has a range of 2 cm to 400 cm, is
used in a wide range of applications, including humidifiers, wireless charging, sonar, burglar
alarms, medical ultrasonography, and non-destructive testing. I'll try to cover the key details
associated with the HC-SR04 in this post to give you a better grasp of it and how it may be
used in the key applications in accordance with your needs and expectations [2]. Let's go
straight to the information about this ultrasonic sensor, as shown in Figure 1.
⢠The HC-SR04 ultrasonic sensor uses non-contact technology, which means that no real
physical contact is made between the sensor and the item being measured. Its main
function is to compute the target object's distance.
⢠The receiver and the transmitter are the sensor's two main parts; the former converts
electrical impulses into ultrasonic waves, while the latter converts ultrasonic waves
back into electrical signals.
⢠To get the HCSR04 Datasheet, click the button below:
⢠At the receiving end, these ultrasonic waves may be detected and presented as sound
signals.
Figure 1: Illustrate the Ultrasonic Sensor.
HC-SR04 SENSOR FEATURES
⢠Working Voltage: DC 5V
⢠Working Frequency: 40Hz
32. ISBN 978-81-962236-3-2 March 2023
29
Feature Extraction and Gesture Recognition
⢠Working Current: 15Ma
⢠Min Range: 2cm
⢠Max Range: 4m
⢠Measuring Angle: 15 degree
⢠Trigger Input Signal: 10¾S TTL pulse
⢠The relationship between the range and the TTL lever signal input and output for echo
⢠Dimension 45 * 20 * 15mm
The ultrasonic ranging module HC-SR04 provides the 2 cm to 400 cm non-contact
measurement function, with a ranging accuracy of up to 3 mm. The modules includes ultrasonic
transmitters, receivers, or control circuits [3]. The underlying operating concept is:
⢠After applying an IO trigger for at least a 10us high-level signal, the Module
automatically broadcasts eight pulse signals at a frequency of 40 kHz and checks to see
whether a pulse signal is received.
⢠The time between sending and receiving ultrasonic waves is referred to as the high
output IO duration if the signal is returned at a high level. Test distance equals a high-
level sound speed of 340 M/S.
WORKING
The HC-SR04 distance sensor is often used with microcontroller or microprocessor platforms
as Arduino, PIC, ARM, Raspberry Pi, etc. The rules listed below must be observed regardless
the type of computer equipment being used, making them universal.
Power the sensor with a regulated +5V via the Vcc and Ground pins. Since the sensor only uses
less than 15 mA of energy, it may be directly powered by the inbuilt 5 V pins (If available).
Since the Trigger and Echo pins are both I/O pins, they may both be connected to the
microcontroller's I/O pins. To start the measurement, the trigger pin must be set high for 10 uS
before being turned off. The receiver will then wait for the ultrasonic wave to return once the
transmitter emits one at a frequency of 40Hz as a result [4]. The time it took for the wave to
return to the sensor after it was reflected by any object is represented by the duration for which
the Echo pin is high. Since the Echopin indicates how long it takes for the wave to return to the
Sensor, the MCU/MPU monitors how long it stays high. This information is used to compute
the distance in the manner stated in the header above [5].
APPLICATIONS
⢠Applied to measure distances over a broad range, from 2 cm to 400 cm
⢠Used with robots such as bipedal robots, route finding robots, obstacle avoider
robots, etc. to avoid and recognize impediments.
⢠The things nearby may be mapped by rotating the sensor.
⢠Since waves may travel through water, it's feasible to gauge a location's depth,
including such wells and pits.
Bibliography
[1] M. Shen, Y. Wang, Y. Jiang, H. Ji, B. Wang, and Z. Huang, âA new
positioning method based on multiple ultrasonic sensors for autonomous
33. ISBN 978-81-962236-3-2 March 2023
30
Feature Extraction and Gesture Recognition
mobile robot,â Sensors (Switzerland), 2020, doi: 10.3390/s20010017.
[2] J. Lesmana, A. Halim, and A. P. Irawan, âDesign of automatic hand
sanitizer with ultrasonic sensor,â 2020. doi: 10.1088/1757-
899X/1007/1/012164.
[3] G. Arun Francis, M. Arulselvan, P. Elangkumaran, S. Keerthivarman, and
J. Vijaya Kumar, âObject detection using ultrasonic sensor,â Int. J. Innov.
Technol. Explor. Eng., 2019.
[4] B. Ma, J. Teng, H. Zhu, R. Zhou, Y. Ju, and S. Liu, âThree-dimensional
wind measurement based on ultrasonic sensor array and multiple signal
classification,â Sensors (Switzerland), 2020, doi: 10.3390/s20020523.
[5] A. Rocchi, E. Santecchia, F. Ciciulla, P. Mengucci, and G. Barucca,
âCharacterization and Optimization of Level Measurement by an Ultrasonic
Sensor System,â IEEE Sens. J., 2019, doi: 10.1109/JSEN.2018.2890568.
34. ISBN 978-81-962236-3-2 March 2023
31
Feature Extraction and Gesture Recognition
CHAPTER 11
SYSTEM DESIGN FOR VIRTUAL HAND GESTURE DETECTION
Mr. Kiran Kale
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- kirandhanaji.kale@presidencyuniversity.in
VIRTUAL KEYBOARD
With the use of artificial intelligence and sensor technologies, Virtual Keyboard enables users
to type on any surface like a keyboard. A flashlight-sized device created by Virtual Devices
allows users to type on an image of a keyboard that is projected onto any surface. Alternatives
included voice recognition, handwriting recognition, input (for SM Sin cell phones), and
others. But none of them can replace a full-fledged keyboard for accuracy and ease. The privacy
of speech input is an additional concern. Even PDA foldable keyboards haven't taken off yet.
Consequently, a new generation of virtual input devices is already being shown, and they may
significantly alter the way we write. When the fingers are pushed down, the gadget senses
movement [1]. The system precisely ascertains the intended keystrokes from such motions and
transforms them into text.
SYSTEMS DESIGN
In this phase, with support for manipulating numbers and geometric choices, we are likely to
estimate where the user's fingers are located (in representation space). The single-personality
succession is produced at a lower level using word processing tools. In this controlled method,
we are likely to provide the user's calculable finger locations as input and provide the potential
that the creation of these tips is anticipated and expected to withstand the row of keys mat.
People often use a strategy called "shadow reasoning," which was developed to address this
drawback. The letters AâZ are on the keyboard. Each letter key on the 21.0 7.6 cm-long
keyboard measures 1.8 1.4 cm. The letter is added to the text above the keyboard and the closest
key label is illuminated in red after each key press [2].
A tapping sound is also performed. A backspace key with a size of 6.0 x 1.4 cm may be used
to erase previous characters. For simple and predictable triggering, we increased the size of the
backspace key and moved it away from other keys on the keyboard. You may backspace both
letters from the current word and letters from previous words. The size of the space key is 8.5
by 1.4 cm. When the space key is pressed, the user's string of noisy (x, y) tap positions on the
keyboard plane are converted into the most likely word based on the letter sequence they are
now typing. To enable deterministic triggering, we increased the size of the space key and
isolated it from the rest of the keyboard, similar to the backspace key [3].
In the beginning, shadow reasoning looks for the user's hands' shadow in the environment. By
thresholding the idea in 8-shard HLS color space coordinates, we are likely to accomplish this.
We are probably going to increase a threshold of L = nil by looking at different check
photographs. built a two-fold representation S that accurately captures the shadows within the
representation. We're going to look at the 20 by 20 square neighborhood of each fingertip
position in the two-fold countenance S for each fingertip position p. The process' early stages
35. ISBN 978-81-962236-3-2 March 2023
32
Feature Extraction and Gesture Recognition
result in learning about concept-space relationships that estimate an unspecified location where
the user has frantically pressed a row of keys. In this section, we'll show how to transform the
representation-room coordinates to a row of keystrokes that fits the space exactly, sororal
keystrokes that are correct for view. In contrast, we tend to view it from plans within the lecture
on sound system apparition. The angle-fixing approach we are prone to second-hand wasn't
seen in some of the citations we choose to read. An image taken by each camera in our system
may represent a visual projection of a three-dimensional environment [4].
VIRTUAL MOUSE
Many practical applications, such as virtual mice, remote controllers, sign language
recognition, and immersive gaming technologies, use fingertip detection. Therefore, one of the
primary objectives of vision-based technology in recent decades, particularly with
conventional red-green-blue (RGB) cameras, has been virtual mouse control through fingertip
recognition from pictures. With the gesture-based interface, we suggest in this work, users may
interact with a computer by leveraging fingertip detection in RGB-D inputs. From depth
pictures generated by the Kinect V2 skeletal tracker, the hand area of interest and the palm's
center are first extracted and converted to binary images. A border-tracing method is then used
to extract and characterize the hand contours. Based on the hand-contour coordinates, the K-
cosine algorithm locates the fingertip. Finally, the position of the fingertip is translated to RGB
pictures to operate the mouse cursor based on a virtual screen. In our study, mouse movement,
left-clicking, and right-clicking are all taken into account. The convex hull interprets the
division of the fingers as flaws, making it possible to utilize it for a variety of motions and
design directives [5]. This image recognition approach exclusively concentrates on defects and
conditional statements. The photos that must typically be handled must be recorded using a
live-streaming camcorder. A very basic connection for this procedure is provided by the Open
CV Python module. The device index will identify the camcorder from which or where the
video was anticipated to have been taken. Typically, passing 0 or -1 allows the user to pick the
integrated camcorder. By passing, choose the extrinsic or subordinate camera. So, After
completing this procedure, images may be safeguarded frame by frame.
DESIGN AND SYSTEM ENVIRONMENT
A single color camera will be used by the system, and it will be placed next to the computer,
perpendicularly above a surface with a black background. On the monitor will be shown the
camera's output. The system will be operated by the user gesturing in front of the camera
without the need for them to wear a colorful wristband. Skin detection will be used to determine
the hand's location and shape. The application in this project makes use of image analysis to
recover crucial information and applies it to the mouse connect of the calculation using the
established rodent functions. Only the three colored fingers are generated from enhanced
webcam video recordings. Their centers are established using the hierarchy of significance,
and based on their relative locations, the action that has to be taken is decided. Python is picked
as the foundational language because it is connected with open source, is user-friendly, and
fosters a friendly environment. Python ILDE, which comes with several required libraries, is
used to bundle the Anaconda coda. The environment is associate-friendly. Py Auto GUI and
Open CV are the packages with which you need to comply. Python could include a module
called PyAuto GUI that allows programmatic control of the mouse and keyboard. We can
handle mouse events by opening a CV via which we do so. The three colors we often utilize
36. ISBN 978-81-962236-3-2 March 2023
33
Feature Extraction and Gesture Recognition
for our fingers are red, yellow, and blue. It is software that employs image processing to extract
important data and add it to the computer's mouse interface's finely detailed concepts. To write
the file, Python is used. It implements the exploitation of mouse movements using the cro cross-
platform process module Open CV. specialized library for Python GUI PyAuto.
Bibliography
[1] K. B. Park, S. H. Choi, J. Y. Lee, Y. Ghasemi, M. Mohammed, and H. Jeong, âHands-
free human-robot interaction using multimodal gestures and deep learning in wearable
mixed reality,â IEEE Access, 2021, doi: 10.1109/ACCESS.2021.3071364.
[2] A. Waichal, âSurvey paper on Hand Gesture Recognition Based Virtual Mouse,â
INTERANTIONAL J. Sci. Res. Eng. Manag., 2022, doi: 10.55041/ijsrem11641.
[3] X. Shao, X. Feng, Y. Yu, Z. Wu, and P. Mei, âA Natural Interaction Method of Multi-
Sensory Channels for Virtual Assembly System of Power Transformer Control
Cabinet,â IEEE Access, 2020, doi: 10.1109/ACCESS.2020.2981539.
[4] M. M. Alam and S. M. Mahbubur Rahman, âAffine transformation of virtual 3D object
using 2D localization of fingertips,â Virtual Real. Intell. Hardw., 2020, doi:
10.1016/j.vrih.2020.10.001.
[5] B. Wang, R. Zhang, C. Xi, J. Sun, and X. Yang, âVirtual and real-time synchronous
interaction for playing table tennis with holograms in mixed reality,â Sensors
(Switzerland), 2020, doi: 10.3390/s20174857.
37. ISBN 978-81-962236-3-2 March 2023
34
Feature Extraction and Gesture Recognition
CHAPTER 12
SEGMENTING AND DETECTING HANDS
Dr. Rajiv Ranjan Singh
Professor and Hod, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- rajivranjansingh@presidencyuniversity.in
The hand was located using depth pictures. These pictures were taken by a Microsoft Kinect
V2sensor, which uses depth photographs as input to estimate each user's body part and uses
different user actions to match the learned body parts to the depth images. This allows the
camera to gather data on 25 joints, including the hip, spine, head, shoulder, hand, foot, and
thumb. The hand region of interest (HRI) and the center of the palm are quickly and accurately
retrieved using the depth picture of a Kinect skeletal tracker [1]. The noise in the hand area was
reduced using a median filter and morphological processing. After that, a blob-detection
technique based on Kinect's depth signals with preset thresholds was used to choose the hand
area and export the binary picture. Sets of pixels that correspond to the hands are produced as
a consequence of this procedure [2].
CONSTRAINTS IN THE SYSTEM AND HAND-CONTOUR EXTRACTION
The curve of the furthest points taken from the hand-segmentation picture makes up the hand
contours. The contour extraction stage in the fingertip detection procedure is crucial for
defining the fingertip positions. Using the Moore-Neighbor technique, the hand contours are
found in this stage. One of the most popular techniques for extracting the outlines of objects
(or areas) from a picture is this one. The method can locate the regional boundaries by scanning
all of the pixels in the pictures once the binary images of the hand areas have been discovered
[3].
The hand's contour pixels were acquired as an ordered array after the operation. The extraction
of the fingertip uses these variables. In the next session, fingertip detection is implemented in
detail.
A dark backdrop must be used to photograph the hand.
⢠The camera has to be parallel to the hand.
⢠The hand must be positioned vertically (inclination angle = 0) or at most 20 degrees to
the left or right.
⢠The hand must stay at the same height that was used to measure its original
measurements.
⢠The software only identifies a few motions and distinguishes between different tasks
based on how the gesture was executed.
THE ARDUINO-BASED HAND GESTURE CONTROL SYSTEM
We will develop gesture-controlled laptops or PCs as part of this research. It is built using
Python and Arduino together. We may use hand gestures to operate some computer activities,
such as playing and pausing videos, navigating left and right through picture slideshows,
scrolling up and down across web pages, and more, rather than utilizing a keyboard, mouse, or
38. ISBN 978-81-962236-3-2 March 2023
35
Feature Extraction and Gesture Recognition
joystick. This is the reason I decided to control VLC Media Player with hand gestures. The
project's basic concept is to use two Ultrasonic Sensors (HC-SR04) with an Arduino board.
The two sensors will be positioned on top of a laptop screen, and the distance between the hand
and the sensor will be calculated. Python, which is running on the computer, will receive the
data from the Arduino that is delivered over the serial connection and use it to carry out certain
operations [4].
THE PRINCIPLE OF THE PROGRAM
The Hand Gesture Control of a Computer using Arduino truly works on a very basic concept.
To measure the distance between your hand and the ultrasonic sensor, all you need to do is
utilize two ultrasonic sensors and an Arduino board. This knowledge enables the computer to
take appropriate action. It's crucial to consider where the ultrasonic sensors are placed. At each
end of a laptop screen, place the two Ultrasonic Sensors. A Python program collects the
Arduino distance data, and a specialized module called Py Auto GUI turns the data into
keyboard clicks. Python is used to build the Arduino-based Hand Gesture Control of Computer
project. Therefore, I advise you to work on this simple project first, which uses Python to
control an LED on an Arduino onboard. This project includes instructions for setting up the
Serial Library (essential for interacting with Arduino), installing Python on your computer,
using the Arduino with Python, and setting up the project scripts [5].
DESIGN
Figure 1: Illustrate the Circuit Diagram.
Although the circuit's design is extremely straightforward, the way the components are
assembled is crucial. The Arduino's Pins 11 and 10 are linked to the Trigger and Echo Pins of
the first Ultrasonic Sensor, which is located on the left side of the screen. The Trigger and Echo
Pins are linked to Arduino Pins 6 and 5 for the second ultrasonic sensor, as shown in figure 1.
When it comes time to position the sensors, set both ultrasonic sensors on top of the laptop
screen, with the left sensor on the left end as well as the right sensor on the right. To secure the
sensor son to the screen, use double-sided tape.
Bibliography
[1] N. Alves and T. Chau, âVision-based segmentation of continuous mechanomyographic
grasping sequences,â IEEE Trans. Biomed. Eng., 2008, doi: 10.1109/TBME.2007.
902223.
39. ISBN 978-81-962236-3-2 March 2023
36
Feature Extraction and Gesture Recognition
[2] V. Chang, J. M. Saavedra, V. CastaĂąeda, L. Sarabia, N. Hitschfeld, and S. Härtel, âGold-
standard and improved framework for sperm head segmentation,â Comput. Methods
Programs Biomed., 2014, doi: 10.1016/j.cmpb.2014.06.018.
[3] J. Ya, T. Liu, P. Zhang, J. Shi, L. Guo, and Z. Gu, âNeuralAS: Deep Word-Based
Spoofed URLs Detection against Strong Similar Samples,â 2019. doi:
10.1109/IJCNN.2019.8852416.
[4] D. M. Zhang, G. Q. Jin, F. Dai, Q. S. Yuan, X. G. Bao, and Y. D. Zhang, âSalient Object
Detection Based on Deep Fusion of Hand-Crafted Features,â Jisuanji Xuebao/Chinese
J. Comput., 2019, doi: 10.11897/SP.J.1016.2019.02076.
[5] N. Dhungel, G. C.-⌠D. Learning, and U. 2016, âA Deep Learning approach to fully
automated analysis of Masses in Mammograms,â J. Med. Image Anal., 2016.
40. ISBN 978-81-962236-3-2 March 2023
37
Feature Extraction and Gesture Recognition
CHAPTER 13
TESTING, VIRTUAL KEYBOARD, AND TEST TYPES
Mr. Kiran Kale
Assistant Professor, Department of Electronics and Communication Engineering,
Presidency University, Bangalore, India
Email Id- kirandhanaji.kale@presidencyuniversity.in
Virtual keyboard is put through a number of tests to make sure it functions well, is completely
useable, is highly compatible, as well as being reliable. Some of the tests included are listed
below:
⢠Unit tests
⢠Integration tests
⢠Functional tests
⢠Compatibility tests
⢠User interface tests
⢠Performance tests
⢠Usability tests
⢠Security tests
⢠Data capture tests
The aforementioned checks may be performed manually or automatically on actual or virtual
devices. The aforementioned checks may be performed manually or automatically on actual or
virtual devices. Every test type has a purpose and an aim [1]. Not every testing method must
be used, nor are they all necessary at the same time. Together, the tests provide for a more
objective assessment of the keyboard's quality since they complement one another. The
keyboard's quality is assessed by testing. Since quality is objective, you should always be able
to assess a product's quality, which includes not just knowing when something fails but also
understanding where improvements may be made. So how do we evaluate a keyboard's quality?
The primary evaluation criterion is if each of its features or levels functions properly [2].
TESTING METHODS
A number of testing techniques are necessary to examine each of the layers mentioned above.
There will be human and automated testing utilizing various methodologies, computer
languages, and frameworks. The parameters vary depending on the test being run and the
platform it is running on [3].
The following testing procedures (or testing strategies) are used in Fleksy:
⢠Identify the functionality that needs to be produced
⢠Research and document the functionality that needs to be developed
⢠Validate the functionality that needs to be developed
⢠Specify the tests that are required to test every layer that is involved.
⢠Increase functionality
⢠Create UI tests that may be executed
⢠Create unit tests linked to the functionality created
41. ISBN 978-81-962236-3-2 March 2023
38
Feature Extraction and Gesture Recognition
⢠Develop end-to-end tests that validate essential functionality
⢠Make preparations for this new development in the continuous integration system.
⢠Confirm that the automated tests are run and the program builds.
⢠If required, evaluate the application manually.
PROCESS OF PRE-DEVELOPMENT
A preliminary inquiry is necessary before writing the code for a new capability, either by
utilizing mockups or virtualizing the functionality to demonstrate it to users beforehand. As an
alternative, you may record development ideas, which must be confirmed and approved by the
parties involved (stakeholders) [4]. In this method, concepts that won't be beneficial or are
inappropriate for development might be discarded. The acceptance criteria required to cover
each of the features may then be defined by Quality Assurance ("QA"), often via the creation
of high-level tests in Gherkin, such as: Case 1: To automatically insert a period, the user double-
taps the spacebar. Given a user with the "Double space period" option enabled she can
automatically write a period and a blank space by tapping the spacebar twice after typing a
word. The next word will also begin in uppercase. The acceptance criteria may then be used as
a guide for developers and QA as they create and validate the functionality. All of this is a
component of testing, which is a crucial step in avoiding cost overruns. Thorough testing may
identify issues early on, saving you money from having to remedy things later [5].
OPTIMIZATION PROCESS
Each platform's developer is required to create the code in compliance with all previously
published documentation and to test this functionality using various unit tests (UT) and/or
integration tests (IT). End-to-end tests may be prepared concurrently by QA to further cover
the testing done on this capability, if required.
MANUAL EVALUATION
It is feasible (and advised) to do certain sorts of manual testing that are either highly costly or
difficult to recreate using automated tests after the new development has been confirmed with
automated tests. Additionally, tests that must be overseen by a person, such as those that
examine the retina, fingerprint, usability, guesswork, etc. The QA team at Fleksy conducts the
following tests, both internally and externally:
⢠Compatibility tests: ensuring that everything functions and shows properly on various
device kinds, models, and iterations.
⢠Usability tests: evaluating performance on various devices for usability.
⢠Exploratory tests: unrestricted testing without adhering to predetermined test cases
⢠Functional tests: using example test cases as a guide.
DOCUMENTATION FOR TEST
⢠It's best practice to follow various procedures to ensure that the testing is well
documented, such as:
⢠Calendar of releases: to create the test cases, work with QA to plan the manual test
execution, report the findings, etc.
⢠Platform for documentation: Examples include Confluence, Jira, a tool for managing
test cases, or simply Excel, where it is possible to share test results with the whole team.
42. ISBN 978-81-962236-3-2 March 2023
39
Feature Extraction and Gesture Recognition
⢠Test result reports: Using graphs or brief statistics, each team member may be notified
of the faults found in the releases, their criticality, status, etc.
⢠Quality reports: Providing frequent (monthly and/or quarterly) updates to the whole
team on various data pertaining to the product's quality, such as test coverage statistics,
the status of mistakes that have been found, improvements in lead time, etc.
Bibliography
[1] P. Garaizar and M. A. Vadillo, âMetronome LKM: An open source virtual keyboard
driver to measure experiment software latencies,â Behav. Res. Methods, 2017, doi:
10.3758/s13428-017-0958-7.
[2] C. H. Kung, T. C. Hsieh, and S. Smith, âUsability study of multiple vibrotactile feedback
stimuli in an entire virtual keyboard input,â Appl. Ergon., 2021, doi:
10.1016/j.apergo.2020.103270.
[3] C. Lian, X. Ren, Y. Zhao, X. Zhang, R. Chen, S. Wang, X. Sha, and W. J. Li, âTowards
a Virtual Keyboard Scheme Based on Wearing One Motion Sensor Ring on Each Hand,â
IEEE Sens. J., 2021, doi: 10.1109/JSEN.2020.3023964.
[4] R. Scherer, G. R. MĂźller, C. Neuper, B. Graimann, and G. Pfurtscheller, âAn
asynchronously controlled EEG-based virtual keyboard: Improvement of the spelling
rate,â IEEE Trans. Biomed. Eng., 2004, doi: 10.1109/TBME.2004.827062.
[5] K. Vertanen, D. Gaines, C. Fletcher, A. M. Stanage, R. Watling, and P. O. Kristensson,
âVelociWatch: Designing and evaluating a virtual keyboard for the input of challenging
text,â 2019. doi: 10.1145/3290605.3300821.
43. Publisher
M/s CIIR Research Publications
B-17, Sector-6, Noida,
Uttar Pradesh, India.
201301
Email: info@ciir.in
MARCH 2023
ISBN 978-81-962236-3-2
Š ALL RIGHTS ARE RESERVED WITH CIIR RESEARCH PUBLICATIONS