Gesture is one of the most natural ways for humans to communicate with computers in real systems, and hand gestures are among the most important methods of non-verbal human communication. In this work, MATLAB-based colour image processing is used to recognize hand gestures, and wireless communication makes it easier to interact with the robot. The objective of this project is to build a password-protected, wireless, gesture-controlled robot using an Arduino, an RF transmitter, and a receiver module. Continuous images are processed and, according to the number of fingers detected, a command signal is sent through the Arduino Uno microcontroller to the RF transmitter; the signal is picked up by the receiver and processed at the receiver end, which drives the motors in a particular direction. The robot moves forward, backward, right, and left when one, two, three, or four fingers (marked with a red colour band or tape) are shown, respectively. As soon as the hand moves out of the frame, the robot stops immediately. The system can help physically disabled people who cannot use their hands to move a wheelchair, and it can also be used in military applications involving radioactive substances that cannot be touched by human hands.
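The finger-count-to-command mapping described above can be sketched as follows; the command names and the STOP fallback are illustrative assumptions, not the authors' exact code.

```python
# Hypothetical finger-count to motor-command mapping for the
# gesture-controlled robot; names and encoding are illustrative.
COMMANDS = {
    1: "FORWARD",
    2: "BACKWARD",
    3: "RIGHT",
    4: "LEFT",
}

def command_for(finger_count):
    """Map a detected finger count to a drive command.

    Any unrecognized count (including 0, i.e. no hand in the frame)
    maps to STOP, mirroring the behaviour described in the abstract.
    """
    return COMMANDS.get(finger_count, "STOP")
```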
Design of Image Projection Using Combined Approach for Tracking – IJMER
Over the years, the techniques and methods used to interact with computers have evolved significantly. From the primitive use of punch cards to the latest touch-screen panels, we can see the vast improvement in interaction with the system. There are many new projection and interaction technologies that can reshape our perception and interaction methodologies, and projection technology is also very useful for creating various geometric displays. In earlier generations, projector technology was used for projecting images and videos onto a single screen, using a large and bulky setup. To overcome these earlier limitations we are designing "Wireless Image Projection Tracking", a system that uses IR (infrared) technology to track the body in the IR range and uses its movements for image orientation and manipulations such as zoom, tilt/rotate, and scale. We present a method of mapping IR light source position and orientation to an image. Using this system we can track single and multiple IR light source positions, and it can also be used effectively to view the image projection in 3D. Extensions of this technology can further provide tracking capabilities to implement touch-screen features for commercial applications.
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel... – IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board – Waqas Tariq
The main aim of this paper is to build a system that is capable of detecting and recognizing the hand gesture in an image captured by a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented to achieve this purpose. The image processing techniques are used to smooth the image in order to ease the subsequent steps in translating the hand sign signal. The algorithm translates the numerical hand sign signal, and the results are displayed on the seven-segment display. Altera's Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. With SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly fashion, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Being able to recognize hand sign signals from images can help humans control a robot and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
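The final display step, mapping a recognized digit to a seven-segment pattern, can be sketched as below; the segment bit order (a through g) and active-high encoding are assumptions, since the abstract does not specify them.

```python
# Illustrative seven-segment lookup for displaying a recognized digit.
# Bit order: a, b, c, d, e, f, g (most significant bit = segment a,
# 1 = segment lit). Encoding is an assumption for illustration.
SEVEN_SEG = {
    0: 0b1111110,
    1: 0b0110000,
    2: 0b1101101,
    3: 0b1111001,
    4: 0b0110011,
    5: 0b1011011,
    6: 0b1011111,
    7: 0b1110000,
    8: 0b1111111,
    9: 0b1111011,
}

def encode_digit(d):
    """Return the segment pattern for a recognized digit 0-9."""
    return SEVEN_SEG[d]
```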
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer – IJARIIT
Gesture recognition is the method of identifying and understanding meaningful movements of the arms, hands, face, or sometimes the head. It is one of the most important aspects of the human-computer interface, and there has been continuous research in this field because of its applicability to user interfaces. Gesture recognition is an important area of research for engineers and scientists, and industry is currently working on different implementations of trouble-free, natural products that are easy to handle. This paper proposes a method to work with motion sensors and interpret hand motion for various applications in a virtual interface. Micro-Electro-Mechanical Systems (MEMS) accelerometers are used to capture dynamic hand gestures. The sensor readings are transferred to a microcontroller, from where the data are transmitted wirelessly to the computer system for actual processing with various algorithms.
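As a hedged illustration of how accelerometer readings might be interpreted as gestures, the sketch below classifies static tilt from a 3-axis MEMS accelerometer; the thresholds, axis conventions, and gesture names are hypothetical, not taken from the paper.

```python
# Minimal tilt-based gesture classification from gravity-normalized
# accelerometer readings (in units of g). Thresholds and gesture
# names are assumptions for illustration only.
TILT_THRESHOLD = 0.5  # fraction of 1 g along an axis

def classify_tilt(ax, ay, az):
    """Classify a static tilt gesture from accelerometer axes."""
    if ay > TILT_THRESHOLD:
        return "FORWARD"
    if ay < -TILT_THRESHOLD:
        return "BACKWARD"
    if ax > TILT_THRESHOLD:
        return "RIGHT"
    if ax < -TILT_THRESHOLD:
        return "LEFT"
    return "NEUTRAL"  # device roughly flat: no command
```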
Gesture recognition using artificial neural network, a technology for identify... – NidhinRaj Saikripa
The presentation describes a technology for identifying body motions, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language as well. The technology is aimed at speech-impaired individuals.
Continuous efforts are being made toward developing an intelligent and natural interface between computer systems and users, and with today's technologies this has become possible through a variety of media such as visualization, audio, and paint. Gesture has become an important part of human communication for conveying information. In this paper we propose a method for hand gesture recognition that includes hand segmentation, hand tracking, and an edge traversal algorithm. We have designed a system whose hardware is limited to a computer and a webcam. The system consists of four modules: hand tracking and segmentation, feature extraction, neural training, and testing. The objective of this system is to explore the utility of a neural-network-based approach to the recognition of hand gestures and to create a system that easily identifies gestures and uses them for device control and for conveying information, instead of conventional input devices such as the mouse and keyboard.
An Approach for Object and Scene Detection for Blind Peoples Using Vocal Vision – IJERA Editor
This system helps blind people navigate without the help of a third person, so that a blind person can work independently. The system is implemented on an Android device, on which object detection and scene detection are performed; after detection, text-to-speech conversion runs, so the user receives the message from the device through connected headphones. The project helps blind people understand images, which are converted to sound with the help of a webcam. Images are captured in front of the blind person and processed by our algorithms, which enhance the image data. The hardware component has its own database, and the processed image is compared against it; the result of processing and comparison is converted into speech signals, and the headphones guide the blind person.
Natural Hand Gestures Recognition System for Intelligent HCI: A Survey – Editor IJCATR
Gesture recognition means recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. Hand gestures are of great importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. This paper provides a survey of various recent gesture recognition approaches, with particular emphasis on hand gestures. A review of static hand posture methods is given, covering the different tools and algorithms applied to gesture recognition systems, including connectionist models, hidden Markov models, and fuzzy clustering. Challenges and future research directions are also highlighted.
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co... – Waqas Tariq
This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm; three segmentation algorithms using different color spaces, with the required morphological processing, were utilized. The hand tracking and segmentation (HTS) algorithm is found to be the most efficient at handling the challenges of vision-based systems, such as skin color detection, complex background removal, and variable lighting conditions. Noise may sometimes remain in the segmented image due to a dynamic background, so an edge traversal algorithm was developed and applied to the segmented hand contour to remove unwanted background noise.
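One of the simple color-space rules such a comparison could include can be sketched as below; this RGB heuristic is a common rule from the skin-detection literature, not the paper's HTS algorithm.

```python
# A classic rule-based RGB skin heuristic, shown only as an example
# of color-space segmentation; it is not the paper's HTS algorithm.
def is_skin_rgb(r, g, b):
    """Return True if an (r, g, b) pixel matches the skin heuristic."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def segment(image):
    """image: grid of (r, g, b) tuples; returns a binary 0/1 mask."""
    return [[1 if is_skin_rgb(*px) else 0 for px in row] for row in image]
```

Morphological cleanup (opening/closing) would normally follow the raw mask, as the abstract notes.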
Gesture recognition is a rapidly growing technology, and this PPT describes how gesture recognition works, its sub-fields, its applications, and the challenges it faces.
A Framework For Dynamic Hand Gesture Recognition Using Key Frames Extraction – NEERAJ BAGHEL
Abstract: Hand gesture recognition is one of the natural ways of human-computer interaction (HCI) and has a wide range of technological as well as social applications. A dynamic hand gesture can be characterized by its shape, position, and movement. This paper presents a user-independent framework for dynamic hand gesture recognition in which a novel algorithm for the extraction of key frames is proposed. The algorithm is based on changes in hand shape and position, using certain parameters and a dynamic threshold to find the most important and distinguishing frames in the video of the hand gesture. For classification, a multiclass support vector machine (MSVM) is used. Experiments using videos of hand gestures from Indian Sign Language show the effectiveness of the proposed system for various dynamic hand gestures. The key frame extraction algorithm speeds up the system by selecting essential frames and thereby eliminating extra computation on redundant frames.
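A much-simplified sketch of the key-frame idea follows, assuming a scalar per-frame descriptor and a dynamic threshold proportional to the mean inter-frame change; the paper's actual shape and position parameters are not reproduced here.

```python
# Simplified key-frame selection: keep a frame when its descriptor
# distance from the last kept frame exceeds a dynamic threshold
# derived from the sequence itself. Feature choice and threshold
# rule are illustrative stand-ins for the paper's parameters.
def extract_key_frames(features, k=1.0):
    """features: list of scalar frame descriptors (e.g. hand centroid x).

    The threshold is k times the mean inter-frame change, so fast and
    slow gestures are treated comparably. Returns kept frame indices.
    """
    if len(features) < 2:
        return list(range(len(features)))
    diffs = [abs(b - a) for a, b in zip(features, features[1:])]
    threshold = k * sum(diffs) / len(diffs)
    keys = [0]  # always keep the first frame
    for i in range(1, len(features)):
        if abs(features[i] - features[keys[-1]]) > threshold:
            keys.append(i)
    return keys
```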
Gesture recognition is the process of understanding and interpreting meaningful movements of the hands, arms, face, or sometimes the head. It is of great importance in designing an efficient human-computer interface, and the technology has been studied intensively in recent years because of its potential for application in user interfaces.
Real Time Hand Gesture Recognition Based Control of Arduino Robot – ijtsrd
This project aims to deliver one of the many applications of the Internet of Things (IoT) domain, now also termed the Internet of Everything (IoE) given its involvement in almost everything in our day-to-day lives; it combines image processing and robotics. With increased dependence on home automation and wirelessly controlled equipment and gadgets, the project also has scope in home automation. To improve the functionality and efficiency of such gadgets, we set out to create a device that caters to these needs while exploring our own interest in the IoT field. This paper presents the design, functioning, and successful testing of a rover controlled wirelessly with the help of hand gestures. Gesture recognition has played an important role in the field of human-computer interaction (HCI), and vision-based hand gesture recognition offers a great solution for machine vision applications by providing an easy interaction channel. For automated machine control applications, an effective real-time communication approach is required; this paper therefore presents a vision-based hand gesture approach for real-time control of an Arduino-based robot, combining these concepts into a single device that recognizes hand gestures and sends data wirelessly to remote devices for surveillance and other purposes.
Dr. Sunil Chavan | Prof. Revti Jadhav | Jayesh Mankar | Suyash Thakur | Shahid Ansari | Vatsal Panchal, "Real-Time Hand Gesture Recognition Based Control of Arduino Robot", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-4, June 2022. URL: https://www.ijtsrd.com/papers/ijtsrd49934.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/49934/realtime-hand-gesture-recognition-based-control-of-arduino-robot/dr-sunil-chavan
This paper presents the maneuvering of the mouse pointer and the performance of various mouse operations, such as left click, right click, double click, and drag, using a gesture recognition technique. Recognizing gestures is a complex task that involves many aspects, such as motion modeling, motion analysis, pattern recognition, and machine learning. Keeping these essential factors in mind, a system has been created that recognizes the movement of fingers and the various patterns they form. Colored caps are placed on the fingers to distinguish them from background colors such as skin color. By recognizing these gestures, various mouse events are performed. The application was created in the MATLAB environment on Windows 7.
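The gesture-to-mouse-event dispatch could be sketched as below; the cap-color patterns and event names are illustrative assumptions, not the paper's MATLAB implementation.

```python
# Hypothetical mapping from detected finger-cap color patterns to
# mouse actions; the encoding is an assumption for illustration.
MOUSE_EVENTS = {
    ("red",): "move",
    ("red", "green"): "left_click",
    ("red", "blue"): "right_click",
    ("red", "green", "green"): "double_click",
    ("red", "green", "blue"): "drag",
}

def mouse_event(caps):
    """Return the mouse action for a sequence of detected cap colors.

    Unrecognized patterns (including an empty frame) produce no event.
    """
    return MOUSE_EVENTS.get(tuple(caps), "none")
```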
Hand gesture recognition systems have received great attention in recent years because of their manifold applications and their ability to let humans interact with machines efficiently. In this work, hand segmentation using color models is introduced for obtaining hand gestures, i.e., detecting the user's hand via a color segmentation technique, for faster, more robust, more accurate, real-time applications. Many such color models are available for human hand and skin detection in the field of image processing, each with relative advantages and disadvantages. For hand segmentation, a mixed-model approach has been adopted to obtain the best results in detecting the hand in an image. The proposed approach is found to be accurate and effective under multiple conditions.
This presentation deals with the combination of Sixth Sense technology and robotics: autonomous robots are controlled using basic hand gestures through Sixth Sense technology.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. Articles published in the journal can be accessed online.
Student information management system project report ii.pdf – Kamal Acharya
Our project concerns student management. It covers the various actions related to student details, making it easy to add, edit, and delete student records, and it provides a less time-consuming process for viewing, adding, editing, and deleting students' marks.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access; IDM 8000 CCR remote control over serial and TCP protocols.
• Remote control: parallel or serial interface.
• Compatible with the MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with backplane-mount serial communication.
• Compatible with commercial and defence aviation CCR systems.
• Remote control system for accessing the CCR and allied systems over serial or TCP.
• Indigenized local support/presence in India.
• Easy configuration using DIP switches.
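As a small illustration of DIP-switch configuration, the sketch below decodes switch positions into a device address; the bit order and width are assumptions, not taken from the product sheet.

```python
# Decode DIP-switch positions into a device address; switch 1 is
# assumed to be the least significant bit (an illustrative choice).
def dip_address(switches):
    """switches: list of booleans, one per DIP switch position."""
    return sum(1 << i for i, on in enumerate(switches) if on)
```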
Immunizing Image Classifiers Against Localized Adversary Attacks – gerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks (CNNs), to adversarial attacks, and presents a proactive training technique designed to counter them. We introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations. When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves the immunity of models against localized universal attacks, by up to 40%. We evaluate our proposed approach using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10 and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing accuracy improvements over previous techniques. The results indicate that the combination of volumetric input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating adversarial training.
Automobile Management System Project Report.pdfKamal Acharya
The proposed project is developed to manage the automobile in the automobile dealer company. The main module in this project is login, automobile management, customer management, sales, complaints and reports. The first module is the login. The automobile showroom owner should login to the project for usage. The username and password are verified and if it is correct, next form opens. If the username and password are not correct, it shows the error message.
When a customer search for a automobile, if the automobile is available, they will be taken to a page that shows the details of the automobile including automobile name, automobile ID, quantity, price etc. “Automobile Management System” is useful for maintaining automobiles, customers effectively and hence helps for establishing good relation between customer and automobile organization. It contains various customized modules for effectively maintaining automobiles and stock information accurately and safely.
When the automobile is sold to the customer, stock will be reduced automatically. When a new purchase is made, stock will be increased automatically. While selecting automobiles for sale, the proposed software will automatically check for total number of available stock of that particular item, if the total stock of that particular item is less than 5, software will notify the user to purchase the particular item.
Also when the user tries to sale items which are not in stock, the system will prompt the user that the stock is not enough. Customers of this system can search for a automobile; can purchase a automobile easily by selecting fast. On the other hand the stock of automobiles can be maintained perfectly by the automobile shop manager overcoming the drawbacks of existing system.
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams from the hydrologist’s survey of the valley before construction, all aspects and involved disciplines, fluid dynamics, structural engineering, generation and mains frequency regulation to the very transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Password Based Hand Gesture Controlled Robot
Shanmukha Rao, Int. Journal of Engineering Research and Applications, ISSN: 2248-9622, Vol. 6, Issue 4 (Part 3), April 2016, pp. 63-69, www.ijera.com
Password Based Hand Gesture Controlled Robot
¹Shanmukha Rao, ²CH Rajasekhar
¹Assistant Professor, Dept. of E.C.E., M.V.G.R., India
²Student, Final Year B.Tech, Dept. of E.C.E., M.V.G.R., India
ABSTRACT
Gesture is one of the most natural ways of communication between human and computer in a real system, and the hand gesture is one of the important methods of non-verbal communication for humans. A simulation tool, MATLAB based colour image processing, is used to recognize the hand gestures, and with the help of wireless communication it is easier to interact with the robot. The objective of this project is to build a password protected wireless gesture controlled robot using Arduino and an RF transmitter and receiver module. Continuous images are processed and, according to the number of fingers detected, a command signal is sent to the Arduino Uno microcontroller, which passes the command to the RF transmitter; the signal is picked up by the RF receiver and processed at the receiver end, which drives the motors in a particular direction. The robot moves forward, backward, left and right when we show one, two, three or four fingers (fingers marked with a red colour band or tape) respectively. As soon as the hand is moved out of the frame, the robot stops immediately. This system can be used by physically disabled people who cannot use their hands to move a wheelchair, and also in various military applications where radioactive substances that cannot be touched by the human hand must be handled.
I. INTRODUCTION
Robotics is an emerging technology and robots can replace humans in many tasks, yet they still need to be controlled by humans. Robots can be wired or wireless, each with a controller device, and each approach has its own pros and cons. Beyond controlling a robotic system through physical devices, the recent method of gesture control has become very popular. The main purpose of using gestures is that they provide a more natural way of controlling and a rich, intuitive form of interaction with the robotic system.

Based on the captured hand gesture, the finger count is determined at the transmitter side and sent to the receiver. The command signal is then processed in the receiver Arduino board: if the count is one, the robot moves forward; if the count is two, it moves backward; and similarly for the remaining directions. Based on the received command signal, the Arduino runs a program which drives the car. The car moves as long as the input gesture is shown to the web camera; as soon as the hand is moved out of the frame, it stops immediately. This can be used by physically disabled people who cannot use their hands to move a wheelchair. It can also be used in various military applications where radioactive substances cannot be touched by the human hand; here, hand gesture controlled robotic arms can be used.

These days many types of wireless robots are being developed and put to varied applications and uses. Human hand gestures are natural, and with the help of wireless communication it is easier to interact with the robot in a friendly way.
II. BLOCK DIAGRAM
From the block diagram it is observed that video capturing starts and each frame is analysed only after the user enters the correct password. The entered password is compared with a predefined string in the code, and if it matches, the program enters the main loop. In the main loop, video capturing is started and each frame is analysed. The red coloured objects in the image are tracked and their count is taken: the RGB image is first converted into a grey scale image, and from grey scale to a binary image using thresholding. For each count a number is assigned, and that number is sent to the Arduino at the transmitter side using serial communication, which then transmits the signal. At the receiver, the motors are driven based on the command signal. A 433 MHz transmitter and receiver pair is used for transmission and reception, and it can be simply programmed using the Arduino Uno board.
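The password gate and count-to-command mapping described above can be sketched in a few lines. This is a hedged illustration in Python rather than the authors' actual MATLAB code; the stored password and command names are placeholders, not values from the project.

```python
# Sketch of the control flow in the block diagram: verify a password,
# then map a detected finger count to a drive command.
# "robot123" and the command names are illustrative placeholders.

COMMANDS = {1: "FORWARD", 2: "BACKWARD", 3: "LEFT", 4: "RIGHT"}

def check_password(entered, stored="robot123"):
    """Return True only if the entered password matches the stored one."""
    return entered == stored

def command_for_count(count):
    """Translate a finger count into a motion command; anything else stops."""
    return COMMANDS.get(count, "STOP")

if check_password("robot123"):
    print(command_for_count(1))  # FORWARD
    print(command_for_count(0))  # STOP
```

Any count outside 1-4 (including zero red objects when the hand leaves the frame) maps to a stop, matching the behaviour described in the abstract.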
III. WHY GESTURE
Gesture recognition enables humans to communicate with the machine (human-machine interaction, HMI) and to interact naturally without any mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mice, keyboards and even touch-screens redundant.

Gesture recognition can be conducted with techniques from computer vision and image processing. The literature includes ongoing work in the computer vision field on capturing gestures, or more general human pose and movements, with cameras connected to a computer.

In computer interfaces, two types of gestures are distinguished. Online gestures are processed during the interaction and can be regarded as direct manipulations, like scaling and rotating. In contrast, offline gestures are processed after the interaction is finished; for example, a circle is drawn to activate a context menu.
IV. APPLICATIONS
In the near future, gesture recognition technology will routinely add yet another dimension to human interactions with consumer electronic devices such as PCs, media tablets and smartphones.

Camera-based tracking for gesture recognition has been in use for some time. The leading video game consoles, Microsoft's Xbox and Sony's PlayStation, both have gesture recognition built in, known as Kinect and PlayStation Eye respectively. These devices are in their seventh and eighth generations. However, several challenges remain for gesture recognition technology on mobile devices, including the effectiveness of the technology in adverse light conditions, variations in the background, and high power consumption.

Gesture control is used mostly in military applications, industrial robotics, construction vehicles on the civil side, and medical applications such as surgery. In these fields it is quite complicated to control a robot or machine with a remote or switches, and the operator may get confused among the switches and buttons themselves. So a new concept is introduced: controlling the machine with the movement of the hand, which simultaneously controls the movement of the robot.

Gesture in military applications

For handicapped patients

Some patients cannot control a wheelchair with their arms. The wheelchair is instead operated with the help of gestures, simply by counting the fingers, which in turn control the wheelchair. The wheelchair moves forward, backward, right and left, so disabled and partially paralysed patients can move freely.

Gesture Controlled Wheel Chair
Components used in this project:
1) Web cam
2) Arduino Uno (microcontroller board)
3) 433 MHz RF transmitter and receiver
4) L293D motor driver IC
5) Chassis and wheels
6) DC motors
7) PC with MATLAB and Arduino software installed
CBIR (Content-Based Image Retrieval) is the process of retrieving images from a database or library of digital images according to the visual content of the images; in other words, retrieving images that have similar content in colours, textures or shapes. Images have always been an inevitable part of human communication, with roots going back millennia. Images make the communication process more interesting, illustrative, elaborate, understandable and transparent. In a CBIR system, it is usual to group the image features into three main classes: colour, texture and shape. Ideally, these features should be integrated to provide better discrimination in the comparison process. Colour is by far the most common visual feature used in CBIR, primarily because of the simplicity of extracting colour information from images. Extracting shape and texture features is a much more complex and costly task, usually performed after the initial filtering provided by colour features.

Many applications require simple methods for comparing pairs of images based on their overall appearance. For example, a user may wish to retrieve all images similar to a given image from a large database of images. Colour histograms are a popular solution to this problem: a histogram describes the grey-level or colour distribution for a given image, is computationally efficient, and is generally insensitive to small changes in camera position. Colour histograms also have some limitations. A colour histogram provides no spatial information; it merely describes which colours are present in the image, and in what quantities. In addition, colour histograms are sensitive to both compression artifacts and changes in overall image brightness. The main things required to design a histogram-based method are an appropriate colour space, a colour quantization scheme, a histogram representation, and a similarity metric. A digital image in this context is a set of pixels, each representing a colour. Colours can be represented using different colour spaces depending on the standards used by the researcher or on the application, such as Red-Green-Blue (RGB), Hue-Saturation-Value (HSV), YIQ or YUV.
V. IMAGE CONVERSION
True colour image: it is also known as an RGB image. A true colour image is an image in which each pixel is specified by three values, one each for the red, green, and blue components of the pixel. It is an m-by-n-by-3 array of class uint8, uint16, single, or double whose pixel values specify intensity values. For single or double arrays, values range over [0, 1]; for uint8, over [0, 255]; and for uint16, over [0, 65535].

Grey scale image: it is also known as an intensity, grey scale, or grey level image. It is an array of class uint8, uint16, int16, single, or double whose pixel values specify intensity values. For single or double arrays, values range over [0, 1]; for uint8, over [0, 255]; for uint16, over [0, 65535]; and for int16, over [-32768, 32767].

RGB to grey scale conversion: of the three colours, red has the longest wavelength, while green not only has a shorter wavelength than red but is also the colour that gives the most soothing effect to the eyes. This means the contribution of the red colour should be decreased, the contribution of the green colour increased, and the blue colour's contribution put in between these two.

So the equation formed is:

New grayscale value = (0.3 * R) + (0.59 * G) + (0.11 * B)

According to this equation, red contributes 30%, green contributes 59% (the greatest of the three) and blue contributes 11%. Applying this equation to an image gives its grey scale version.
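The weighted-sum conversion above can be illustrated per pixel. This is a minimal Python sketch (the paper's processing was done in MATLAB); the sample pixel values are illustrative only.

```python
# grey = 0.3*R + 0.59*G + 0.11*B, applied to each pixel independently
def rgb_to_grey(pixel):
    """Convert one (R, G, B) pixel to a single grey intensity value."""
    r, g, b = pixel
    return 0.3 * r + 0.59 * g + 0.11 * b

# Pure red, green and blue pixels on the uint8 scale [0, 255]:
# red yields 30% of 255, green 59%, blue 11%.
for p in [(255, 0, 0), (0, 255, 0), (0, 0, 255)]:
    print(p, "->", rgb_to_grey(p))
```

Because the three weights sum to 1.0, a white pixel (255, 255, 255) maps to full intensity 255 and the result always stays within the input range.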
What is Arduino: Arduino is an open-source prototyping platform based on easy-to-use hardware and software. Arduino boards are able to read inputs (light on a sensor, a finger on a button, or a Twitter message) and turn them into an output: activating a motor, turning on an LED, publishing something online. You can tell your board what to do by sending a set of instructions to the microcontroller on the board. To do so you use the Arduino programming language (based on Wiring) and the Arduino Software (IDE), based on Processing.

Over the years Arduino has been the brain of thousands of projects, from everyday objects to complex scientific instruments. A worldwide community of makers (students, hobbyists, artists, programmers, and professionals) has gathered around this open-source platform, and their contributions have added up to an incredible amount of accessible knowledge that can be of great help to novices and experts alike.
Arduino Uno R3
VI. ARDUINO MICROCONTROLLER
A microcontroller is a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals. The important part is that a microcontroller contains the processor (which all computers have), memory, and some input/output pins that it can control (often called GPIO, General Purpose Input/Output pins).

The Arduino Uno is a microcontroller board based on the ATmega328 (see its datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started.

The Uno differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip. Instead, it features the ATmega16U2 (ATmega8U2 up to version R2) programmed as a USB-to-serial converter.
SPECIFICATIONS:
Microcontroller: ATmega328
Operating Voltage: 5 V
Input Voltage (recommended): 7-12 V
Input Voltage (limits): 6-20 V
Digital I/O Pins: 14
Analog Input Pins: 6
DC Current per I/O Pin: 40 mA
DC Current for 3.3 V Pin: 50 mA
Flash Memory: 32 KB (ATmega328), of which 0.5 KB is used by the bootloader
SRAM: 2 KB
EEPROM: 1 KB
Clock Speed: 16 MHz
6.2 Matlab Functions used:
1. FramesPerTrigger: Specify number of frames
to acquire per trigger using selected video source
Description: The FramesPerTrigger property
specifies the number of frames the video input
object acquires each time it executes a trigger using
the selected video source.
When the value of the FramesPerTrigger property
is set to Inf, the object keeps acquiring frames until
an error occurs or you issue a stop command.
2. ReturnedColorSpace: Specify the color space used in MATLAB.
Description: The ReturnedColorSpace property specifies the color space you want the toolbox to use when it returns image data to the MATLAB workspace. This is only relevant when you are accessing acquired image data with the getsnapshot, getdata, and peekdata functions.
3. bwareaopen: Remove small objects from a binary image.
Syntax
BW2 = bwareaopen(BW, P)
BW2 = bwareaopen(BW, P, conn)
Description: BW2 = bwareaopen(BW, P) removes
from a binary image all connected components
(objects) that have fewer than P pixels, producing
another binary image, BW2. This operation is
known as an area opening. The default connectivity
is 8 for two dimensions, 26 for three dimensions,
and conndef(ndims(BW), 'maximal') for higher
dimensions.
BW2 = bwareaopen(BW, P, conn) specifies the
desired connectivity. conn can have any of the
following scalar values.
4. bwlabel: Label connected components in 2-D
binary image.
Syntax
L = bwlabel(BW, n)
[L, num] = bwlabel(BW, n)
Description: L = bwlabel(BW, n) returns a matrix
L, of the same size as BW, containing labels for the
connected objects in BW. The variable n can have a
value of either 4 or 8, where 4 specifies 4-
connected objects and 8 specifies 8-connected
objects. If the argument is omitted, it defaults to 8.
The elements of L are integer values greater than or
equal to 0. The pixels labeled 0 are the background.
The pixels labelled 1 make up one object; the
pixels labeled 2 make up a second object; and so
on.
[L, num] = bwlabel(BW, n) returns in num the
number of connected objects found in BW.
The functions bwlabel, bwlabeln, and bwconncomp
all compute connected components for binary
images. bwconncomp replaces the use of bwlabel
and bwlabeln. It uses significantly less memory and
is sometimes faster than the other functions.
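The idea behind bwlabel, grouping touching foreground pixels into numbered objects, can be sketched with a simple flood fill. This is a hedged pure-Python illustration of 8-connected labelling, not MATLAB's actual algorithm, which uses more efficient internal routines.

```python
# Pure-Python sketch of 8-connected component labelling in the spirit
# of bwlabel: label each connected group of 1-pixels in a binary grid.
def label_components(bw):
    """Return (labels, num): a label matrix and the object count."""
    rows, cols = len(bw), len(bw[0])
    labels = [[0] * cols for _ in range(rows)]
    num = 0
    for i in range(rows):
        for j in range(cols):
            if bw[i][j] == 1 and labels[i][j] == 0:
                num += 1                      # found a new object
                stack = [(i, j)]              # flood-fill it
                labels[i][j] = num
                while stack:
                    r, c = stack.pop()
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if (0 <= rr < rows and 0 <= cc < cols
                                    and bw[rr][cc] == 1 and labels[rr][cc] == 0):
                                labels[rr][cc] = num
                                stack.append((rr, cc))
    return labels, num

# Two separate blobs -> two labels (this mimics counting red-marked fingers):
bw = [[1, 1, 0, 0],
      [1, 0, 0, 1],
      [0, 0, 0, 1]]
labels, num = label_components(bw)
print(num)  # 2
```

The returned count is exactly the quantity this project uses: the number of distinct red objects (fingers) in the binary mask of a frame.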
5. graythresh: Global image threshold using Otsu's method.
Syntax
level = graythresh(I)
[level EM] = graythresh(I)
Description: level = graythresh(I) computes a
global threshold (level) that can be used to convert
an intensity image to a binary image with im2bw.
level is a normalized intensity value that lies in the
range [0, 1].
The graythresh function uses Otsu's method, which
chooses the threshold to minimize the intraclass
variance of the black and white pixels.
Multidimensional arrays are converted
automatically to 2-D arrays using reshape. The
graythresh function ignores any nonzero imaginary
part of I.
[level EM] = graythresh(I) returns the effectiveness
metric, EM, as the second output argument. The
effectiveness metric is a value in the range [0 1]
that indicates the effectiveness of the thresholding
of the input image. The lower bound is attainable
only by images having a single gray level, and the
upper bound is attainable only by two-valued
images.
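Otsu's method can be sketched over an 8-bit histogram in a few lines. This is a hedged, minimal Python illustration of the idea described above; the exact level returned for flat (tied) regions of the histogram may differ from MATLAB's implementation.

```python
# Minimal sketch of Otsu's method: choose the threshold that maximizes
# the between-class variance (equivalent to minimizing the intraclass
# variance of the black and white pixels), then normalize to [0, 1].
def otsu_threshold(pixels):
    """Return a normalized threshold level in [0, 1] for uint8 pixels."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_between = 0, -1.0
    w_b, sum_b = 0, 0.0
    for t in range(256):
        w_b += hist[t]                  # background weight so far
        sum_b += t * hist[t]            # background intensity sum so far
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground weight
        if w_f == 0:
            break
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (mu_b - mu_f) ** 2
        if between > best_between:
            best_between, best_t = between, t
    return best_t / 255.0

# Two well-separated intensity clusters -> the level falls between them:
level = otsu_threshold([10] * 50 + [200] * 50)
print(level)
```

Feeding this level into a binarization step then cleanly separates the dark cluster from the bright one, which is what the project relies on to isolate the marked fingers.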
6. im2bw: Convert image to binary image, based
on threshold.
Syntax
BW = im2bw(I, level)
BW = im2bw(X, map, level)
BW = im2bw(RGB, level)
Description: BW = im2bw(I, level) converts the
grayscale image I to a binary image. The output
image BW replaces all pixels in the input image
with luminance greater than level with the value 1
(white) and replaces all other pixels with the value
0 (black). Specify level in the range [0,1]. This
range is relative to the signal levels possible for the
image's class. Therefore, a level value of 0.5 is
midway between black and white, regardless of
class. To compute the level argument, you can use
the function graythresh. If you do not specify level,
im2bw uses the value 0.5.
BW = im2bw(X, map, level) converts the indexed
image X with colormap map to a binary image.
BW = im2bw(RGB, level) converts the truecolor
image RGB to a binary image.
If the input image is not a grayscale image, im2bw
converts the input image to grayscale, and then
converts this grayscale image to binary by
thresholding.
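The thresholding rule im2bw applies can be shown directly. This is a hedged Python sketch for the uint8 case, where the normalized level maps to a cutoff of level * 255; the sample image is illustrative.

```python
# Sketch of im2bw's behaviour on a uint8 image: pixels with luminance
# greater than the normalized level become 1 (white), all others 0 (black).
def im2bw_uint8(gray, level=0.5):
    """Binarize a uint8 grey image (list of rows) at a normalized level."""
    cutoff = level * 255            # [0, 1] level scaled to the uint8 range
    return [[1 if px > cutoff else 0 for px in row] for row in gray]

gray = [[10, 200], [127, 128]]
print(im2bw_uint8(gray))  # level 0.5 -> cutoff 127.5 -> [[0, 1], [0, 1]]
```

Note the strict "greater than": at level 0.5, a uint8 pixel of exactly 127 falls to black while 128 goes to white, mirroring the midway behaviour described above.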
7. imsubtract: Subtract one image from another or
subtract constant from image
Syntax
Z = imsubtract(X,Y)
Description: Z = imsubtract(X,Y) subtracts each
element in array Y from the corresponding element
in array X and returns the difference in the
corresponding element of the output array Z. X and
Y are real, nonsparse numeric arrays of the same
size and class, or Y is a double scalar. The array
returned, Z, has the same size and class as X unless
X is logical, in which case Z is double.
If X is an integer array, elements of the output that
exceed the range of the integer type are truncated,
and fractional values are rounded.
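The truncation behaviour for integer arrays can be illustrated for the uint8 case, where any negative difference clips to 0. This is a hedged Python sketch, not MATLAB's implementation; in red-object tracking, a common pattern is subtracting the grey image from the red channel to highlight strongly red regions.

```python
# Sketch of imsubtract's integer behaviour: element-wise X - Y with
# results truncated to the valid range of the class (uint8 here, so
# differences below 0 clip to 0).
def imsubtract_uint8(X, Y):
    """Element-wise saturating subtraction of two uint8 images."""
    return [[max(0, x - y) for x, y in zip(rx, ry)]
            for rx, ry in zip(X, Y)]

X = [[100, 50], [255, 0]]
Y = [[40, 60], [255, 10]]
print(imsubtract_uint8(X, Y))  # [[60, 0], [0, 0]]
```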
8. medfilt2: 2-D median filtering
Syntax
B = medfilt2(A, [m n])
B = medfilt2(A)
B = medfilt2(A, 'indexed', ...)
B = medfilt2(..., padopt)
Description: Median filtering is a nonlinear
operation often used in image processing to reduce
"salt and pepper" noise. A median filter is more
effective than convolution when the goal is to
simultaneously reduce noise and preserve edges.
B = medfilt2(..., padopt) controls how the matrix
boundaries are padded. padopt may be 'zeros' (the
default), 'symmetric', or 'indexed'. If padopt is
'symmetric', A is symmetrically extended at the
boundaries. If padopt is 'indexed', A is padded with
ones if it is double; otherwise it is padded with
zeros.
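The default 3-by-3 median filtering with zero padding can be sketched in pure Python. This is a hedged illustration of the behaviour described above, not MATLAB's optimized implementation; the sample image is illustrative.

```python
# Sketch of medfilt2's default behaviour: a 3-by-3 median filter with
# zero padding at the matrix boundaries ('zeros' padopt).
def medfilt3x3(A):
    """Apply a 3x3 median filter to a 2-D list, zero-padding the borders."""
    rows, cols = len(A), len(A[0])
    def at(r, c):
        return A[r][c] if 0 <= r < rows and 0 <= c < cols else 0
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = sorted(at(r + dr, c + dc)
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]   # median of the 9 window values
    return out

# A single "salt" pixel in a flat region is removed by the median:
A = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(medfilt3x3(A)[1][1])  # 10
```

The salt pixel vanishes while the flat neighbourhood is preserved, which is exactly why a median filter beats convolution when noise must be removed without blurring edges.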
9. rgb2gray: Convert RGB image or colormap to
grayscale
Syntax
I = rgb2gray(RGB)
newmap = rgb2gray(map)
Description: I = rgb2gray(RGB) converts the
truecolor image RGB to the grayscale intensity
image I. rgb2gray converts RGB images to
grayscale by eliminating the hue and saturation
information while retaining the luminance.
newmap = rgb2gray(map) returns a grayscale
colormap equivalent to map.
P-code: The project is password protected, so only the user with the right password can access the robot. The program is converted into P-code (MATLAB pseudocode) in order to protect the source code and the password in the program.

The P-files are much smaller than a zipped version of the M-files, which seems to imply that the P-files are byte-coded in some form. For large M-files with tens of thousands of lines, opening the P-file for the first time is faster than opening the M-file, though this can be an effect of the file size. P-code files are encrypted and are very hard to decrypt to recover the original source code.
VII. WORKING
In this project, video capture through the web camera is initiated first, and then each frame obtained is analysed using image processing techniques. Content-based image retrieval is used, in which only the colour of the object is considered. With some red colour put on the fingers, the fingers are shown to the web camera. From each frame, the number of red coloured objects is analysed and a count is made.

Based on the count, MATLAB sends a command to the Arduino microcontroller connected to the USB port of the computer. A 433 MHz RF transmitter is connected to this Arduino board. Based on the command received, the Arduino Uno microcontroller uses the transmitter to transmit the signal to the robot car. At the receiver, another Arduino Uno is connected to the 433 MHz receiver; based on the received value, the microcontroller analyses the code and runs the DC motors accordingly, which are connected to the L293D motor driver IC.
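The receiver-side decision, turning each received command into directions for the two DC motors behind the L293D, can be sketched as a small lookup. This is a hedged Python illustration; the motor state encoding (+1 forward, -1 reverse, 0 stop) and the differential-turn scheme are illustrative assumptions, not the authors' actual Arduino pin logic.

```python
# Sketch of the receiver-side logic: each command sets the two DC motors
# driven through the L293D. States are illustrative: +1 forward,
# -1 reverse, 0 stop, for the (left, right) motors respectively.
MOTOR_STATES = {
    "FORWARD":  (+1, +1),
    "BACKWARD": (-1, -1),
    "LEFT":     (-1, +1),   # left wheel back, right wheel forward -> turn left
    "RIGHT":    (+1, -1),   # mirror image -> turn right
    "STOP":     (0, 0),
}

def drive(command):
    """Return the (left, right) motor directions for a received command."""
    return MOTOR_STATES.get(command, (0, 0))   # unknown input -> stop

print(drive("FORWARD"))  # (1, 1)
```

Defaulting unknown or corrupted commands to a stop is a conservative choice for an RF link, where the received value may occasionally be garbled.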
VIII. RESULTS
First, upload the transmitter program to the Arduino board which is connected to the personal computer. Next, upload the receiver program to the Arduino board which is connected to the robot car. Then open the MATLAB software installed on the personal computer, open the MATLAB code and run the program.

Now, based on the input shown to the web cam (the number of fingers), the robot car moves in the corresponding direction, as shown below.

When one finger is shown to the webcam, the car moves in the forward direction.
Figure 1: Car moving in forward direction

When two fingers are shown to the webcam, it moves in the backward direction.
Figure 2: Car moving in backward direction

When three fingers are shown to the webcam, it moves towards the left side.
Figure 3: Car moving in left direction

When four fingers are shown to the webcam, it moves towards the right side.
Figure 4: Car moving in right direction

From the above pictures it is clearly observed that the robot's direction changes based on the input given to the web cam. The robot moves in a particular direction as long as the input gesture is shown to the webcam. If no input is shown to the webcam, the robot immediately stops moving.
IX. CONCLUSION
In this project the robot is controlled using hand gestures. The gestures are given as input to a web camera connected to the personal computer. The MATLAB code analyses the finger count and sends a command signal to the microcontroller connected to the personal computer via USB. The command signal is received by the microcontroller over serial communication and is then transmitted. At the receiver end, the microcontroller receives and analyses the signal, and based on the gesture shown, the wheels of the robot car are rotated. For example, if one finger is shown the car moves forward; if two fingers are shown, it moves backward; three fingers, left; and four fingers, right. To stop the robot, simply show no gesture to the web camera. This project is a prototype for a gesture controlled wheelchair for physically disabled people. It can also be used in nuclear plants, where radioactive material can be handled using a hand gesture controlled robotic arm.
Ch. Rajasekhar is pursuing his final year B.Tech in the Department of ECE at MVGR College of Engineering (2016). He is a member of the IEEE student chapter.

N. Shanmukha Rao completed his B.Tech from JNTU, Hyderabad in 2004 and his M.Tech from Andhra University. He is now working as an assistant professor in the Dept. of ECE, MVGR College of Engineering, Vizianagaram. He has 11 years of teaching experience and has published 3 journal papers.

Research areas: Signal Processing, Image Processing and Communication Systems.