The document describes a digital pen system for gesture recognition and control of devices. It contains an accelerometer to capture gesture data and a microcontroller to process the data and control output devices. The system can recognize digits 0-7 written in air and uses gestures to control a fan and light. It aims to provide an alternative interface for human-machine interaction, especially for disabled users. The hardware includes an ATmega32 microcontroller, accelerometer module, LCD display, and output devices. The system works by measuring acceleration data during gestures and identifying the gestures to control external devices.
The document details and evaluates different technologies for gesture recognition, including computer vision, accelerometers, and gloves. It provides a literature review of papers on vision-based and accelerometer-based gesture recognition techniques. The document proposes parameters for evaluating and comparing these technologies, such as resolution, accuracy, latency, range of motion, user comfort, and cost. It assigns weights to these parameters based on the goals of developing a gesture recognition system for research purposes.
Gesture recognition technology uses cameras to read human body movements and gestures as a form of input to control devices and applications. A camera captures gestures like hand movements and facial expressions and sends that data to a computer for interpretation. Gesture recognition allows humans to interact with machines naturally without physical devices by using gestures to control cursors, activate menus, or control games and other applications. There are different methods for capturing and interpreting gestures including using wired gloves, depth cameras, stereo cameras, single cameras, or motion controllers.
This document describes an ink-less electro pen device for human-computer interaction. The pen contains an accelerometer, microcontroller, and wireless transmission module. It allows a user to write digits and letters normally, which are then transmitted to a computer. The computer applies a trajectory recognition algorithm using features like zero-crossing points and range to identify the handwritten input. The algorithm segments acceleration data, extracts features, and uses a probabilistic neural network classifier to recognize digits and some letters with good accuracy. The portable pen provides a natural writing interface without needing keyboards. It aims to improve usability in applications where pen input is convenient.
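The zero-crossing and range features named above can be sketched per axis roughly as follows; this is a minimal illustration, with the stroke segmentation and the probabilistic neural network classifier omitted and the array shapes assumed.

```python
import numpy as np

def stroke_features(acc_segment):
    """Per-axis features for one segmented stroke of tri-axial acceleration
    samples, shape (N, 3): zero-crossing count and value range."""
    centered = acc_segment - acc_segment.mean(axis=0)      # remove per-axis bias/gravity
    signs = np.sign(centered)
    zero_crossings = np.sum(np.abs(np.diff(signs, axis=0)) > 0, axis=0)
    value_range = acc_segment.max(axis=0) - acc_segment.min(axis=0)
    return np.concatenate([zero_crossings, value_range])   # 6-element feature vector

# Example: a synthetic 2-second stroke sampled at 100 Hz
rng = np.random.default_rng(0)
segment = rng.normal(size=(200, 3))
print(stroke_features(segment))
```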
This document provides an overview of gesture recognition technology, including what gestures are, the history and basic workings of gesture recognition, different types of gesture recognition and sensing technologies, algorithms used, applications, and challenges. It discusses hand, facial, and sign language recognition and technologies like wired gloves, cameras, and controllers. Benefits include interacting without mouse/keyboard and with 3D environments without physical contact. Applications include rehabilitation, sign language, gaming, and assisting those with disabilities.
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer - IJARIIT
Gesture recognition is the method of identifying and interpreting meaningful movements of the arms, hands, face, or sometimes the head. It is one of the most important aspects of the human-computer interface, and research in this field continues because of its potential for user interfaces. Gesture recognition is an important area of research for engineers and scientists, and industry is working on implementations that are trouble-free, natural, and easy to handle. This paper proposes a method that works with motion sensors and interprets hand motion for various applications in a virtual interface. Micro-Electro-Mechanical Systems (MEMS) accelerometers capture the dynamic hand gesture; the sensor readings are sent to a microcontroller and from there transmitted wirelessly to a computer system, where the actual processing is carried out with various algorithms.
The International Journal of Engineering and Science - theijes
This document summarizes a research paper on a hand sign interpreter system that uses a sensor glove to recognize sign language gestures and translate them into voice signals in real time. The system aims to help normal people communicate more effectively with those who are speech impaired. It uses flex sensors on a glove to detect hand shapes and an accelerometer to detect hand orientations. The signals are fed to a microprocessor that analyzes the signals and retrieves the corresponding audio files from memory to be played through a speaker. The system is designed to be low-cost and portable compared to other sign language recognition systems on the market.
A Digital Pen with a Trajectory Recognition Algorithm - IOSR Journals
Abstract: Nowadays, advances in the miniaturization of electronic circuits and components have sharply reduced the size and weight of consumer electronic products such as smartphones and handheld computers, making them handier and more convenient. This paper presents an accelerometer-based digital pen for handwritten digit and gesture trajectory recognition. The digital pen consists of a triaxial accelerometer, a microcontroller, and a Zigbee wireless transmission module for sensing and collecting the accelerations of handwriting and gesture trajectories, enabling human-computer interaction. Users can write digits or make hand gestures with the pen, and the accelerations of the hand motions measured by the accelerometer are wirelessly transmitted to a computer for online trajectory recognition. By varying the position of the MEMS (micro-electro-mechanical systems) sensor, alphabetical characters can also be shown on the PC. Keywords - ARM, Zigbee, Sensor module
IRJET- A Survey on Control of Mechanical ARM based on Hand Gesture Recognitio... - IRJET Journal
This document summarizes a research paper that proposed a system using wearable IMU sensors and machine learning to recognize hand gestures and control a mechanical arm. The system uses an IMU-based wearable device to collect gesture data from hand movements. A support vector machine classifier is used to classify the gestures in real-time and control the movements of a mechanical arm. The paper reviews several related works that used different sensors and machine learning algorithms for hand gesture recognition, finding that support vector machines provided high accuracy for gesture classification. The proposed system aims to allow remote control of machines through natural hand gestures.
IRJET- Enhanced Look Based Media Player with Hand Gesture Recognition - IRJET Journal
The document describes a proposed enhanced media player that uses face detection and hand gesture recognition to control playback. Specifically, it will:
1. Continuously monitor the user's face using a webcam and only play the video when the user is looking at the screen, pausing otherwise.
2. Detect hand gestures, such as raising a hand, to increase or decrease the volume or to switch to the next or previous video.
3. The system is intended to provide a better media playback experience by automating control and preventing the user from missing parts of a video if they look away. Both face detection and hand gesture recognition are implemented using computer vision algorithms like HAAR cascades.
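A minimal sketch of the face-presence check with OpenCV's stock frontal-face Haar cascade might look like the following; the `set_playing` hook into the media player is a placeholder, not part of the original system.

```python
import cv2

# Standard frontal-face cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def user_is_watching(frame):
    """True if at least one frontal face is visible in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def set_playing(playing):
    # Placeholder for the actual media-player play/pause command.
    print("play" if playing else "pause")

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    set_playing(user_is_watching(frame))
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break
cap.release()
```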
Hand gesture recognition using ultrasonic sensor and atmega128 microcontroller - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document summarizes a research paper that proposes a multitasking stick to assist visually impaired people in navigating safely. The stick uses ultrasonic sensors to detect obstacles, a temperature sensor to detect high heat areas, electrodes to detect water, and a voice playback module to notify the user of detections. It also includes an RF module to help locate the stick if misplaced. The stick was tested indoors and outdoors and performed accurately in detecting obstacles of different materials at varying distances, demonstrating its effectiveness as an assistive device for the blind.
This document discusses the development of an Android application for physical activity recognition using the accelerometer sensor. It provides background on the Android operating system and its open development environment. It then summarizes relevant research papers on activity recognition using mobile sensors. The document outlines the process of collecting and labeling accelerometer data from smartphone sensors during different physical activities. Features are extracted from the sensor data and several machine learning classifiers are evaluated for activity recognition. The application will recognize activities and track metrics like calories burned, distance traveled, and implement fall detection and medical reminders.
Literature on safety module for vehicular driver assistance - IOSR Journals
This document summarizes a literature review on safety modules for vehicular driver assistance. It describes a system designed to provide driver assistance and safety through features like anti-collision detection and lane departure warning. The system uses an ultrasonic sensor and camera to monitor surrounding vehicles and lane boundaries. It is connected to a Raspberry Pi development board to process sensor data and display warnings. The document explains the working of ultrasonic sensors to detect distance between vehicles and discusses edge detection techniques for lane monitoring using the camera. It provides a block diagram and details the hardware components of the system aimed at enhancing safety while driving.
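The ultrasonic ranging these systems rely on reduces to a time-of-flight calculation; a minimal sketch, assuming an HC-SR04-style round-trip echo pulse:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def echo_time_to_distance(echo_time_s):
    """HC-SR04-style ranging: the echo pulse covers the round trip,
    so the distance is half the time-of-flight times the speed of sound."""
    return (echo_time_s * SPEED_OF_SOUND_M_S) / 2.0

# A 2.9 ms echo corresponds to roughly 0.5 m to the obstacle ahead.
print(f"{echo_time_to_distance(0.0029):.2f} m")
```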
Complex Weld Seam Detection Using Computer Vision Linked In - glenn_silvers
This document discusses a project to use computer vision and a Microsoft Kinect sensor to enable real-time gesture control of a welding robot. The project aims to detect and track a user's hand gestures to control robot movement, and to define the weld seam region of interest to allow for seam detection. The plan involves accessing Kinect data, detecting and tracking the hand in 3D space, recognizing gestures for robot movement commands, extracting color values from the hand for skin detection, and using the hand position to define the seam region of interest. The work so far has successfully defined the hand and fingers, tracked hand motion, and extracted the seam region. Further work is needed to finalize the gesture commands and integrate control of the robot.
This document describes an automated car parking system project developed by students at the University of Jordan. The objective of the project is to monitor parking lot availability using a wireless sensor network. Sensor nodes equipped with ultrasonic sensors are deployed in parking spaces and send data to a central sink node regarding space occupancy. This data is sent to a server then to a smartphone application, allowing users to check parking availability from their phones. The document discusses hardware choices for the sensor nodes, including sensors, microcontrollers and wireless communication modules. It also describes the network protocols and code used for node communication. Furthermore, it outlines the server software for storing and accessing the parking data, as well as the client smartphone application interface.
IRJET- Bemythirdeye- A Smart Electronic Blind Stick with Goggles - IRJET Journal
This document describes a smart electronic blind stick designed to help visually impaired people navigate independently and safely. The stick uses various sensors like ultrasonic sensors, IR sensors, and a PIR motion sensor to detect obstacles, motion, and staircases. It alerts the user with a buzzer when obstacles are detected. It also includes a location tracking module using GPS and GSM to send the user's location to contacts in emergencies. The stick is controlled with a microcontroller and aims to provide more accurate detection than traditional blind sticks by combining multiple sensor types. It allows visually impaired users to navigate with greater confidence, safety, and independence.
An Approach for Object and Scene Detection for Blind Peoples Using Vocal Vision - IJERA Editor
This system helps blind people navigate without the help of a third person, so a blind person can work independently. It is implemented on an Android device with object detection and scene detection; after detection, text-to-speech conversion delivers a message to the user through headphones connected to the device. The project helps blind people understand images, which are converted to sound with the help of a webcam. Images are captured in front of the blind person and processed by algorithms that enhance the image data. The hardware component has its own database, and the processed image is compared against this database; the result of processing and comparison is converted into speech signals, and the headphones guide the blind user.
IRJET - Real Time Muscle Fatigue Monitoring using IoT Cloud Computing - IRJET Journal
This document describes a real-time muscle fatigue monitoring system using IoT cloud computing. Surface electromyography is used to acquire electromyography signals from muscles during isotonic contraction using a sensor. The signals are preprocessed on a Wemos D1 mini board and sent to an IoT cloud for further processing. In the cloud, time-frequency analysis is performed to extract features like median frequency and mean frequency over time. A decrease in these frequencies indicates muscle fatigue. The results are displayed on a mobile app interface for users and healthcare professionals to monitor fatigue in real-time. The system aims to provide a low-cost, non-invasive way to monitor muscle fatigue using IoT technologies.
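The median-frequency feature described above can be estimated from a power spectral density; a rough sketch with synthetic data (window length and sampling rate are assumptions):

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Median frequency of an sEMG window: the frequency that splits the
    power spectrum into two halves of equal power. A downward drift over
    successive windows is the usual fatigue indicator."""
    freqs, psd = welch(emg, fs=fs, nperseg=min(256, len(emg)))
    cumulative = np.cumsum(psd)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
    return freqs[idx]

# Synthetic 1-second window sampled at 1 kHz
fs = 1000
t = np.arange(fs) / fs
window = np.sin(2 * np.pi * 80 * t) + 0.5 * np.random.randn(fs)
print(f"median frequency ~ {median_frequency(window, fs):.1f} Hz")
```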
The document summarizes a study on using Wi-Fi signals for indoor location fingerprinting. It discusses how fingerprinting involves two phases: a calibration phase where signal strength is recorded at calibration points, and a location estimation phase where current signal strength is compared to the fingerprint map. It evaluates the k-nearest neighbor algorithm using Euclidean, Manhattan, and Chebychev distances to estimate location. Tests of this approach involved collecting Wi-Fi signal data at calibration points in four rooms and a hall to generate a fingerprint map for location estimation. The accuracy of Euclidean and Manhattan distances was found to be better than Chebychev distance for this location fingerprinting method.
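A minimal sketch of the k-nearest-neighbour fingerprinting step with the three distance metrics compared in the study; the fingerprint map below (RSS values and coordinates) is invented for illustration.

```python
import numpy as np

def knn_locate(fingerprints, locations, observed_rss, k=3, metric="euclidean"):
    """Estimate position as the average of the k calibration points whose
    stored RSS vectors are closest to the observed vector."""
    diffs = fingerprints - observed_rss
    if metric == "euclidean":
        d = np.sqrt((diffs ** 2).sum(axis=1))
    elif metric == "manhattan":
        d = np.abs(diffs).sum(axis=1)
    elif metric == "chebyshev":
        d = np.abs(diffs).max(axis=1)
    else:
        raise ValueError(metric)
    nearest = np.argsort(d)[:k]
    return locations[nearest].mean(axis=0)

# Toy fingerprint map: 4 calibration points, 3 access points (RSS in dBm)
fingerprints = np.array([[-40, -70, -60], [-55, -50, -65],
                         [-65, -60, -45], [-70, -45, -55]], float)
locations = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], float)
print(knn_locate(fingerprints, locations, np.array([-50, -55, -62]), k=2))
```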
This document summarizes a student project to develop a reading system for blind people using optical character recognition and a braille glove. The system uses a webcam to capture text, an OCR software to recognize the text, and transmits the text to a braille glove using a microcontroller circuit board. The project was developed in two stages - using a laptop and webcam, and then modifying it to use a smartphone's camera and OCR software to make it more portable. The document provides details on the objectives, components, software, and development process of the assistive reading system.
The document describes the design of a ball tracking robot that uses image processing in MATLAB to detect and track a moving ball. A camera captures images of the ball which are processed to extract color and location information using techniques like background subtraction and centroid analysis. This information is then sent wirelessly to control motors on the robot and allow it to follow the moving ball.
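The background-subtraction and centroid steps can be sketched with OpenCV as follows; the threshold value and frame size are assumptions, and the wireless motor control is omitted.

```python
import cv2
import numpy as np

def ball_centroid(frame, background):
    """Locate the moving ball by differencing against a static background
    and taking the centroid of the changed region."""
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                       # nothing moved
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Toy example: blank background, one bright blob in the "current" frame
bg = np.zeros((240, 320, 3), np.uint8)
frame = bg.copy()
cv2.circle(frame, (200, 120), 15, (0, 0, 255), -1)
print(ball_centroid(frame, bg))           # approximately (200, 120)
```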
This document describes a medical hands-free system using gesture recognition to help surgeons keep track of surgical instruments and materials without direct contact. It uses a Leap Motion controller to detect hand movements and recognize customized gestures. Image moments are used to distinguish between gestures by calculating weighted averages of pixel distributions in captured images. The goal is to introduce hands-free control to medical settings where direct contact poses infection risks, helping surgeons prevent accidental retention of foreign objects in patients.
IRJET- Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye... - IRJET Journal
This document presents a method for recognizing gestures using ultrasound sensors and infrared array sensors. Two ultrasound sensor pairs are used to capture hand motion in vertical and horizontal directions. An infrared Grid-Eye sensor is used to trigger the ultrasound sensors when a hand gesture is detected. The sensors capture data on the distance and movement of the hand. This data is preprocessed and extracted into features representing the average and count of upward and downward motions. An artificial neural network with two hidden layers is trained on these features to classify gestures for two letters, achieving an accuracy of 83%. The proposed method aims to provide a contactless gesture recognition system without some of the disadvantages of vision-based techniques.
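A rough sketch of the classification stage, using a two-hidden-layer MLP on the four motion features the paper describes; the feature values and class labels below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each sample: [avg upward motion, count upward, avg downward, count downward]
# as extracted from the two ultrasound channels; values here are synthetic.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.8, 4, 0.2, 1], 0.1, (40, 4)),    # gesture class 0
               rng.normal([0.3, 2, 0.7, 3], 0.1, (40, 4))])   # gesture class 1
y = np.array([0] * 40 + [1] * 40)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```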
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ... - Kalle
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
The document discusses different types of motion capture systems including optical, non-optical, and facial motion capture systems. Optical systems use cameras and markers to calculate 3D positions. Non-optical systems include inertial systems using sensors, mechanical systems using exoskeletons, and magnetic systems tracking magnetic fields. Facial motion capture aims to record complex facial movements. Motion capture technology is used in entertainment, sports, medical applications, and robotics research.
Digital Pen for Handwritten Digit and Gesture Recognition Using Trajectory Re... - IOSR Journals
The document describes a digital pen system for handwritten digit and gesture recognition using a trajectory recognition algorithm. The system uses a tri-axial accelerometer, ARM processor, and Zigbee module in a pen-like device to capture acceleration signals from hand motions. The signals are transmitted wirelessly and a trajectory recognition algorithm processes the data through steps of acquisition, preprocessing, feature generation/selection, and extraction to recognize digits and gestures written in air. The system aims to allow for flexible use without limitations of range, environment, or surface that other methods impose. It provides a portable and generalized approach to human-computer interaction through writing and gestures.
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O... - IJERA Editor
This project presents a system based on inertial sensors and a gesture recognition algorithm for SMS or calling by elderly people. Users hold the device and make hand gestures in their preferred handheld style. Hand motions generate inertial signals, which are wirelessly transmitted to a computer for recognition using a DTW (dynamic time warping) algorithm. Zigbee is used at the transmitting end of the inertial device to send the sensor values and at the receiving end of the PC to receive them. The recognized gesture is sent to a microcontroller, which issues AT commands to a GSM module to select the SMS or calling option; the GSM module then performs the SMS or call. The gesture recognition system uses only a single 3-axis accelerometer, where the gestures are hand movements. The proposed DTW-based recognition algorithm includes the procedures of inertial signal acquisition, motion detection, template selection, and recognition. The letters 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'o', and 'v' are recognized by the system, which can be used for emergency calling or emergency SMS by elderly or blind people from home.
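The DTW matching step at the heart of the recognizer can be sketched as follows; this is a generic dynamic-time-warping implementation, not the paper's exact procedure, and the templates below are synthetic.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two acceleration sequences
    (each of shape (N, 3)); the stored template with the lowest cost gives
    the recognized gesture."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(query, templates):
    """Return the label of the template closest to the query under DTW."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

rng = np.random.default_rng(0)
templates = {"a": rng.normal(size=(30, 3)), "o": rng.normal(size=(25, 3))}
print(recognize(templates["a"] + 0.01, templates))   # -> "a"
```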
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm - ijcisjournal
This paper describes a digital pen based on an IMU sensor for gesture and handwritten digit trajectory recognition applications, enabling human-PC interaction. Handwriting recognition is mainly used for applications in the field of security and authentication. Using the embedded pen, the user can make a hand gesture or write a digit or an alphabetical character. The embedded pen contains an inertial sensor, a microcontroller, and a Zigbee wireless transmitter module for capturing handwriting and gesture trajectories. The proposed trajectory recognition algorithm consists of sensing signal acquisition, pre-processing, feature generation, feature extraction, and classification. The user's hand motion is measured by the sensor and the sensing information is wirelessly transmitted to the PC for recognition. The process first extracts time-domain and frequency-domain features from the pre-processed signal, then applies linear discriminant analysis to represent the features in a reduced dimension. The dimensionally reduced features are processed with two classifiers, Support Vector Machine (SVM) and k-Nearest Neighbour (kNN); with the SVM classifier the recognition rate is 98.5%, and with the kNN classifier it is 95.5%.
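A minimal sketch of the LDA-plus-classifier stage with scikit-learn, under the assumption of 24 time/frequency-domain features and 10 classes; the data are random placeholders, so the printed scores will not reproduce the reported 98.5% and 95.5%.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 200 strokes, 24 extracted features, 10 gesture classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))
y = rng.integers(0, 10, size=200)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    # LDA reduces to at most (n_classes - 1) = 9 dimensions before classification.
    pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=9), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```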
Media Control Using Hand Gesture Moments - IRJET Journal
This document discusses a system for controlling media players using hand gestures. The system uses a webcam to capture images of hand gestures. It then uses neural networks trained on large gesture datasets to recognize the gestures. The recognized gestures can control functions of a media player like increasing/decreasing volume, playing, pausing, rewinding and forwarding. The system achieves recognition rates of 90-95% for different gestures. It provides a more natural user interface than keyboards and mice by allowing control through hand movements.
This document describes a MEMS accelerometer-based system for hand gesture recognition. The system uses a portable device containing a triaxial accelerometer, microcontroller, and wireless transmission module. Users can write digits and make gestures, and the accelerometer measures the motions. A trajectory recognition algorithm processes the acceleration signals through preprocessing, feature generation/selection, and classification. The algorithm aims to accurately recognize gestures with high recognition rates. An experiment tests the algorithm on handwritten digit recognition with promising results.
This document summarizes research on developing a portable device using a MEMS accelerometer for hand gesture recognition. The device consists of a triaxial accelerometer, microcontroller, and wireless transmission module. It measures acceleration signals from hand motions and transmits them to a computer for trajectory recognition using a recognition algorithm. The algorithm processes the acceleration data through steps like filtering, feature extraction, and classification to identify gestures and enable control of electronic devices through hand motions. The research aims to create an accurate, low-cost gesture recognition system using a single accelerometer without additional sensors.
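The filtering step of such a pipeline might be sketched as below; the rest-period length and moving-average window are assumptions, and feature extraction and classification are omitted.

```python
import numpy as np

def preprocess(raw_acc, window=8):
    """Typical first stage of a trajectory recognition pipeline: remove the
    static offset estimated while the pen is at rest, then smooth each axis
    with a moving-average filter to suppress tremor and sensor noise."""
    offset = raw_acc[:50].mean(axis=0)          # assumed rest period at the start
    centered = raw_acc - offset
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(centered[:, axis], kernel, mode="same") for axis in range(3)])
    return smoothed

rng = np.random.default_rng(0)
raw = rng.normal(size=(400, 3)) + np.array([0.0, 0.0, 9.8])   # z-axis carries gravity
print(preprocess(raw).shape)                                  # (400, 3)
```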
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 Board - Waqas Tariq
The main aim of this paper is to build a system capable of detecting and recognizing a hand gesture in an image captured by a camera. The system is built on Altera’s FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented for this purpose: the image is smoothed to ease the subsequent steps in translating the hand sign signal, the algorithm translates the numerical hand sign, and the result is displayed on the seven-segment display. Altera’s Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. With SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board, and under the Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Being able to recognize hand sign signals from images can help humans control a robot and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
1) The document presents a real-time static hand gesture recognition system for the Devanagari number system using two feature extraction techniques: Discrete Cosine Transform (DCT) and Edge Oriented Histogram (EOH).
2) The system captures an image using a webcam, performs pre-processing, extracts the region of interest, then extracts features using DCT or EOH before matching against a training database to recognize the gesture (a DCT sketch follows this list).
3) An experiment tested 20 images and found DCT achieved a higher recognition accuracy of 18 gestures compared to 15 for EOH.
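A minimal sketch of the DCT feature-extraction step on a grayscale region of interest; the ROI size, the 8x8 low-frequency block, and the number of retained coefficients are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def dct_features(roi, keep=32):
    """2-D DCT of the hand region of interest; keeping only the low-frequency
    coefficients (here a simple top-left block) gives a compact feature
    vector for matching against the training database."""
    roi = roi.astype(float)
    coeffs = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    block = coeffs[:8, :8].flatten()            # low-frequency block
    return block[:keep]

# Toy 64x64 grayscale ROI
roi = np.random.default_rng(0).integers(0, 256, (64, 64))
print(dct_features(roi).shape)                  # (32,)
```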
The International Journal of Engineering and Science (IJES) - theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Technology today is applied to accomplish tasks of varied complexity in almost all walks of life, and society as a whole depends intimately on science and technology. Technology has played a very significant role in improving the quality of life, one way being the automation of several tasks using complex logic to simplify work. Gesture recognition has been a research area that has received much attention from research communities such as human-computer interaction and image processing. The keyboard and mouse are currently the main interfaces between man and computer; in areas where 3D information is required, such as computer games, robotics, and design, other mechanical devices such as roller-balls, joysticks, and data gloves are used. The main motive of this project is to make a robot recognize human gestures and thereby bridge the gap between robot and human. Human gesture enhances human-robot interaction by making it independent of input devices. A robotic system can be controlled manually or may be autonomous, and a robotic hand can be controlled remotely by hand gesture. Research in this field has already sensed hand movements to control a robotic arm.
Gesture recognition technology allows for control of devices through hand and body motions. It works by using cameras, sensors and algorithms to interpret gestures and movements. Key applications include controlling smart TVs with hand motions, sign language translation, and assisting disabled individuals. Challenges include variations between individuals, reading motions accurately due to lighting and noise, and lack of standardized gesture languages.
Gestures Based Sign Interpretation System using Hand Glove - IRJET Journal
This document describes a glove-based sign language interpretation system that uses flex sensors and an Arduino Uno microcontroller. The system is intended to help those with speech impairments communicate by translating sign language gestures into text and speech output. The glove contains flex sensors that detect finger and hand movements, sending that data to the Arduino which interprets the gestures using machine learning algorithms and outputs the translation. The system aims to reduce communication barriers for the deaf and hard of hearing.
IRJET - A Smart Assistant for Aiding Dumb People - IRJET Journal
This document presents a proposed smart assistant system to help mute or vocally impaired people communicate with others using hand gestures. The system uses MEMS sensors in a glove to detect hand gestures, which are matched to pre-stored commands using an Arduino microcontroller. The relevant text is displayed on an LCD screen and audio is played back of the message in the local language as determined by a GPS module. An emergency notification can also be sent via GSM to a guardian if an emergency gesture is detected. The system is intended to help the mute community communicate more easily with others and ensure their safety in emergencies.
DESIGN AND DEVELOPMENT OF WRIST-TILT BASED PC CURSOR CONTROL USING ACCELEROMETER - IJCSEA Journal
Human-computer interfacing apparatus is a key part of the modern electronics era, and motion recognition can readily be introduced into present-day computers, for example to play games. In this work a simple inertial navigation sensor, an accelerometer, is used to obtain a dynamic or static acceleration profile of movement in order to move the mouse cursor or even rotate a 3-D object. The paper presents a human-computer interface system that acts as an enhanced version of one of the most common interfacing devices, the computer mouse. It offers an alternative way to interact with a computer for those who do not want to, or are not able to, use a conventional HCI (human-computer interface); this is achieved by mounting an accelerometer on the wrist or elsewhere on the body. The accelerometer detects position in the x and y directions caused by movement of the device mounted on the wrist, referenced to the acceleration of gravity (1 g = 9.8 m/s²). The accelerometer is connected to a PIC16F877A for analog-to-digital conversion; the PIC16F877A is connected to an LCD that displays the (x, y) coordinates of the accelerometer's movement, and is further connected to a ZigBee transmitter that sends the wireless signal to a ZigBee receiver. The ZigBee receiver passes the signal to a PC through MAX232 serial communication, where an application controls the cursor in response to accelerometer movement.
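On the PC side, mapping the received tilt readings to a cursor displacement can be sketched as follows; the sensitivity and dead-zone values are assumptions, and the serial reception and the actual cursor API are omitted.

```python
def tilt_to_cursor(ax, ay, sensitivity=25.0, dead_zone=0.05):
    """Map wrist tilt (accelerometer x/y readings in g, gravity-referenced)
    to a cursor displacement in pixels. A small dead zone keeps the cursor
    still while the hand is level."""
    def axis(value):
        if abs(value) < dead_zone:
            return 0
        return int(value * sensitivity)
    return axis(ax), axis(ay)

print(tilt_to_cursor(0.30, -0.12))   # tilt right and slightly up -> (7, -3)
```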
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Gesture control algorithm for personal computerseSAT Journals
Abstract As our dependency on computers is increasing every day, these intelligent machines are making inroads in our daily life and society. This requires more friendly methods for interaction between humans and computers (HCI) than the conventionally used interaction devices (mouse & keyboard) because they are unnatural and cumbersome to use at times (by disabled people). Gesture Recognition can be useful under such conditions and provides intuitive interaction. Gestures are natural and intuitive means of communication and mostly occur from hands or face of human beings. This work introduces a hand gesture recognition system to recognize real time gestures of the user (finger movements) in unstrained environments. This is an effort to adapt computers to our natural means of communication: Speech and Hand Movements. All the work has been done using Matlab 2011b and in a real time environment which provides a robust technique for feature extraction and speech processing. A USB 2.0 camera continuously tracks the movement of user’s finger which is covered with red marker by filtering out green and blue colors from the RGB color space. Java based commands are used to implement the mouse movement through moving finger and GUI keyboard. Then a microphone is used to make use of the speech processing and instruct the system to click on a particular icon or folder throughout the screen of the system. So it is possible to take control of the whole computer system. Experimental results show that proposed method has high accuracy and outperforms Sub-gesture Modeling based methods [5] Keywords: Hand Gesture Recognition (HGR), Human-Computer Interaction (HCI), Intuitive Interaction
Controlling Computer using Hand GesturesIRJET Journal
This document describes a research project on controlling a computer using hand gestures. The researchers created a real-time gesture recognition system using convolutional neural networks (CNNs). They developed a dataset of 3000 training images of 10 different hand gestures for tasks like opening apps. A CNN model was trained to detect hands in images and recognize gestures. The model achieved 80.4% validation accuracy and was able to successfully perform operations like opening WhatsApp, PowerPoint and other apps based on detected gestures in real-time. The system provides a cost-effective and contactless way of interacting with computers using hand gestures only.
1. INTRODUCTION
There are a number of technologies for human-computer or human-machine interaction, such as speech recognition, vision-based gesture recognition and tablet-type devices. These devices allow humans and machines to communicate more freely, without traditional keyboards, mice or remote controls. Gesture or trajectory recognition based on inertial devices is a direct way of interacting with a computer: the motion of the limbs is driven by muscular force, and inertial devices measure the outcome of that force, namely acceleration and angular velocity, directly, so users can operate them at any time without distraction.
A significant advantage of inertial sensors for general motion sensing is that they can be operated without any external reference and with few limitations on working conditions.
However, motion trajectory recognition is relatively complicated because different users have
different speeds and styles to generate various motion trajectories. Thus, many researchers
have tried to narrow down the problem domain for increasing the accuracy of handwriting
recognition systems [1].
The term gesture recognition has been used to refer more narrowly to non-text-input
handwriting symbols. Gesture recognition enables humans to communicate with the machine
(HMI) and interact naturally without any mechanical devices. Using the concept of
gesture recognition, it is possible to point a finger at the computer screen so that the cursor
will move accordingly. This could potentially make conventional input devices such as
mouse, keyboards and even touch-screens redundant. Just as speech recognition can
transcribe speech to text, certain types of gesture recognition software can transcribe the
symbols represented through sign language into text. By using proper sensors (e.g., accelerometers) worn on the body of a patient and by reading the values from those sensors,
robots can assist in patient rehabilitation. The best example can be stroke rehabilitation.
2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
In the present system, a trajectory recognition algorithm was used for digit and gesture recognition applications. That work developed a pen-type portable device and a trajectory recognition algorithm. The pen-type portable device consists of a triaxial accelerometer, a microcontroller, and an RF wireless transmission module. The acceleration signals measured by the tri-axial accelerometer are transmitted to a computer via the wireless module. Users can utilize this digital pen to write digits and make hand gestures at normal speed, and the measured acceleration signals of these motions are recognized by the trajectory recognition algorithm.
The recognition procedure is composed of acceleration acquisition, signal pre-
processing, feature generation, feature selection, and feature extraction. The acceleration
signals of hand motions are measured by the pen-type portable device. The signal pre-
processing procedure consists of calibration, a moving average filter, a high-pass filter, and
normalization. First, the accelerations are calibrated to remove drift errors and offsets from
the raw signals. These two filters are applied to remove high frequency noise and
gravitational acceleration from the raw data, respectively. The features of the pre-processed
acceleration signals of each axis include mean, correlation among axes, inter-quartile range
(IQR), mean absolute deviation (MAD), root mean square (RMS), VAR, standard deviation
(STD), and energy. Before classifying the hand motion trajectories, we perform the
procedures of feature selection and extraction methods.
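To make this feature-generation step concrete, the sketch below computes a few of the listed per-axis features (mean, standard deviation, RMS and energy) over one segmented motion interval. It is only an outline under the assumption that the pre-processed samples are already available as a float array; the function and variable names are illustrative and not taken from the original implementation.

#include <math.h>
#include <stddef.h>

/* Per-axis time-domain features of one segmented motion interval. */
typedef struct {
    float mean;
    float std;     /* standard deviation        */
    float rms;     /* root mean square          */
    float energy;  /* sum of squared samples    */
} axis_features_t;

/* Compute mean, STD, RMS and energy of n pre-processed samples. */
static axis_features_t compute_axis_features(const float *a, size_t n)
{
    axis_features_t f = {0.0f, 0.0f, 0.0f, 0.0f};
    float sum = 0.0f, sum_sq = 0.0f;

    for (size_t i = 0; i < n; i++) {
        sum    += a[i];
        sum_sq += a[i] * a[i];
    }
    f.mean   = sum / (float)n;
    f.energy = sum_sq;
    f.rms    = sqrtf(sum_sq / (float)n);
    /* variance = E[x^2] - mean^2; clamp at 0 to absorb rounding error */
    f.std    = sqrtf(fmaxf(sum_sq / (float)n - f.mean * f.mean, 0.0f));
    return f;
}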
In general, feature selection aims at selecting a subset of size m from an original set of d features (d > m). The criterion of kernel-based class separability (KBCS) with best individual N (BIN) is used to select significant features from the original set (i.e., to pick the important features out of the d available), and linear discriminant analysis (LDA) is used to reduce the dimension of the feature space while retaining good recognition performance (i.e., to reduce m). The objective of the feature selection and feature extraction methods is not only to ease the computational load but also to increase the classification accuracy. The reduced features are used as the inputs of the classifier. In this project, we
adopted a probabilistic neural network (PNN) as the classifier for handwritten digit and hand
gesture recognition. The contributions of this paper include the following:
1) the development of a portable digital pen with a trajectory recognition algorithm, i.e., with the digital pen, users can deliver diverse commands by hand motions to control electronic devices anywhere without space limitations, and
2) an effective trajectory recognition algorithm, i.e., the proposed algorithm can
efficiently select significant features from the time and frequency domains of acceleration
signals and project the feature space into a smaller feature dimension for motion recognition
with high recognition accuracy.
Fig. 2.1 Schematic diagram of the existing system: tri-axial accelerometer, A/D converter, processor and RF transmitter on the pen side; RF receiver, USB driver and PC on the host side
This trajectory recognition algorithm can be summarized in the following steps.
Step 1) Acquire the raw acceleration signals from the pen-type accelerometer module.
Step 2) Filter out the high-frequency noise of the raw accelerations with the moving average filter and then remove gravity from the filtered accelerations with a high-pass filter. Finally, normalize each segmented motion interval to an equal length via interpolation.
Step 3) Generate the time and frequency domain features from the pre-processed acceleration of each axis: mean, STD, VAR, IQR, MAD, RMS and energy for x, y and z, plus the inter-axis correlations corr_xy, corr_yz and corr_xz.
Step 4) Select significant features with the kernel-based class separability (KBCS) method.
Step 5) Reduce the dimension of the selected features by linear discriminant analysis (LDA).
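The filtering and normalization described in Step 2 can be sketched as follows. This is only an illustration under the assumption that the raw samples of one motion segment are held in memory; the window length and filter coefficient are illustrative values, not those used in the referenced paper.

#include <stddef.h>

#define MA_WINDOW 8        /* moving-average window length (illustrative) */
#define HP_ALPHA  0.98f    /* high-pass coefficient (illustrative)        */

/* Step 2a: moving-average filter to suppress high-frequency noise. */
static void moving_average(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float acc = 0.0f;
        size_t start = (i + 1 >= MA_WINDOW) ? i + 1 - MA_WINDOW : 0;
        size_t len = i - start + 1;
        for (size_t j = start; j <= i; j++)
            acc += in[j];
        out[i] = acc / (float)len;
    }
}

/* Step 2b: first-order high-pass filter to remove the gravity component. */
static void high_pass(const float *in, float *out, size_t n)
{
    out[0] = 0.0f;
    for (size_t i = 1; i < n; i++)
        out[i] = HP_ALPHA * (out[i - 1] + in[i] - in[i - 1]);
}

/* Step 2c: linearly interpolate a segment of n samples onto a fixed length
 * of m samples so that every motion interval has the same size (n, m >= 2). */
static void normalize_length(const float *in, size_t n, float *out, size_t m)
{
    for (size_t i = 0; i < m; i++) {
        float pos  = (float)i * (float)(n - 1) / (float)(m - 1);
        size_t k   = (size_t)pos;
        float frac = pos - (float)k;
        out[i] = (k + 1 < n) ? in[k] + frac * (in[k + 1] - in[k]) : in[n - 1];
    }
}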
2.2 PROPOSED SYSTEM
The proposed system is a triaxial accelerometer-based digital pen built around a microcontroller. The AVR ATmega32 is the microcontroller used in this project; it is programmed in embedded C and simulated in AVR Studio 5. The input device to this microcontroller is a three-axis accelerometer, which measures the acceleration signals along the x, y and z axes. The system is capable of identifying digits written in the air and displaying them on the LCD. The digital pen can also control the operation of other devices: a fan and a light can be turned on and off with the aid of gestures made with the pen.
The proposed system can be used for human-machine interaction and finds application in many fields. A few are listed below:
• Sign language recognition: Just as speech recognition can transcribe speech to text,
certain types of gesture recognition software can transcribe the symbols represented
through sign language into text.
• For socially assistive robotics: By using proper sensors (accelerometers and gyros)
worn on the body of a patient and by reading the values from those sensors, robots
can assist in patient rehabilitation. The best example can be stroke rehabilitation.
• Directional indication through pointing: Pointing has a very specific purpose in
robotics. The use of gesture recognition to determine where a person is pointing is
useful for identifying the context of statements or instructions. This application is of
particular interest in the field of robotics.
• Alternative computer interfaces: Foregoing the traditional keyboard and mouse setup
to interact with a computer, strong gesture recognition could allow users to
accomplish frequent or common tasks using hand gestures to a camera.
• Immersive game technology: Gestures can be used to control interactions within
video games to try and make the game player's experience more interactive or
immersive.
• Virtual controllers: For systems where the act of finding or acquiring a physical
controller could require too much time, gestures can be used as an alternative control
mechanism. Controlling secondary devices in a car, or controlling a television set are
examples of such usage.
• Affective computing: In affective computing, gesture recognition is used in the
process of identifying emotional expression through computer systems.
• Remote control: Through the use of gesture recognition, "remote control with the
wave of a hand" of various devices is possible. The signal must not only indicate the
desired response, but also which device to be controlled.
3. PROJECT DESCRIPTION
This project presents a digital-pen-based system that identifies the trajectory of written digits and gestures. As an HMI application of this device, various gestures are used to turn other devices on and off.
Fig. 3.1 Block diagram of the proposed system: three-axis accelerometer module (x, y and z outputs) and power supply feeding the microcontroller, which drives the LCD display, PC, fan and light
The AVR ATmega32 microcontroller is the heart of this system. It is programmed in embedded C and simulated in AVR Studio 5. Its input device is the three-axis accelerometer, which measures the acceleration signals along the x, y and z axes; the acceleration values are displayed on the LCD.
The working of the system can be divided into two parts: the digit recognition part and the device control part. The digital pen can identify the
digits from zero to seven and display them on a PC. The trajectories of the digits are shown in Table 3.1.
The device control part gives the pen a further application as a human-machine interface. The digital pen can be used to control other machines such as a fan and a light. Four gestures were used: up, down, right and left; Table 3.2 shows the gestures. These gestures turn the light and the fan on and off, respectively. More devices can be incorporated in the future. The system can thus help disabled or physically challenged people communicate easily with their surroundings, and it provides a tool for an intelligent environment.
Table 3.1 Trajectory of digits (0 to 7)
Table 3.2 Gestures: up, down, right, left
The programming involved several stages:
1. Display unit interface. The LCD is interfaced to PORTD of the microcontroller, so PORTD is configured as an output port.
2. Interface with the input module, the accelerometer, and display of its x-axis, y-axis and z-axis readings on the LCD. The accelerometer is attached to PORTA, so PORTA is set as an input port whose pins feed the on-chip ADC. The data from the accelerometer
is analog in nature. To display it on an LCD screen it needs to be converted into
digital format.
3. Trial-and-error determination of the axis readings corresponding to the various digits.
4. The accelerometer is always subject to gravity and produces readings whenever it is displaced, which caused errors in digit detection. A push switch was therefore added; it is connected to a PORTD pin configured as an input, and the device carries out digit recognition only while the switch is pressed.
5. The control of the light and fan is carried out without the switch. These devices are interfaced to PORTB.
6. The identified digits are displayed on a PC. Since the microcontroller uses CMOS/TTL logic levels, a MAX232 module is placed between the microcontroller and the PC for RS232 serial communication with the digital pen module.
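Putting these stages together, a minimal sketch of the firmware loop is shown below. The pin assignments, ADC channels and threshold values are illustrative assumptions (the real thresholds come from the trial-and-error calibration in stage 3), and the sketch is not the actual project code.

#define F_CPU 8000000UL            /* assumed clock frequency */
#include <avr/io.h>
#include <util/delay.h>

#define LIGHT_PIN PB0              /* light relay on PORTB (assumed pin)   */
#define FAN_PIN   PB1              /* fan relay on PORTB (assumed pin)     */
#define PEN_SW    PD7              /* push switch for digit mode (assumed) */

/* Read one ADC channel (ADXL335 axes wired to PORTA/ADC0..ADC2). */
static uint16_t adc_read(uint8_t channel)
{
    ADMUX  = (1 << REFS0) | (channel & 0x07);   /* AVcc reference      */
    ADCSRA |= (1 << ADSC);                      /* start conversion    */
    while (ADCSRA & (1 << ADSC))                /* wait for completion */
        ;
    return ADC;
}

int main(void)
{
    DDRB   = (1 << LIGHT_PIN) | (1 << FAN_PIN);  /* outputs: light, fan */
    DDRD  &= ~(1 << PEN_SW);                     /* switch pin as input */
    PORTD |=  (1 << PEN_SW);                     /* enable its pull-up  */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1); /* ADC on, /64  */

    for (;;) {
        uint16_t x = adc_read(0);                /* ADC0 = x axis */
        uint16_t y = adc_read(1);                /* ADC1 = y axis */

        /* Gesture thresholds below are placeholders for calibrated values. */
        if (y > 700)       PORTB |=  (1 << LIGHT_PIN);   /* "up"    */
        else if (y < 300)  PORTB &= ~(1 << LIGHT_PIN);   /* "down"  */
        if (x > 700)       PORTB |=  (1 << FAN_PIN);     /* "right" */
        else if (x < 300)  PORTB &= ~(1 << FAN_PIN);     /* "left"  */

        if (!(PIND & (1 << PEN_SW))) {
            /* Switch pressed: run digit recognition on the sampled axes
             * (matching against the Table 3.1 trajectories) and send the
             * result to the LCD and the PC -- omitted in this sketch.   */
        }
        _delay_ms(50);
    }
}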
4. SYSTEM SPECIFICATIONS
4.1 HARDWARE REQUIREMENTS
The hardware used includes a microcontroller, accelerometer module, power supply,
display unit and output devices (fan, light).
4.1.1 MICROCONTROLLER
A microcontroller can be described as a single-chip computer that includes a number of peripherals such as RAM, EEPROM and timers required to perform some predefined task. There are several popular families of microcontrollers, used in different applications according to their capability and suitability for the desired task; the most common are the 8051, AVR and PIC families. An Atmel® AVR® 8-bit microcontroller is used in this project.
AVR was developed in the year 1996 by Atmel Corporation. The architecture of
AVR was developed by Alf-Egil Bogen and Vegard Wollan. AVR derives its name from its
developers and stands for Alf-Egil Bogen Vegard Wollan RISC microcontroller, also known
as Advanced Virtual RISC. The AVR is a modified Harvard architecture 8-bit RISC single
chip microcontroller. The AVR was one of the first microcontroller families to use on-chip
flash memory for program storage, as opposed to one-time programmable ROM, EPROM, or
EEPROM used by other microcontrollers at the time. AVR microcontrollers are available in
three categories:
1. TinyAVR – Less memory, small size, suitable only for simpler applications
2. MegaAVR – These are the most popular ones having good amount of memory (up to
256 KB), higher number of inbuilt peripherals and suitable for moderate to complex
applications.
3. XmegaAVR – Used commercially for complex applications, which require large
program memory and high speed.
The Atmel® AVR® ATmega32 is a low-power CMOS 8-bit microcontroller based on a Reduced Instruction Set Computer (RISC) architecture, in which the instructions are not only fewer in number but also simpler and faster to execute. By executing powerful instructions in a single clock cycle, the ATmega32 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing speed.
It has a rich instruction set with 32 general purpose working registers. All the 32
registers are directly connected to the Arithmetic Logic Unit (ALU), allowing two
independent registers to be accessed in one single instruction executed in one clock cycle.
The resulting architecture is more code efficient while achieving throughputs up to ten times
faster than conventional CISC microcontrollers.
The ATmega32 provides the following features: 32 Kbytes of In-System Programmable Flash program memory with Read-While-Write capabilities, 1024 bytes of EEPROM, 2 Kbytes of SRAM, 32 general purpose I/O lines, 32 general purpose working
registers, a JTAG interface for Boundary scan, On-chip Debugging support and
programming, three flexible Timer/Counters with compare modes, Internal and External
Interrupts, a serial programmable USART, a byte oriented Two-wire Serial Interface, an 8-
channel, 10-bit ADC with optional differential input stage with programmable gain (TQFP
package only), a programmable Watchdog Timer with Internal Oscillator, an SPI serial port,
and six software selectable power saving modes. The Idle mode stops the CPU while
allowing the USART, Two-wire interface, A/D Converter, SRAM, Timer or Counters, SPI
port, and interrupt system to continue functioning.
The Power-down mode saves the register contents but freezes the Oscillator,
disabling all other chip functions until the next External Interrupt or Hardware Reset. In
Power-save mode, the Asynchronous Timer continues to run, allowing the user to maintain a
timer base while the rest of the device is sleeping. The ADC Noise Reduction mode stops the
CPU and all I/O modules except Asynchronous Timer and ADC, to minimize switching noise
during ADC conversions. In Standby mode, the crystal/resonator Oscillator is running while
the rest of the device is sleeping. This allows very fast start-up combined with low-power
consumption. In Extended Standby mode, both the main Oscillator and the Asynchronous
Timer continue to run.
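For example, with avr-libc the firmware could drop into Idle mode between samples roughly as follows. This is a sketch only; the project code itself may not use sleep modes at all.

#include <avr/sleep.h>
#include <avr/interrupt.h>

static void idle_until_interrupt(void)
{
    set_sleep_mode(SLEEP_MODE_IDLE);  /* CPU stops; USART, ADC and timers keep running */
    sei();                            /* interrupts must be enabled so the MCU can wake */
    sleep_enable();
    sleep_cpu();                      /* sleep here until an interrupt fires */
    sleep_disable();
}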
The device is manufactured using Atmel's high-density non-volatile memory
technology. The on-chip ISP Flash allows the program memory to be reprogrammed in-
system through an SPI serial interface, by a conventional non-volatile memory programmer,
or by an On-chip Boot program running on the AVR core. The boot program can use any
interface to download the application program in the Application Flash memory. Software in
the Boot Flash section will continue to run while the Application Flash section is updated,
providing true Read-While-Write operation. By combining an 8-bit RISC CPU with In-
System Self-Programmable Flash on a monolithic chip, the Atmel ATmega32 is a powerful
microcontroller that provides a highly-flexible and cost-effective solution to many embedded
control applications.
The Atmel AVR ATmega32 is supported with a full suite of program and system
development tools including: C compilers, macro assemblers, program debugger/simulators,
in-circuit emulators, and evaluation kits. It has four ports- PORTA, PORTB, PORTC and
PORTD. PORTA serves as the analog inputs to the A/D Converter. It also serves as an 8-bit
bi-directional I/O port, if the A/D Converter is not used. Port pins can provide internal pull-
up resistors. When pins PA0 to PA7 are used as inputs and are externally pulled low, they
will source current if the internal pull-up resistors are activated. The PORTA pins are tri-
stated when a reset condition becomes active, even if the clock is not running. PORTB is an
8-bit bi-directional I/O port with internal pull-up resistors. The PORTB output buffers have
symmetrical drive characteristics with both high sink and source capability. As inputs,
PORTB pins that are externally pulled low will source current if the pull-up resistors are
activated. PORTC is an 8-bit bi-directional I/O port with internal pull-up resistors.
The PORTC output buffers have symmetrical drive characteristics with both high
sink and source capability. As inputs, PORTC pins that are externally pulled low will source
current if the pull-up resistors are activated. The PORTC pins are tri-stated when a reset
condition becomes active, even if the clock is not running. If the JTAG interface is enabled,
the pull-up resistors on pins PC5 (TDI), PC3 (TMS) and PC2 (TCK) will be activated even if
a reset occurs. PORTD is an 8-bit bi-directional I/O port with internal pull-up resistors
(selected for each bit). The PORTD output buffers have symmetrical drive characteristics
with both high sink and source capability. As inputs, PORTD pins that are externally pulled
low will source current if the pull-up resistors are activated. The PORTD pins are tri-stated
when a reset condition becomes active, even if the clock is not running.
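A minimal illustration of configuring a port pin as an input with its internal pull-up enabled is given below; register usage is as on the ATmega32, and the pin choice is arbitrary.

#include <avr/io.h>

void config_pb2_input_pullup(void)
{
    DDRB  &= ~(1 << PB2);   /* PB2 as input                          */
    PORTB |=  (1 << PB2);   /* writing 1 to PORTB enables its pull-up */
    /* An externally grounded PB2 will now source current
       through the internal pull-up resistor, as described above. */
}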
It provides several different interrupt sources. These interrupts and the separate reset
vector each have a separate program vector in the program memory space. All interrupts are
assigned individual enable bits which must be written logic one together with the Global
Interrupt Enable bit in the Status Register in order to enable the interrupt. The External
Interrupts are triggered by the INT0, INT1, and INT2 pins. Sleep modes enable the
application to shut down unused modules in the MCU, thereby saving power. The AVR
provides various sleep modes allowing the user to tailor the power consumption to the
application's requirements. Reliability Qualification results show that the projected data
retention failure rate is much less than 1 PPM over 20 years at 85°C or 100 years at 25°C.
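On the ATmega32, enabling an external interrupt therefore means setting its individual enable bit in GICR together with the global interrupt enable bit; a small sketch follows, in which the trigger condition and ISR body are illustrative.

#include <avr/io.h>
#include <avr/interrupt.h>

ISR(INT0_vect)
{
    /* handle the external event on the INT0 pin here */
}

void enable_int0_falling_edge(void)
{
    MCUCR |= (1 << ISC01);   /* ISC01:ISC00 = 10 -> falling edge on INT0 */
    MCUCR &= ~(1 << ISC00);
    GICR  |= (1 << INT0);    /* individual enable bit for INT0           */
    sei();                   /* global interrupt enable (I-bit in SREG)  */
}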
4.1.2 ACCELEROMETER MODULE
The accelerometer is a device that measures the force of acceleration, whether caused by gravity or by movement; from its output, the motion of the object it is attached to can be inferred. Accelerometers are among the most frequently used physical sensors for detecting and measuring motion, with applications ranging from measurement and control to inertial navigation. MEMS implementations of accelerometers have found a large commercial market in automotive airbag deployment systems. The basic configuration of an accelerometer is the same for all of these applications and is shown in Figure 4.1. Three items are the basic components of an accelerometer:
• Inertial mass
• Suspension
• Sensing element
The suspension supporting the inertial mass deflects under acceleration, in accordance with d'Alembert's principle. A sensing element transduces the deflection of the suspension into an electrical signal. The transduction can be accomplished by a number of means, but the most common use piezoresistive, piezoelectric, or capacitive sensing [3].
Fig.4.1 Accelerometer components [3]
The ADXL335 accelerometer module is used in this project. It is a small, thin, low-power, complete 3-axis accelerometer with signal-conditioned voltage outputs. The product measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration. The user selects the bandwidth of the accelerometer using the CX, CY, and CZ capacitors at the XOUT, YOUT, and ZOUT pins. Bandwidths can be selected to suit the application, with a range of 0.5 Hz to 1600 Hz for the X and Y axes, and a range of 0.5 Hz to 550 Hz for the Z axis. The ADXL335 is available in a small, low-profile, 4 mm × 4 mm × 1.45 mm, 16-lead, plastic lead frame chip scale package.
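If the ADXL335's internal 32 kΩ output resistor is taken as given (a datasheet value that should be double-checked), each external capacitor forms a simple RC low-pass filter with it, so the bandwidth follows approximately as

$f_{-3\,\mathrm{dB}} = \dfrac{1}{2\pi\,(32\,\mathrm{k\Omega})\,C_{(x,y,z)}} \approx \dfrac{5\,\mu\mathrm{F}}{C_{(x,y,z)}}\ \mathrm{Hz}.$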
It contains a polysilicon surface micro-machined sensor and signal conditioning
circuitry to implement open-loop acceleration measurement architecture. The output signals
are analog voltages that are proportional to acceleration. The accelerometer can measure the
static acceleration of gravity in tilt-sensing applications as well as dynamic acceleration
resulting from motion, shock, or vibration. The sensor is a polysilicon surface-
micromachined structure built on top of a silicon wafer. Polysilicon springs suspend the
structure over the surface of the wafer and provide a resistance against acceleration forces.
Deflection of the structure is measured using a differential capacitor that consists of
independent fixed plates and plates attached to the moving mass. The fixed plates are driven
by 180° out-of-phase square waves. Acceleration deflects the moving mass and unbalances
the differential capacitor resulting in a sensor output whose amplitude is proportional to
acceleration. Phase-sensitive demodulation techniques are then used to determine the
magnitude and direction of the acceleration.
Rather than using additional temperature compensation circuitry, innovative design
techniques ensure that high performance is built in to the ADXL335. As a result, there is no
quantization error or non-monotonic behavior, and temperature hysteresis is very low
(typically less than 3 mg over the −25°C to +70°C temperature range). The ADXL335 uses a
single structure for sensing the X, Y, and Z axes. As a result, the three axes' sense directions
are highly orthogonal and have little cross-axis sensitivity. Mechanical misalignment of the
sensor die to the package is the chief source of cross-axis sensitivity.
Capacitance measurement is one of the most versatile methods of position
measurement. The parallel-plate capacitor can vary either with vertical motion of a movable
plate, modifying the gap, or by transverse motion of one plate relative to another, modifying
the effective area of the capacitor. Figure 4.2 shows the arrangement of a parallel plate
capacitor.
Fig. 4.2 Parallel plate capacitor [2]
Motion of the moveable component in the indicated direction increases one
capacitance and decreases the other. A variant of the parallel-plate differential capacitor
would have the middle and lower plate fixed and only the upper plate moveable. In this
configuration, motion of the upper plate modifies one capacitor while the other remains
constant.
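As a rough sketch of this differential arrangement (with ε the permittivity, A the overlap area, d the nominal gap and x the displacement of the moving plate; the symbols are illustrative rather than taken from [2]):

$C_1 = \dfrac{\varepsilon A}{d - x},\qquad C_2 = \dfrac{\varepsilon A}{d + x},\qquad C_1 - C_2 = \dfrac{2\varepsilon A\,x}{d^{2}-x^{2}} \approx \dfrac{2\varepsilon A}{d^{2}}\,x \quad (x \ll d),$

so the capacitance difference is, to first order, proportional to the displacement and hence to the applied acceleration.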
4.1.3 POWER SUPPLY AND VOLTAGE REGULATOR
A power supply is a device that supplies electric power to an electrical load. The 230 V, 50 Hz AC mains supply is stepped down to 12 V by a step-down transformer, and the 12 V AC is converted to DC with the aid of a bridge rectifier, which uses four diodes in a bridge arrangement to achieve full-wave rectification. The W10 bridge rectifier used here has high case dielectric strength, high surge-current capability, high reliability, low reverse current and low forward voltage drop, and is ideal for printed circuit boards. It also has a reliable, low-cost construction using a moulded plastic technique.
Fig 4.3 Voltage Regulator IC
A regulated power supply is one that controls the output voltage or current to a
specific value. The controlled value is held nearly constant despite variations in either load
current or the voltage supplied by the power supply's energy source. 7805 is a voltage
regulator integrated circuit. It is a member of 78xx series of fixed linear voltage regulator
ICs. The voltage source in a circuit may have fluctuations and would not give the fixed
voltage output. The voltage regulator IC maintains the output voltage at a constant value. The
xx in 78xx indicates the fixed output voltage it is designed to provide. 7805 provides +5V
regulated power supply. Capacitors of suitable values can be connected at input and output
pins depending upon the respective voltage levels. ST 7805 is the voltage regulator used. It is
a 3-terminal, 1 A positive voltage regulator with thermal overload, short circuit and output transistor safe operating area protection. Figure 4.3 shows the voltage regulator IC.
4.1.4 DISPLAY UNIT
An LCD is a small, low-cost display. A 16 x 2 LCD is a very basic module and is very commonly used in various devices and circuits. These modules are preferred over
seven segments and other multi segment LEDs. LCDs are economical, easily programmable
and have no limitation of displaying special & even custom characters. It is easy to interface
with a micro-controller. A 16 x 2 LCD means it can display 16 characters per line and there
are 2 such lines. In this LCD each character is displayed in 5x7 pixel matrix.
Table 4.1 LCD Commands
Command   Operation
0x01      Clear the LCD
0x38      Select 8-bit mode, 2 lines, 5 x 7 character matrix
0x80      Move cursor to first line, first position
0x81      Move cursor to first line, second position
0x8F      Move cursor to first line, last position
0xC0      Move cursor to second line, first position
0x0E      Display on, cursor on
0x0C      Display on, cursor off
0x06      Shift cursor to the right after each character (entry mode)
0x1C      Shift the entire display right by one position
0x18      Shift the entire display left by one position
This LCD has two registers, namely, Command and Data. The command register
stores the command instructions given to the LCD. A command is an instruction given to
LCD to do a predefined task like initializing it, clearing its screen, setting the cursor position,
controlling display etc. The data register stores the data to be displayed on the LCD. The data
is the ASCII value of the character to be displayed on the LCD. The display is capable of displaying 224 different characters and symbols and also has an LED backlight. Table 4.1 shows some commonly used LCD commands.
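To make the command/data distinction concrete, a minimal 8-bit-mode write routine is sketched below. The RS and EN pin assignments and the choice of PORTC as the data bus are assumptions for illustration only; the report states that the LCD is wired to PORTD but does not give the exact pin mapping.

#define F_CPU 8000000UL        /* assumed clock frequency */
#include <avr/io.h>
#include <util/delay.h>

/* Hypothetical wiring: 8-bit data bus on PORTC, RS on PD0, EN on PD1. */
#define LCD_DATA_PORT PORTC
#define LCD_CTRL_PORT PORTD
#define LCD_RS PD0
#define LCD_EN PD1

static void lcd_pulse_enable(void)
{
    LCD_CTRL_PORT |=  (1 << LCD_EN);
    _delay_us(1);
    LCD_CTRL_PORT &= ~(1 << LCD_EN);
    _delay_ms(2);                       /* allow the LCD to latch and execute */
}

/* RS = 0 selects the command register. */
void lcd_command(uint8_t cmd)
{
    LCD_CTRL_PORT &= ~(1 << LCD_RS);
    LCD_DATA_PORT = cmd;
    lcd_pulse_enable();
}

/* RS = 1 selects the data register; the byte is the ASCII code to show. */
void lcd_data(uint8_t ascii)
{
    LCD_CTRL_PORT |= (1 << LCD_RS);
    LCD_DATA_PORT = ascii;
    lcd_pulse_enable();
}

/* Example: initialise the ports, clear the display and print '3'. */
void lcd_demo(void)
{
    DDRC  = 0xFF;                              /* data bus as output      */
    DDRD |= (1 << LCD_RS) | (1 << LCD_EN);     /* control pins as output  */
    lcd_command(0x38);   /* 8-bit, 2 lines, 5 x 7 matrix */
    lcd_command(0x0C);   /* display on, cursor off       */
    lcd_command(0x01);   /* clear display                */
    lcd_data('3');
}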
4.1.5 MAX232
The MAX232 is an IC, first created by Maxim Integrated Products, that converts
signals from an RS-232 serial port to signals suitable for use in TTL compatible digital logic
circuits. The MAX232 is a dual driver/receiver and typically converts the RX, TX, CTS and
RTS signals. The drivers provide RS-232 voltage level outputs (approx. ± 7.5 V) from a
single + 5 V supply via on-chip charge pumps and external capacitors. This makes it useful
for implementing RS-232 in devices that otherwise do not need any voltages outside the 0 V
to + 5 V range, as power supply design does not need to be made more complicated just for
driving the RS-232 in this case. The receivers reduce RS-232 inputs (which may be as high as
± 25 V), to standard 5 V TTL levels. These receivers have a typical threshold of 1.3 V, and a
typical hysteresis of 0.5 V.
When the MAX232 IC receives a TTL level to convert, it changes TTL logic 0 to between +3 V and +15 V, and changes TTL logic 1 to between -3 V and -15 V, and vice versa when converting from RS232 to TTL. This can be confusing when one realizes that the RS232 data transmission voltages at a given logic state are opposite to the RS232 control line voltages at the same logic state. Table 4.2 shows the voltage levels.
Table 4.2 Voltage Levels
RS232 Line Type & Logic Level                 RS232 Voltage     TTL Voltage to/from MAX232
Data Transmission (Rx/Tx), Logic 0            +3 V to +15 V     0 V
Data Transmission (Rx/Tx), Logic 1            -3 V to -15 V     5 V
Control Signals (RTS/CTS/DTR/DSR), Logic 0    -3 V to -15 V     5 V
Control Signals (RTS/CTS/DTR/DSR), Logic 1    +3 V to +15 V     0 V
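On the microcontroller side, sending a recognized digit to the PC through the MAX232 only requires the ATmega32 USART. The sketch below assumes an 8 MHz clock and 9600 baud (both assumptions); it is an illustration, not the project's actual serial code.

#include <avr/io.h>

/* UBRR value for 9600 baud at an assumed 8 MHz clock: 8e6/(16*9600) - 1 = 51 */
#define UBRR_VALUE 51

void usart_init(void)
{
    UBRRH = (uint8_t)(UBRR_VALUE >> 8);
    UBRRL = (uint8_t)UBRR_VALUE;
    UCSRB = (1 << TXEN);                                  /* enable transmitter     */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);   /* 8 data bits, no parity */
}

void usart_send(char c)
{
    while (!(UCSRA & (1 << UDRE)))   /* wait until the data register is empty */
        ;
    UDR = c;
}

/* Example: report a recognized digit to the PC terminal. */
void report_digit(uint8_t digit)
{
    usart_send('0' + digit);
    usart_send('\r');
    usart_send('\n');
}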
4.1.6 OTHER DEVICES
The other output devices interfaced to this are a light and a fan.
4.2 SOFTWARE REQUIREMENTS
4.2.1 AVR STUDIO 5
The software used is AVR Studio 5, a Windows-based Integrated Development Environment (IDE) from Atmel for its AVR microcontroller families. AVR Studio 5 supports all AVR microcontrollers. The various features of this software are
• Capability to work in C programming Environment.
• Creates source code using the built in editor.
• Compile and link source code using various tools.
• A compiler, linker and library come with AVR Studio.
AVR Studio 5 includes a compiler, assembler and a simulator, and interfaces
seamlessly with in-system debuggers and programmers to make code development easier. It
is a full software development environment with an editor, simulator, programmer, etc. The
AVR Software Framework is a collection of production-ready source code, written and
optimized by experts and tested in hundreds of production designs. AVR Studio 5 comes with its own integrated C compiler, the AVR GNU C Compiler (GCC), so a third-party C compiler is not needed.
It provides a single environment to develop programs for both the 8-bits and 32-bits
AVR series of microcontrollers. Using these peripheral drivers, communication stacks and
application-specific libraries is the quick and effortless way to complete a project. The AVR
Studio 5 editor simplifies code editing and facilitates coding more efficiently. Type a few
letters of a symbol, and AVR Studio 5 will display a list of suggestions. Figures 4.4 to 4.9 show the various stages of programming in AVR Studio 5.
The steps to create and build a program in AVR Studio 5 are as follows:
1. Click open AVR Studio 5.
2. Select New Project
Fig. 4.4 Opening window of AVR Studio 5
3. Select C Executable Project.
Fig. 4.5 Project selection
4. Name the project (here named DIGITALPEN) and click OK.
Fig. 4.6 Naming the project
5. Select the device as ATmega32 and click OK.
Fig. 4.7 Device selection
6. Enter the code. The programming is done in Embedded C language.
Fig. 4.8 Programming window
7. From the toolbar select Build and, from its drop-down menu, select Build Solution.
Fig. 4.9 Build Solution
8. The output window shows errors and warnings in case of wrong entries, missing
variables or symbols. On successful completion it displays build succeeded.
9. This code needs to be loaded to the microcontroller for its operation.
4.2.2 KHAZAMA AVR PROGRAMMER
The tool used to program the ATmega32 AVR microcontroller is the Khazama AVR
Programmer. Programming using this tool is very simple, fast and reliable. Khazama can be
used to burn code into any microcontroller in the AVR family. The steps followed in programming the microcontroller are depicted in Figures 4.10 to 4.12.
1. Connect the microcontroller to the PC.
2. Open the Khazama AVR Programmer.
Fig. 4.10 Opening window
3. Select the AVR from the drop-down list; the microcontroller used here is ATmega32.
Fig. 4.11 AVR selection
4. Click Command and select Read Chip Signature from the drop-down menu.
Fig. 4.12 Reading the chip signature
5. Load the flash (hex file) to the buffer and click Auto Program. The hex code will be burned onto the AVR microcontroller, and the window shows a successfully-completed message when the burning is done.
5. SIMULATION RESULTS
The programming language used in this project is Embedded C. The code was
written and simulated in AVR Studio 5. Figure 5.1 shows the window confirming successful compilation of the code.
Fig. 5.1 Simulation Result
The code was loaded to the microcontroller. The device successfully recognized
digits zero to seven. The digits were displayed on the LCD. The four gestures used in this
project turned on and off the light and the fan.
6. CONCLUSION
This project has presented an accelerometer-based framework that can be utilized for acceleration-based handwriting and gesture recognition. The proposed system is used for digit and gesture recognition and is thus an effective tool for HCI applications. Users can use the pen to write digits or make hand gestures as shown in Tables 3.1 and 3.2, and the accelerations of the hand motions are measured by the accelerometer. The recognized digits are displayed on a computer for further applications, so by changing the position of the accelerometer users can show the numerical characters on the LCD.
Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing, and there are various types of gestures that can be identified by computers. The system finds application in the following fields: it acts as a communication tool for people who are deaf, mute or blind, and it has other human-machine applications in game control, tilt recognition, TV remote control, mobile text scrolling, mobile application selection, presentation pointing, and so on.
7. REFERENCES
[1] J.-S. Wang and F.-C. Chuang, "An Accelerometer-Based Digital Pen with a Trajectory Recognition Algorithm for Handwritten Digit and Gesture Recognition", IEEE Transactions on Industrial Electronics, vol. 59, no. 7, July 2012.
[2] Stephen D. Senturia, "Microsystem Design", Kluwer Academic Publishers, New York.
[3] L. L. Faulkner and James J. Allen, "Micro Electro Mechanical System Design", Taylor & Francis Group, LLC, 2005.