This document describes a medical hands-free system using gesture recognition to help surgeons keep track of surgical instruments and materials without direct contact. It uses a Leap Motion controller to detect hand movements and recognize customized gestures. Image moments are used to distinguish between gestures by calculating weighted averages of pixel distributions in captured images. The goal is to introduce hands-free control to medical settings where direct contact poses infection risks, helping surgeons prevent accidental retention of foreign objects in patients.
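As a rough illustration of how image moments can separate gestures, here is a minimal OpenCV sketch. The filename, threshold and matching rule are assumptions for illustration, not details taken from the system described above.

```python
# Sketch: distinguishing gestures via image moments (OpenCV).
# Assumes a binarized hand silhouette in "mask.png"; the filename and
# threshold are illustrative, not from the original system.
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

m = cv2.moments(binary)                  # raw, central and normalized moments
cx = m["m10"] / m["m00"]                 # centroid = weighted average of
cy = m["m01"] / m["m00"]                 # the pixel distribution
hu = cv2.HuMoments(m).flatten()          # 7 scale/rotation-invariant descriptors

def match(hu_vec, templates):
    """Pick the gesture whose stored Hu moments are closest (toy metric)."""
    # templates: {gesture_name: 7-element Hu vector}, built offline
    return min(templates, key=lambda g: np.linalg.norm(
        np.log1p(np.abs(templates[g])) - np.log1p(np.abs(hu_vec))))
```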
MEMS Sensor Based Approach for Gesture Recognition to Control Media in Computer (IJARIIT)
Gesture recognition is the method of identifying and understanding meaningful movements of the arms, hands, face, or sometimes the head. It is one of the most important aspects of the human-computer interface, and research in this field has been continuous because of its applicability to user interfaces; it remains an important research area for engineers and scientists. Industry is now working on different implementations of trouble-free, natural products that are easy to handle. This paper proposes a method to work with motion sensors and interpret hand motion for various applications in a virtual interface. Micro-Electro-Mechanical Systems (MEMS) accelerometers are used to capture dynamic hand gestures. The sensor readings are transferred to a microcontroller and from there sent wirelessly to a computer system, where the data are processed using various algorithms.
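A minimal sketch of the host-side processing stage is shown below: streamed accelerometer samples are mapped to coarse gesture labels. The serial port name, packet format and thresholds are assumptions for illustration, not the paper's actual protocol.

```python
# Sketch: classify a dynamic hand gesture from streamed MEMS
# accelerometer samples received over a wireless-to-serial link.
import serial  # pyserial

def classify(ax, ay, az, margin=3.0):
    """Map the dominant acceleration axis to a coarse gesture label.
    az is unused in this coarse tilt-only scheme."""
    if ax > margin:
        return "tilt-right"
    if ax < -margin:
        return "tilt-left"
    if ay > margin:
        return "tilt-forward"
    if ay < -margin:
        return "tilt-back"
    return "rest"

# Assumed packet format: one "ax,ay,az" line per sample.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as link:
    for _ in range(100):
        line = link.readline().decode(errors="ignore").strip()
        try:
            ax, ay, az = map(float, line.split(","))
        except ValueError:
            continue                      # skip malformed packets
        print(classify(ax, ay, az))
```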
Development of Sign Signal Translation System Based on Altera's FPGA DE2 Board (Waqas Tariq)
The main aim of this paper is to build a system capable of detecting and recognizing hand gestures in an image captured by a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented for this purpose: the image is smoothed to ease the subsequent steps of translating the hand sign, and the algorithm translates numerical hand signs, displaying the result on the seven-segment display. Altera's Quartus II, SOPC Builder and Nios II EDS software are used to construct the system. With SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the hand sign translation algorithm is coded in the C programming language. Being able to recognize hand signs from images can help humans control robots and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
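The board code described above is written in C under Nios II EDS; as a language-neutral illustration of just the final display step, here is a Python sketch that maps a recognized finger count to a standard common-cathode seven-segment pattern (bit order gfedcba).

```python
# Sketch of the display-encoding step: recognized finger count ->
# seven-segment pattern (common cathode, bits = gfedcba).
SEVEN_SEG = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011,
    3: 0b1001111, 4: 0b1100110, 5: 0b1101101,
}

def encode(fingers: int) -> int:
    if fingers not in SEVEN_SEG:
        raise ValueError("recognizer reports 0-5 fingers only")
    return SEVEN_SEG[fingers]

print(f"{encode(3):07b}")  # segments a, b, c, d, g lit for digit 3
```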
This document describes the Skinput technology, which uses bio-acoustic sensing to localize finger taps on the skin. It can provide a direct-manipulation graphical user interface projected directly onto the body. The technology was developed by researchers at Microsoft Research and Carnegie Mellon University. It consists of a wearable armband with multiple piezoelectric sensors of different resonant frequencies. When the skin is tapped, acoustic waves propagate through the body and are detected by the sensors, and the tap location can be identified from differences in signal arrival times and frequencies between sensors. User studies showed it can accurately detect taps on different areas of the arm.
Accessing Operating System using Finger Gesture (IRJET Journal)
This document describes a system for accessing an operating system using finger gestures captured by a webcam. The system aims to reduce costs compared to existing gesture recognition systems that use expensive sensors such as the Kinect. It uses image processing algorithms to detect hand gestures from webcam input, recognize features such as the number of extended fingers, and execute the corresponding operating system commands. The architecture first segments hand regions from the background, then classifies skin pixels and detects colored tapes on the fingers to identify gestures. It can open programs and navigate computer contents contactlessly using natural hand movements, aiming to provide an affordable alternative for human-computer interaction without external input devices like mice or keyboards.
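A minimal sketch of the segmentation stage follows: skin pixels are classified and a colored finger tape located in a webcam frame. The HSV ranges are illustrative assumptions and would need tuning per camera and lighting, since the source does not give them.

```python
# Sketch: skin-pixel classification plus colored-tape detection.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))      # rough skin tones
    tape = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))  # e.g. blue tape
    # Count tape blobs (finger markers) via connected components.
    n, _ = cv2.connectedComponents((tape > 0).astype(np.uint8))
    print("tape blobs (finger markers):", n - 1)  # minus the background label
```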
IRJET - Human Activity Recognition using Flex Sensors (IRJET Journal)
This document discusses a system for human activity recognition using flex sensors. Flex sensors are attached to the body and can detect movements. The flex sensor data is fed into a neural network model to recognize activities. The model is trained using flex sensor data from various human activities. The trained model can then accurately recognize activities based on new flex sensor input data. The system is meant to help elderly people or those with disabilities by allowing them to control devices with body movements detected by flex sensors. It aims to provide a modular system that can adapt to new users and disabilities. Flex sensors make the system customizable while neural networks enable accurate activity recognition.
The document discusses robotically assisted brain surgery using artificial intelligence. It describes how surgical robots are used to perform brain surgery with more precision and less invasive incisions compared to traditional surgery. Sensors and imaging techniques like MRI and CT scans are used to identify and locate tumors. A diathermy tool uses controlled heat to remove tumors. While still requiring a human surgeon, robotic systems improve dexterity and allow for potential remote surgeries in the future. The benefits of robotic brain surgery over traditional methods include reduced recovery time and pain for patients.
Real Time Hand Gesture Recognition System for Dynamic Applications (ijujournal)
Virtual environments have long been considered a means for more visceral and efficient human-computer interaction across a diverse range of applications, including analysis of complex scientific data, medical training, military simulation, phobia therapy and virtual prototyping. With the evolution of ubiquitous computing, current user-interaction approaches based on keyboard, mouse and pen are not sufficient for the still-widening spectrum of human-computer interaction. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use, and these limitations also restrict the usable command set of such applications. Direct use of the hands as an input device is an innovative method for providing natural human-computer interaction, with a lineage that runs from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to full-fledged multi-participant Virtual Environment (VE) systems. One can conceive a future era of human-computer interaction built on 3D applications, where the user moves and rotates objects simply by moving and rotating a hand, all without the help of any input device. The research effort centres on implementing an application that employs computer vision algorithms and gesture recognition techniques, resulting in a low-cost interface device for interacting with objects in a virtual environment using hand gestures. The prototype architecture comprises a central computational module that applies the CAMShift technique for tracking the hand and its gestures, while a Haar-like feature classifier is responsible for locating the hand position and classifying the gesture. Gestures are patterned for recognition by mapping the number of convexity defects formed by the hand to the assigned gestures. The virtual objects are produced using the OpenGL library. This hand gesture recognition technique aims to substitute for the mouse when interacting with virtual objects, and is useful for controlling applications such as virtual games and image browsing in a virtual environment using hand gestures.
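A minimal CAMShift hand-tracking loop in OpenCV, following the approach the abstract describes, is sketched below. The initial region-of-interest coordinates are placeholders; in the paper the hand is first located by the Haar-like classifier.

```python
# Sketch: CAMShift tracking of a hand region from a webcam.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 200, 150, 120, 120            # assumed initial hand window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rect, (x, y, w, h) = cv2.CamShift(back, (x, y, w, h), term)
    box = np.int32(cv2.boxPoints(rect))    # rotated bounding box of the hand
    cv2.polylines(frame, [box], True, (0, 255, 0), 2)
    cv2.imshow("camshift", frame)
    if cv2.waitKey(30) == 27:              # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```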
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel... (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Gesture recognition technology allows humans to control devices through visible bodily motions and hand movements instead of physical interfaces. It works by detecting and interpreting gestures using cameras and analyzing features like hand position and motion. Popular applications include virtual keyboards and navigation systems that respond to gestures of the head, hands or eyes. Future technologies may enable self-reliant communication through gestures for people with disabilities.
Gesture recognition using artificial neural network, a technology for identify... (NidhinRaj Saikripa)
The presentation describes a technology for identifying any type of body motion, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language. The technology is aimed at speech-impaired individuals.
// I have shared an IJCSE-standard paper on this topic
Gestures are an important form of non-verbal communication that involves visible bodily motions. The document discusses the history and development of gesture recognition technologies, describing early data gloves and the Videoplace system as well as current technologies like Cepal and ADITI that help people with disabilities control devices with gestures. It also outlines the key components of a gesture recognition system, including modeling, analysis, and recognition of gestures, and discusses classification methods like HMMs and MLPs. Applications discussed include virtual keyboards, NaviGaze, and SixthSense technology.
IRJET - Convenience Improvement for Graphical Interface using Gesture Dete... (IRJET Journal)
This document discusses a proposed system for improving graphical user interfaces using hand gesture detection. The system aims to allow users to access information from the internet without using input devices like a mouse or keyboard. It uses a webcam to capture images of hand gestures, which are then processed using techniques like skin color segmentation, principal component analysis, and template matching to recognize the gestures. The recognized gestures can then be linked to retrieving specific data from pre-defined URLs. An evaluation of the system found it had an accuracy rate of 90% in real-time testing for retrieving data from 10 different URLs using 10 unique hand gestures. The proposed system provides a more convenient interface compared to traditional mouse and keyboard methods.
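As a sketch of the recognition stage described above, the snippet below projects gesture images with principal component analysis and matches against templates by nearest neighbor. The shapes and random data are placeholders; real input would be segmented hand images flattened to vectors.

```python
# Sketch: PCA projection plus nearest-neighbor template matching.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
templates = rng.random((10, 64 * 64))      # 10 gesture templates (flattened)
labels = [f"gesture_{i}" for i in range(10)]

pca = PCA(n_components=8).fit(templates)   # principal components of templates
proj = pca.transform(templates)

def recognize(img_vec):
    """Return the label of the template closest in PCA space."""
    q = pca.transform(img_vec.reshape(1, -1))
    return labels[int(np.argmin(np.linalg.norm(proj - q, axis=1)))]

print(recognize(templates[3] + 0.01 * rng.random(64 * 64)))  # -> gesture_3
```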
Gesture recognition using artificial neural network, a technology for identify... (NidhinRaj Saikripa)
This paper describes a technology for identifying any type of body motion, commonly originating from the hand and face, using an artificial neural network. This includes identifying sign language. The technology is aimed at speech-impaired individuals.
// I have shared a presentation on this topic
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Gesture Based Interface Using Motion and Image Comparison (ijait)
This paper gives a new approach for moving the mouse cursor and implementing its functions using a real-time camera. Most existing technologies depend on changing the mouse hardware, for example repositioning the tracking ball or adding more buttons. Instead, we use a camera, a colored substance, image comparison and motion detection to control mouse movement and implement its functions (right click, left click, scrolling and double click).
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam (ijsrd.com)
This paper presents a comparative study of existing hand gesture recognition systems and gives a new approach for gesture recognition that is an easy, cheaper alternative to input devices such as the mouse, supporting static and dynamic hand gestures for interactive computer applications. Despite the increasing attention such systems receive, there are still certain limitations in the literature: most applications impose constraints such as specific lighting conditions, use of a particular camera, making the user wear a multi-coloured glove, or the need for lots of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). This interface is simple enough to run with an ordinary webcam and requires little training.
IRJET - Survey Paper on Vision based Hand Gesture Recognition (IRJET Journal)
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
This document is a final report on gesture recognition submitted by three students. It contains an abstract, introduction, background information on gesture recognition including American Sign Language and object recognition techniques. It discusses digital image processing and neural networks. It outlines the approach, modules, flowcharts, results and conclusions of the project, which developed a method to recognize static hand gestures using a perceptron neural network trained on orientation histograms of the input images. Source code and applications are also discussed.
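A sketch of that report's pipeline follows: an orientation histogram of a grayscale hand image is fed to a perceptron. The bin count and the synthetic training data are illustrative assumptions.

```python
# Sketch: orientation-histogram features + perceptron classifier.
import numpy as np
from sklearn.linear_model import Perceptron

def orientation_histogram(img, bins=18):
    gy, gx = np.gradient(img.astype(float))   # image gradients
    ang = np.arctan2(gy, gx)                  # orientation, -pi..pi
    mag = np.hypot(gx, gy)                    # gradient magnitude as weight
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)         # normalize for scale invariance

rng = np.random.default_rng(1)
X = np.stack([orientation_histogram(rng.random((32, 32))) for _ in range(40)])
y = rng.integers(0, 2, 40)                    # two stand-in gesture classes
clf = Perceptron(max_iter=100).fit(X, y)
print(clf.predict(X[:5]))
```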
Wastlund - What You See Is Where You Go: Testing a Gaze Driven Power Wheelchair ... (Kalle)
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
Human activity recognition using machine learning with data analysis (Venkat Projects)
Human activity recognition, or HAR for short, is a broad field of study concerned with identifying the specific movement or action of a person based on sensor data. The sensor data may be remotely recorded, such as video, radar, or other wireless methods. This work uses data generated from the accelerometer, gyroscope and other sensors of a smartphone to train supervised predictive models with machine learning techniques such as SVM, random forest and decision tree. The resulting model predicts the kind of movement being carried out by the person, divided into six categories: walking, walking upstairs, walking downstairs, sitting, standing and lying down. MLM and SVM achieved accuracy of more than 99.2% on the original data set and 98.1% using the new feature selection method. The results show that the proposed feature selection approach is a promising alternative for activity recognition on smartphones.
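A minimal sketch of this kind of pipeline is shown below: windowed accelerometer features classified with an SVM. The feature matrix here is synthetic stand-in data; the study used real smartphone recordings of the six activities.

```python
# Sketch: activity classification from per-window sensor features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

ACTIVITIES = ["walking", "upstairs", "downstairs", "sitting", "standing", "lying"]

rng = np.random.default_rng(2)
X = rng.random((600, 12))                   # e.g. mean/std/energy per axis
y = rng.integers(0, len(ACTIVITIES), 600)   # stand-in activity labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
model = SVC(kernel="rbf").fit(Xtr, ytr)
print("held-out accuracy:", model.score(Xte, yte))  # near chance on random data
```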
Augmented Reality for Robotic Surgical Dissection - Final Report (Milind Soman)
This document discusses using augmented reality and robotics for robotic surgical dissection. It proposes using 3D reconstruction of organs from CT scan images to guide robotic surgery. Three approaches are suggested: 1) Using the da Vinci surgical robot with two workstations, one for the surgeon and one for the robot. 2) Using augmented reality glasses like Google Glass to overlay a 3D model. 3) Using MATLAB to construct a 3D model from CT scans and project it onto a dummy body. The document then discusses challenges with each approach and simulations performed using software like 3D Slicer and MATLAB to test reconstruction of models from CT data.
This document summarizes a student project on human activity recognition using smartphones. A group of 4 students submitted the project to partially fulfill requirements for a Bachelor of Technology degree in computer science and engineering. The project involved developing a system to recognize human activities using the accelerometer and gyroscope sensors in smartphones. Various machine learning algorithms were tested and evaluated on experimental data collected from smartphone sensors. The goal of the project was to create an accurate and lightweight activity recognition system for smartphones, while also exploring active learning methods to reduce the amount of labeled training data needed.
The document discusses Blue Eyes technology, which uses sensors and image processing techniques to identify human emotions based on facial expressions and eye movements; it can sense emotions such as sadness, happiness and surprise. The technology aims to give computers human-like perceptual abilities by analyzing facial expressions and eye gaze, using sensors like cameras and microphones along with techniques like facial recognition and eye tracking. It has applications in control rooms, driver monitoring systems, and interfaces that adapt based on user interests inferred from eye gaze. The document details the sensors involved - the emotion mouse, expression glasses, speech recognition systems - and how they can help computers understand and interact with humans at a more personal level.
Vertical Fragmentation of Location Information to Enable Location Privacy in ... (ijasa)
Pervasive computing was developed to simplify our lives by integrating communication technologies into everyday life. Location-aware computing, which evolved from pervasive computing, performs services that depend on the location of the user or his communication device. Advancements in this area have led to major revolutions in various application areas, especially mass advertisements. It has long been evident that privacy of personal information, in this case the user's location, is a touchy subject with most people. This paper explores the location privacy issue in location-aware computing, and proposes vertical fragmentation of the stored location information of users as an effective solution.
Mouse Simulation Using Two Coloured Tapes (ijistjournal)
In this paper, we present a novel approach for Human Computer Interaction (HCI) where, we control cursor movement using a real-time camera. Current methods involve changing mouse parts such as adding more buttons or changing the position of the tracking ball. Instead, our method is to use a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling) and we show how it can perform everything as current mouse devices can.
The software is developed in the Java language. Recognition and pose estimation in this system are user-independent and robust because colour tapes on the fingers are used to perform actions. The software can be used as an intuitive input interface to applications that require multi-dimensional control, e.g., computer games.
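The paper's implementation is in Java; the Python/OpenCV sketch below only mirrors the tape-tracking logic: locate two colored tapes by HSV masking and derive a cursor action from their centroids. The HSV ranges and the click heuristic are assumptions.

```python
# Sketch: two colored tapes -> cursor action.
import cv2
import numpy as np

def centroid(mask):
    """Centroid of a binary mask via image moments, or None if empty."""
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
    blue = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))
    c1, c2 = centroid(red), centroid(blue)
    if c1 and c2:
        dist = np.hypot(c1[0] - c2[0], c1[1] - c2[1])
        # Heuristic: tapes pinched together -> click; apart -> move cursor.
        print("click" if dist < 40 else f"move to {c1}")
```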
Hand gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. In this work, hand segmentation using color models is introduced for obtaining hand gestures, detecting the user's hand by a color segmentation technique for faster, more robust, accurate, real-time applications. Many such color models are available for human hand and skin detection, each with relative advantages and disadvantages in the field of image processing. For hand segmentation, a mixed-model approach has been adopted to detect the hand in an image with the best results. The proposed approach is found to be accurate and effective under multiple conditions.
Haptic technology refers to technology that interfaces with users through the sense of touch. It allows the creation of virtual objects that can be controlled and manipulated. Haptic systems consist of human and machine parts, with the human sensing touch and the machine applying forces and motions. This emerging technology has applications in virtual reality, teleoperation, medicine, and more. It provides tactile and kinesthetic feedback to enhance user experience in virtual environments. Haptic devices measure user input and provide force feedback, allowing for bidirectional interaction between user and virtual world.
Virtual reality simulations allow generation of 3D models from patient imaging scans for surgical planning and training. Surgeons can view detailed anatomy, practice procedures, and receive haptic feedback without risk to patients. While early systems had limitations like unrealistic graphics, current VR provides an effective alternative to cadavers for training with benefits like standardized lessons and performance tracking.
Media Control Using Hand Gesture Moments (IRJET Journal)
This document discusses a system for controlling media players using hand gestures. The system uses a webcam to capture images of hand gestures. It then uses neural networks trained on large gesture datasets to recognize the gestures. The recognized gestures can control functions of a media player like increasing/decreasing volume, playing, pausing, rewinding and forwarding. The system achieves recognition rates of 90-95% for different gestures. It provides a more natural user interface than keyboards and mice by allowing control through hand movements.
At present, hand gesture recognition can serve as a natural and usable approach for human-computer interaction, and automatic recognition provides a new tactic for interacting with virtual environments. This paper offers a face and hand gesture recognition system able to control a computer media player; the hand gesture and the human face are the key elements for interacting with the smart system. Face recognition is used for viewer verification, and hand gesture recognition drives the media player controls, for instance volume up/down and next track. In the proposed technique, the hand gesture and face locations are first extracted from the input image by a combination of skin detection and a cascade detector, and then sent to the recognition stage. There, a threshold condition is inspected before the extracted face and gesture are recognized. Applied to the video dataset, the technique achieves a high precision ratio. Additionally, the proposed hand gesture recognition method, applied to a static American Sign Language (ASL) database, achieves an accuracy of roughly 99.40%. The method could also be used in gesture-based computer games and virtual reality.
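A minimal sketch of the extraction stage follows: a Haar cascade locates the face, and a skin mask with the face suppressed leaves hand candidates for the recognizer. The cascade file ships with OpenCV; the input filename and skin thresholds are illustrative assumptions.

```python
# Sketch: face cascade + skin mask -> hand-candidate regions.
import cv2

frame = cv2.imread("frame.jpg")             # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough skin range
for (x, y, w, h) in faces:
    skin[y:y + h, x:x + w] = 0              # suppress the face region
# Remaining skin blobs are hand candidates passed to the recognition stage.
print("faces:", len(faces), "skin pixels left:", int((skin > 0).sum()))
```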
Virtual Mouse Control Using Hand Gestures (IRJET Journal)
This document describes a system for controlling a computer mouse using hand gestures detected by a webcam. The system uses computer vision and image processing techniques to track hand movements and identify gestures. It analyzes video frames from the webcam to extract the hand contour and detect gestures. Specific gestures are mapped to mouse functions like movement, left/right clicks, and scrolling. The system aims to provide an intuitive, hands-free way to control the mouse for physically disabled people or those uncomfortable with touchpads. It could help the millions affected by carpal tunnel syndrome annually in India. The document outlines the system architecture, methodology including hand tracking and gesture recognition, and concludes the technology provides better human-computer interaction without requiring a physical mouse.
Fusion of Gait and Fingerprint for User Authentication on Mobile Devices (vasim hasina)
This document proposes fusing gait and fingerprint biometrics for user authentication on mobile devices. Gait samples are collected from an accelerometer attached to subjects' hips as they walk. Fingerprints are collected from three different sensors. Feature extraction and comparison scores are derived for each biometric. The scores are then normalized and fused using different methods. Fusing the biometrics results in improved authentication performance compared to the individual biometrics, with lower equal error rates achieved.
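A sketch of score-level fusion as described: each biometric's comparison scores are min-max normalized, then combined with a weighted sum. The weights and scores are illustrative assumptions, and higher is taken to mean more similar for both modalities.

```python
# Sketch: min-max normalization + weighted-sum score fusion.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

gait_scores = [0.31, 0.74, 0.52]        # per-candidate similarity scores
finger_scores = [41.0, 88.0, 63.0]      # different native scale

fused = 0.4 * min_max(gait_scores) + 0.6 * min_max(finger_scores)
print("accepted candidate:", int(np.argmax(fused)))
```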
Design a System for Hand Gesture Recognition with Neural Network (IRJET Journal)
This document proposes a system for recognizing hand gestures using surface electromyography (sEMG) and artificial neural networks. sEMG signals are collected from the forearm using a Myo wristband to measure muscle activity. The signals are preprocessed to remove noise and extract time and frequency domain features. An artificial neural network classifier is then trained to predict different gesture classes from the features, achieving 87.32% accuracy in recognizing various hand movements. The proposed system provides an effective method for hand gesture recognition using sEMG signals and neural networks for applications in human-computer interaction and assistive technologies.
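The snippet below sketches the feature-extraction step with common time-domain sEMG features over a sliding window. The window length and the synthetic signal are assumptions; the paper streams multi-channel Myo data.

```python
# Sketch: time-domain sEMG features per window, ready for an ANN.
import numpy as np

def semg_features(window):
    mav = np.mean(np.abs(window))                  # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))            # root mean square
    zc = np.sum(np.diff(np.sign(window)) != 0)     # zero crossings
    wl = np.sum(np.abs(np.diff(window)))           # waveform length
    return np.array([mav, rms, zc, wl])

rng = np.random.default_rng(3)
signal = rng.normal(size=2000)                     # stand-in sEMG channel
windows = signal.reshape(-1, 200)                  # 200-sample windows
X = np.stack([semg_features(w) for w in windows])
print(X.shape)  # (10, 4): one feature vector per window for the classifier
```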
Hand gesture recognition using ultrasonic sensor and ATmega128 microcontroller (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology, bringing together scientists, academicians, field engineers, scholars and students of related fields.
The document discusses performance evaluation of neural network-based hand gesture recognition. It begins with an abstract describing the study, which tested a proposed algorithm on 100 sign images from American Sign Language (ASL). The algorithm improved true match rate from 77.7% to 84% while decreasing false match rate from 8.33% to 7.4%. The introduction provides background on pattern matching versus recognition algorithms. The rest of the document details hand gesture recognition approaches, importance of gestures for human-computer interaction, basic architecture of a gesture recognition system including data acquisition, modeling, and feature extraction stages, and challenges in hand gesture recognition.
Gestures Based Sign Interpretation System using Hand Glove (IRJET Journal)
This document describes a glove-based sign language interpretation system that uses flex sensors and an Arduino Uno microcontroller. The system is intended to help those with speech impairments communicate by translating sign language gestures into text and speech output. The glove contains flex sensors that detect finger and hand movements, sending that data to the Arduino which interprets the gestures using machine learning algorithms and outputs the translation. The system aims to reduce communication barriers for the deaf and hard of hearing.
Design and Development of a Mirror Effect Control Prosthetic Hand with Force S... (TELKOMNIKA JOURNAL)
Some prosthetic hands already available on the market operate in open loop, without any feedback, and are expensive. This system counters those drawbacks: the prosthetic hand is printed on a 3D printer and includes a feedback sensor, making it a closed-loop system. The system consists of two sections, a Finger Input section and a Prosthetic Output section, which communicate wirelessly for data transfer. The main purpose of the system is to control the prosthetic hand wirelessly using the Mirror Glove, performing a mirror effect that translates movement from the glove onto the prosthetic hand. The Mirror Glove monitors the movement and bending of each finger using force-sensitive sensors, and the prosthetic hand also carries a force-sensitive resistor. These sensors feed back the pressure on the prosthetic hand during object grasping, allowing it to grasp delicate objects without damaging them. Overall, the system imitates the flexing and relaxing of the fingers inside the Mirror Glove and wirelessly controls the distant prosthetic hand to imitate the human hand.
IRJET - Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye... (IRJET Journal)
This document presents a method for recognizing gestures using ultrasound sensors and infrared array sensors. Two ultrasound sensor pairs are used to capture hand motion in vertical and horizontal directions. An infrared Grid-Eye sensor is used to trigger the ultrasound sensors when a hand gesture is detected. The sensors capture data on the distance and movement of the hand. This data is preprocessed and extracted into features representing the average and count of upward and downward motions. An artificial neural network with two hidden layers is trained on these features to classify gestures for two letters, achieving an accuracy of 83%. The proposed method aims to provide a contactless gesture recognition system without some of the disadvantages of vision-based techniques.
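As a sketch of the classifier stage, the snippet below trains an MLP with two hidden layers on the four features named above (average and count of upward and downward motions). The layer sizes and training data are synthetic stand-ins, not the paper's configuration.

```python
# Sketch: two-hidden-layer MLP over motion features.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.random((80, 4))            # [avg_up, count_up, avg_down, count_down]
y = rng.integers(0, 2, 80)         # two letter classes, as in the paper

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))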
Virtual reality simulation allows surgeons to practice complex procedures, view detailed 3D models of patient anatomy, and reduce errors by planning surgeries in virtual environments before operating on real patients. VR simulators are used for surgical training, planning, navigation and guidance, and even remote "tele-surgery". While early systems had limitations, medical VR has advanced significantly and proven effective for improving skills, access to care, and outcomes.
The document proposes using the Leap Motion controller and a 6-DOF Jaco robotic arm to allow for intuitive and adaptive robotic arm manipulation. An algorithm would allow optimum mapping between a user's hand movements tracked by the Leap Motion controller and movements of the Jaco arm. This would allow a more natural human-computer interaction and smooth manipulation of the robotic arm, adapting to hand tremors or shakes. The goal is to enhance quality of living for people with upper limb problems and support them in performing essential daily tasks.
Controlling Computer using Hand Gestures - IRJET Journal
This document describes a research project on controlling a computer using hand gestures. The researchers created a real-time gesture recognition system using convolutional neural networks (CNNs). They developed a dataset of 3000 training images of 10 different hand gestures for tasks like opening apps. A CNN model was trained to detect hands in images and recognize gestures. The model achieved 80.4% validation accuracy and was able to successfully perform operations like opening WhatsApp, PowerPoint and other apps based on detected gestures in real-time. The system provides a cost-effective and contactless way of interacting with computers using hand gestures only.
This document provides an overview of touchless touchscreen technology. It describes the hardware and software requirements including sensor installation and calibration. The document then analyzes how touchless touchscreens work by detecting hand movements using sensors without physical contact. Several applications are discussed including use in medical settings where sterile conditions are required, as interactive kiosks or displays, and future possibilities like interactive walls or surfaces. The conclusion is that this technology has significant potential in healthcare and other fields by providing more natural human-computer interaction.
Gesture recognition technology allows humans to interface with computers using body movements detected by cameras. Cameras read gestures and send that data to computers for processing as input to control devices or applications. Some techniques use specialized gloves with sensors to capture finger and hand positions and movements. When developing gesture recognition systems, factors like accuracy, precision, resolution, update rate, and latency of the sensing system should be considered. Systems work by using computer vision and image processing techniques on camera input to interpret 3D gestures based on x, y, and z coordinate data. Future developments could integrate speech and gesture recognition for more natural multimodal interaction with virtual assistants.
Mouse Simulation Using Two Coloured Tapes - ijistjournal
In this paper, we present a novel approach for Human Computer Interaction (HCI) in which we control cursor movement using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show that it can perform everything current mouse devices can.
The software will be developed in the Java language. Recognition and pose estimation in this system are user independent and robust, as we will be using colour tapes on our fingers to perform actions. The software can be used as an intuitive input interface to applications that require multi-dimensional control, e.g. computer games.
A Survey Paper on Controlling Computer using Hand Gestures - IRJET Journal
This document summarizes a survey paper on controlling computers using hand gestures. It discusses various techniques that have been used for hand gesture recognition in previous research papers. The paper reviews literature on hand gesture recognition methods based on sensor technology and computer vision. It describes applications of hand gesture recognition such as controlling media playback, scrolling web pages, and presenting slides. Common challenges with hand gesture recognition are also mentioned, such as dealing with complex backgrounds and lighting conditions. The goal of the paper is to perform a literature review on prominent techniques, applications, and difficulties in controlling computers using hand gestures.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Smartphone based wearable sensors for cyborgs using neural network engine - Dayana Benny
This document summarizes a research paper on developing a health monitoring system using wearable sensors and a neural network engine. The system collects sensor data from electronic skin modules placed on a patient's body. A smartphone application processes the sensor data using location information and sends it to a cloud-based neural network engine. The neural network engine fuses data from multiple sensors to determine if the patient's condition is normal or dangerous. If abnormal, an alert is sent from the cloud to the smartphone to notify medical professionals. The paper outlines the system architecture and compares it to other health monitoring and electronic skin approaches.
Similar to Medical Handsfree System - Project Paper (20)
Medical Handsfree System - Project Paper
Medical Hands-Free System
Authors
Avraham Levi, Guy Peleg
Supervisors
Ron Sivan, PhD , Yael Einav, PhD
Abstract - Contemporary Human Computer Interaction (HCI) is based on the usage of multiple
devices such as the keyboard, mouse or touchscreen, all of which require manual contact. This
imposes a serious limitation when it comes to dealing with medical equipment and applications
where sterility is required, such as in the operating room. Statistical reports show that keyboards
and mice harbor more bacteria than lavatories. In our project we focus on the application of a
hands-free motion detector in surgical procedures. The purpose of the application in this setting is
to enable a surgeon to account for the placement of surgical items, to prevent any of them being
forgotten inside the patient by mistake. The application makes use of a hand motion detector from
Leap Motion, Inc. We hope it will give us a good opportunity to solve this issue and, furthermore,
to introduce the concept of a hands-free controller to the medical world.
Keywords - Human computer interaction (HCI), Leap Motion (Leap), Infra-red (IR), Gesture,
Image moments, Retained foreign object (RFO).
1. INTRODUCTION
This project addresses the issue of recognizing hand and finger movement. Using a 3D
motion controller and a computer vision method, image moments, we hope to solve the
problem of defining hand motion patterns. In practice, we will need to expand the SDK the
manufacturer of the device provides, to enable developers to record and create their own
custom-made gestures. With our SDK we will develop an application that we hope will help
surgeons in the operating room by assisting them in keeping track of the instruments and
materials they use during surgery, so that nothing is forgotten inside the patient. Our
purpose is to introduce the possibility of hands-free control of a computer to the medical
world, which is usually very conservative and slow to adopt new technologies.
2. THEORY
2.1. Background and related work
2.1.1. Gesture
Gestures are a method of communication using movement alone, such as the sign
language used by the deaf (see Figure 1). A gesture involves movement of body parts
(hands, fingers) or of various implements, such as flags, pens, etc.
Figure 1 : Hand gestures used in sign language
2.1.2. The Leap Motion controller
The Leap Motion controller [1] is a computer hardware sensor designed to detect hand
and finger movement, requiring no contact or touch. The main components of the device
are three IR LEDs for illumination and two monochromatic IR cameras. The device is
sensitive to hand and finger movements up to a distance of about 1 meter. The LEDs
generate a 3D pattern of dots of IR light, and the cameras capture 200 frames of data
per second. This data is sent via USB to a host computer. The device comes with a
software development kit (SDK) capable of recognizing a few simple gestures. The
main part of our project will consist of expanding the SDK to enable users to define
their own, more complex gestures. In particular, we plan to use that environment to
create applications based on customized gestures to control medical equipment used in
sterile conditions, where forms of control that require touch are precluded.
Figure 2 : The inside components of the Leap Motion controller
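For orientation, tracking with the Leap SDK follows a listener pattern: a Leap::Listener subclass receives an onFrame callback from a Leap::Controller. The sketch below is a hedged illustration of that pattern, not our final recording code:

```cpp
#include <iostream>
#include "Leap.h"

// Minimal Leap Motion listener: prints fingertip positions on every frame.
// Illustrative only; our SDK expansion builds gesture recording on top of
// this callback.
class GestureListener : public Leap::Listener {
public:
    virtual void onFrame(const Leap::Controller& controller) {
        const Leap::Frame frame = controller.frame();
        const Leap::FingerList fingers = frame.fingers();
        for (int i = 0; i < fingers.count(); ++i) {
            Leap::Vector tip = fingers[i].tipPosition();  // millimeters
            std::cout << "finger " << i << ": " << tip.x << ", "
                      << tip.y << ", " << tip.z << "\n";
        }
    }
};

int main() {
    GestureListener listener;
    Leap::Controller controller;
    controller.addListener(listener);  // onFrame now fires per captured frame
    std::cin.get();                    // keep running until Enter is pressed
    controller.removeListener(listener);
    return 0;
}
```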
2.1.3. Other products
Other products in the 3D motion controller domain are the Kinect [2] and the Intel
Perceptual Computing Camera [3]. Compared with the Leap Motion device, the Kinect
does not come with finger recognition; its main purpose is to capture the whole body, so
it is much slower in capturing hand and finger movement, too slow for gesture
recognition. Because the Kinect camera captures the surroundings as well, computing
time is much longer. Moreover, the Kinect is much larger and requires more space. The
Kinect, however, has better range and can capture the surroundings from a distance.
Another comparable device is the Intel Perceptual Computing Camera. It is available for
purchase, although it is rather more expensive than the Leap Motion ($200 for the device
and $150 for the Intel SDK). The device is not designed specifically for gesture
recognition, works in visible light (not IR), and depends on ambient lighting. It supports
voice and various other technologies that are immaterial to our application.
2.1.4. The health care industry
Research [4] [5] strongly supports that computer keyboards and other input devices are a
source of bacteria and cross-contamination that can lead to hospital-acquired infections.
Therefore washable keyboards, mice, TV remotes and mobile products with anti-microbial
protection should be put in place, along with proper disinfection protocols, to reduce the
risk of infection and cross-contamination.
2.1.5. Surgical inventory count
Oftentimes surgical instruments are accidentally left behind in a surgical cavity, causing,
in the worst case, severe infection or even death [6]. These types of events (called
retained foreign object, RFO) are considered "never events", namely, preventable medical
errors. Unlike other medical errors, RFO errors were declared "touch zero" errors: the
goal is to reach zero events, since this type of error is considered easy to prevent [7].
Over the history of surgical procedures a strict surgical inventory count protocol was
developed, and it is now obligatory in surgical settings around the world. In most surgical
settings, two nurses are responsible for counting all surgical sponges, needles and
instruments. However, there are some surgical procedures in which the count protocol is
not performed on a regular basis. These are "small", "simple" procedures in which no
nurse is continuously aiding the surgeon (e.g., episiotomy). Naturally, where no formal
count protocol is performed, or no count at all, the chance of a retained surgical sponge
rises dramatically [6]. In these procedures the surgeon has to rely on his or her memory
to recall how many items were used or how many items were in the set that was opened
(for example 10 pads, 2 needles and 3 gauzes). To account for all items the surgeon
actually needs to compare the number in his memory (e.g., "there were 5 pads in the set,
then I opened a package of 5 more, and there were 3 needles in the set") to the number of
items found at the end of the procedure. However, keeping these numbers in mind for the
whole procedure is a considerable burden, and since short-term memory is so vulnerable,
there is a good chance that the surgeon will make a mistake.
In the past 10-15 years a few technological solutions were developed to support the
counting protocol: "SURGICOUNT Medical" and "SmartSponge System" are two examples.
These systems require hand contact to operate and are controlled mostly by the nurses
responsible for the count. Our solution is designed to help surgeons in cases where no
nurse is available to write down (or type) the items' count. Using the Leap Motion device
for this purpose will give the surgeon a means to document the usage of items without the
need to write down or type anything. It is, so far, the only solution that lets surgeons
document anything themselves during surgery. It is expected to dramatically reduce
memory load and promote safer care for patients.
Figure 3 : Intra-operative radiograph performed because of an incorrect sponge count in a 54-year-old
woman undergoing urethral suspension. The radio-opaque marker (arrow) of a 4 x 4 inch
surgical sponge is visible in the pelvis. The sponge was identified.
2.2. Detailed description
2.2.1. Gesture recognition
Gesture recognition refers to the interpretation of human gestures via mathematical
algorithms. The IR cameras in the Leap Motion device read the movements of the
human hand and communicate the data to a computer, which uses the gesture input to
control devices or applications. Using mathematical algorithms, the computer can analyze
the captured data and categorize it into one of several predefined patterns.
2.2.2. Preprocessing
For the moment we confine our attention to planar gestures: gestures in which the index
finger traces a path in a plane. We assume these will be easier for humans to reproduce,
and we hope they will also be simpler to recognize, while not restricting the repertoire of
possible gestures too severely.
Obviously, free hand motion cannot be constrained to be completely planar: planarity will
only be approximated. As a first step in interpreting gesture data, we therefore find the
plane whose distance from the captured gesture trace is minimal, using Singular Value
Decomposition (see 2.2.3).
2.2.3. Singular value decomposition
Singular Value Decomposition (SVD) is a factorization method with many useful
applications in signal processing and statistics.
One application useful for our purpose is fitting planes and lines by orthogonal distance
regression [8]. Say we want to find the plane that is as close as possible to a set of $n$
3-D points $p_1, \ldots, p_n$ captured by the device (the points are first centered on
their centroid, so that the fitted plane passes through it).
Let the matrix $A$ of size $n \times 3$ hold $p_i = (x_i, y_i, z_i)$ in each row:

$$A = \begin{bmatrix} x_1 & y_1 & z_1 \\ \vdots & \vdots & \vdots \\ x_n & y_n & z_n \end{bmatrix}$$

Using the transpose operation we create the $3 \times n$ matrix $A^T$:

$$A^T = \begin{bmatrix} x_1 & \cdots & x_n \\ y_1 & \cdots & y_n \\ z_1 & \cdots & z_n \end{bmatrix}$$

Multiplying the two yields the $3 \times 3$ matrix $B$:

$$B = A^T \cdot A$$

We then solve the eigenvalue equation for matrix $B$:

$$\det(B - \lambda I) = 0$$

This equation is a polynomial of degree 3 and hence has 3 solutions. Since $B$ is
symmetric and positive semi-definite, the solutions are real and non-negative; under the
conditions of the problem at hand they are expected to be positive and distinct:

$$\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}^+$$

For each of the $\lambda$ values we compute the corresponding eigenvector:

$$(B - \lambda_i I)\,\vec{u}_i = 0$$

The eigenvectors form the columns of the matrix $U$:

$$U = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 & \vec{u}_3 \end{bmatrix} = \begin{bmatrix} u_{1x} & u_{2x} & u_{3x} \\ u_{1y} & u_{2y} & u_{3y} \\ u_{1z} & u_{2z} & u_{3z} \end{bmatrix}$$

The eigenvector belonging to the minimal eigenvalue is the normal of our working
plane.
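A minimal sketch of this plane fit in C++; the paper does not prescribe a linear-algebra library, so the use of Eigen here is an assumption:

```cpp
#include <Eigen/Dense>
#include <vector>

// Fit the plane closest (in the orthogonal-distance sense) to a set of 3-D
// points, as in section 2.2.3. Returns the unit normal; the plane passes
// through the centroid of the points. Sketch only: Eigen is assumed.
Eigen::Vector3d fitPlaneNormal(const std::vector<Eigen::Vector3d>& points,
                               Eigen::Vector3d& centroidOut) {
    // Stack the points into an n x 3 matrix A, one point per row.
    Eigen::MatrixXd A(points.size(), 3);
    for (int i = 0; i < (int)points.size(); ++i)
        A.row(i) = points[i].transpose();

    // Center on the centroid so the fitted plane passes through it.
    Eigen::RowVector3d centroid = A.colwise().mean();
    A.rowwise() -= centroid;
    centroidOut = centroid.transpose();

    // B = A^T * A is a symmetric positive semi-definite 3 x 3 matrix.
    Eigen::Matrix3d B = A.transpose() * A;

    // SelfAdjointEigenSolver returns eigenvalues in increasing order;
    // the eigenvector of the smallest one is the plane normal.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(B);
    return solver.eigenvectors().col(0);
}
```

Since the right singular vectors of A are exactly the eigenvectors of A^T A, an equivalent route is to run Eigen's JacobiSVD on A directly and take the right singular vector belonging to the smallest singular value.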
2.2.4. Projecting points onto the new plane
After we have found the closest plane to all the given points, we project each point onto
that plane to obtain a set of $n$ coplanar points. We use the canonical form of the plane
equation to compute the signed distance $d_i$ of each point $p_i = (x_i, y_i, z_i)$ from the plane:

$$d_i = \frac{A x_i + B y_i + C z_i + D}{\sqrt{A^2 + B^2 + C^2}}$$

Each point is then moved along the unit normal $\hat{n}$ onto the plane:

$$\vec{p}\,'_i = \vec{p}_i - d_i \hat{n}$$
2.2.5. Reducing dimensions
Now that we have moved all points into one plane, we want to reduce the number of
coordinates of each point from 3 to 2. Let $\{P_i(x_i, y_i, z_i)\}$ be that set of points, and let
$\pi\,(Ax + By + Cz + D = 0)$ be that plane. In general, the plane $\pi$ need not be $\pi_0\,(z = 0)$,
the XY plane, and therefore the $z$ component of the points $P_i$ need not vanish. We
therefore construct a Cartesian system on plane $\pi$ by choosing two perpendicular lines
on $\pi$, namely $L_X$ and $L_Y$. Let $L_X\,(Ax + By + D = 0)$ be the intersection line between $\pi$ and
$\pi_0$. As the origin we choose the point $O(0, -\frac{D}{B}, 0)$ on $L_X$. $L_Y$ will then be the line lying in
$\pi$ that passes through the origin $O$ and is perpendicular to $L_X$. The distances $x'_i$ and
$y'_i$ of every point $P_i$ from $L_Y$ and $L_X$ respectively will act as the coordinates of the
points for further analysis.
Considering Figure 4, we find the distance $d$ on $\pi_0$ between the projection of $P_i$ on $\pi_0$
and $L_X$, which by definition also lies on $\pi_0$. This distance $d$ and the $z$ coordinate of point
$P_i$ form a right-angle triangle whose hypotenuse is the distance of $P_i$ from $L_X$, and hence is $y'_i$.
Defining point $Q$ as the intersection of $y'_i$ and $L_X$, the distance from $Q$ to the origin $O$ is the
distance on $\pi$ from $P_i$ to $L_Y$, and hence is $x'_i$.
Figure 4 : Reducing the number of coordinates
Developing the math we get:

$$x'_i = \frac{B x_i - A\left(y_i + \frac{D}{B}\right)}{\sqrt{A^2 + B^2}}$$

$$y'_i = \sqrt{\frac{\left(A x_i + B y_i + D\right)^2}{A^2 + B^2} + z_i^2}$$
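A sketch of this coordinate computation in plain C++, applied to the coplanar points from section 2.2.4 (the Point3/Point2 structs are illustrative, and B is assumed non-zero, as the choice of origin O already requires):

```cpp
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };  // illustrative types, not from the SDK
struct Point2 { double x, y; };

// Map each projected, coplanar point to its (x', y') coordinates on the
// fitted plane Ax + By + Cz + D = 0, following sections 2.2.4 and 2.2.5.
std::vector<Point2> toPlaneCoords(const std::vector<Point3>& pts,
                                  double A, double B, double C, double D) {
    std::vector<Point2> out;
    const double normAB = std::sqrt(A * A + B * B);
    for (const Point3& p : pts) {
        // x' = signed distance along the intersection line L_X, measured
        // from the chosen origin O(0, -D/B, 0).
        double xPrime = (B * p.x - A * (p.y + D / B)) / normAB;
        // y' = hypotenuse of the in-plane distance d to L_X and the
        // z coordinate of the point.
        double d = (A * p.x + B * p.y + D) / normAB;
        double yPrime = std::sqrt(d * d + p.z * p.z);
        out.push_back({xPrime, yPrime});
    }
    return out;
}
```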
2.2.6. Building the image
Once a planar shape is obtained, we find a bounding rectangle for the points inside it.
Forming a matrix $M$ with the dimensions of that rectangle, we initialize the matrix
according to this formula:

$$M(x, y) = \begin{cases} 1, & (x, y) \text{ is a data point} \\ 0, & \text{otherwise} \end{cases}$$

yielding a matrix representing the image.
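A sketch of this rasterization step, reusing the Point2 struct from the previous sketch (the grid size is an assumed parameter; the paper does not fix a resolution):

```cpp
#include <algorithm>
#include <vector>

// Rasterize the planar gesture points into a binary image matrix M, as in
// section 2.2.6. Assumes a non-empty, non-degenerate gesture (the bounding
// rectangle has positive width and height).
std::vector<std::vector<int>> buildImage(const std::vector<Point2>& pts,
                                         int width, int height) {
    // Find the bounding rectangle of the points.
    double minX = pts[0].x, maxX = pts[0].x;
    double minY = pts[0].y, maxY = pts[0].y;
    for (const Point2& p : pts) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    // M(x, y) = 1 where a data point falls, 0 otherwise.
    std::vector<std::vector<int>> M(height, std::vector<int>(width, 0));
    for (const Point2& p : pts) {
        int col = (int)((p.x - minX) / (maxX - minX) * (width - 1));
        int row = (int)((p.y - minY) / (maxY - minY) * (height - 1));
        M[row][col] = 1;
    }
    return M;
}
```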
2.2.7. Image moments
In order to distinguish between patterns we compute image moments [9] [10]. Image
moments, each being a real number, are various weighted averages of the image pixel
intensities, representing increasing detail of the pixel distribution, such as centroid, area,
and orientation. Central moments are invariant to translation, and some combinations of
them are invariant to rotation as well; we limit our attention to those.
We have used the following mapping function:

$$f(x, y) = \begin{cases} 0, & \text{if the pixel is white} \\ 1, & \text{if the pixel is black} \end{cases}$$

We first calculate the raw moments:

$$M_{pq} = \sum_{x} \sum_{y} x^p\, y^q\, f(x, y)$$

and the central moments:

$$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x, y), \qquad \text{where } \bar{x} = \frac{M_{10}}{M_{00}} \text{ and } \bar{y} = \frac{M_{01}}{M_{00}}$$

We have chosen to use 9 moments for now, but this number may increase if it turns out
that good recognition needs more. To make the moments scale invariant we apply the
normalization:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\,1 + \frac{p+q}{2}}}$$

We also want our moments to be invariant under rotation. The best-known such set is
Hu's set of invariant moments [11], also known as the 7 moments of Hu:
$$h_1 = \eta_{20} + \eta_{02}$$
$$h_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$$
$$h_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$h_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$h_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$$
$$h_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$h_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$$
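A sketch of the moment pipeline over the binary matrix from section 2.2.6 (plain C++; only the first two Hu invariants are spelled out here, the remaining five follow the same pattern from the normalized moments):

```cpp
#include <cmath>
#include <vector>

// Raw moment M_pq of a binary image, section 2.2.7.
double rawMoment(const std::vector<std::vector<int>>& img, int p, int q) {
    double m = 0.0;
    for (size_t y = 0; y < img.size(); ++y)
        for (size_t x = 0; x < img[y].size(); ++x)
            m += std::pow((double)x, p) * std::pow((double)y, q) * img[y][x];
    return m;
}

// Central moment mu_pq, computed about the centroid (xBar, yBar).
double centralMoment(const std::vector<std::vector<int>>& img, int p, int q) {
    double m00 = rawMoment(img, 0, 0);
    double xBar = rawMoment(img, 1, 0) / m00;
    double yBar = rawMoment(img, 0, 1) / m00;
    double mu = 0.0;
    for (size_t y = 0; y < img.size(); ++y)
        for (size_t x = 0; x < img[y].size(); ++x)
            mu += std::pow(x - xBar, p) * std::pow(y - yBar, q) * img[y][x];
    return mu;
}

// Scale-invariant moment eta_pq = mu_pq / mu_00^(1 + (p+q)/2).
double eta(const std::vector<std::vector<int>>& img, int p, int q) {
    double mu00 = centralMoment(img, 0, 0);
    return centralMoment(img, p, q) / std::pow(mu00, 1.0 + (p + q) / 2.0);
}

// First two Hu invariants; h3..h7 are built the same way from eta_pq.
double hu1(const std::vector<std::vector<int>>& img) {
    return eta(img, 2, 0) + eta(img, 0, 2);
}
double hu2(const std::vector<std::vector<int>>& img) {
    double d = eta(img, 2, 0) - eta(img, 0, 2);
    double e11 = eta(img, 1, 1);
    return d * d + 4.0 * e11 * e11;
}
```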
The 9 moments of a gesture are taken as the coordinates of the gesture in an abstract 9-
dimensional space. It is assumed that the representations of similar gestures will
congregate into "clouds" whose extent, in the Euclidean metric, is small compared to the
distance between "cloud" centroids. A centroid is calculated and saved for each "cloud",
and with each insertion of new moments the centroid is updated; the use of this data is
described in the next section.
2.2.8. Minimum distance algorithm
The minimum distance algorithm is a basic method for comparing our ongoing gesture
with the "clouds" of recorded gestures. The idea is to calculate the distance between the
ongoing gesture's centroid and each stored centroid; with high probability, the "cloud"
with the minimum distance to the ongoing gesture represents the same gesture.
We have considered two distance functions. Let $p, c$ be points in the $n$-dimensional
feature space:

$$\text{Euclidean}: \quad \sqrt{\sum_{i=1}^{n} (p_i - c_i)^2}$$

$$\text{Manhattan}: \quad \sum_{i=1}^{n} |p_i - c_i|$$

We assume that the number of recorded gestures is finite and fairly low, so we can afford
to compute the distance to every centroid. Since the number of stored centroids is finite,
the complexity of this process is low: $O(n)$.
The algorithm:
Let $C = c_1, \ldots, c_n$ be the set of centroids of gestures $\{g_i\}$ in feature space, and let $p$ be
the point representing a new gesture to be recognized in feature space.
• Compute the distances $d(p, c_i)$.
• Find the minimum distance.
• Identify the new gesture as the gesture $i$ with the minimal distance.
• Return its id.
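A sketch of this nearest-centroid classification in plain C++ (the Gesture struct and the 9-element vectors are illustrative, not the SDK's actual types):

```cpp
#include <limits>
#include <vector>

// A stored gesture: an id plus the centroid of its "cloud" in the
// 9-dimensional moment space.
struct Gesture {
    int id;
    std::vector<double> centroid;  // 9 moment values
};

// Squared Euclidean distance; the square root is monotone, so it can be
// skipped when only the minimum matters.
double distSq(const std::vector<double>& p, const std::vector<double>& c) {
    double s = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        double d = p[i] - c[i];
        s += d * d;
    }
    return s;
}

// Return the id of the recorded gesture whose centroid is closest to the
// moment vector p of the ongoing gesture (section 2.2.8), or -1 if none.
int recognize(const std::vector<double>& p, const std::vector<Gesture>& db) {
    int bestId = -1;
    double best = std::numeric_limits<double>::max();
    for (const Gesture& g : db) {
        double d = distSq(p, g.centroid);
        if (d < best) { best = d; bestId = g.id; }
    }
    return bestId;
}
```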
2.3. Expected results
The final product could be integrated into any device that currently requires hand
contact, especially for counting procedures in the operating room. This would reduce
the number of retained sponge cases.
Our SDK will give developers around the world the opportunity to implement
applications based on custom-made gestures.
3. PRELIMINARY SOFTWARE ENGINEERING DOCUMENTATION
3.1. Requirements (Use Cases)
3.2. GUI
In this section we would like to introduce our UI prototype for the project. This prototype
demonstrates the usability of the SDK and of the medical application.
3.2.1. SDK expansion
We have divided the toolkit into four steps that give the user all the functionality needed
to create a custom-made gesture. Each step's screen has its own purpose, and a wizard
gives the user information about the current step.
Step 1 (Figure 5): In this step the user records the custom-made gesture he or she
would like to create.
Step 2 (Figure 6): In this step the user trains the computer to recognize the new gesture
by recording additional examples, preferably by different people.
Step 3 (Figure 7): In this step the user tests the system by presenting the gesture to see
if it is recognized.
Step 4: In this step the user gets information about the new gesture he or she has just
created.
Each screen has the following components (see figures below):
A) Wizard - a UI component which presents the user with a sequence of screens that
lead him through a series of well-defined steps.
B) Gesture grid panel - a real-time panel that tracks the movement of the user's finger
and displays that movement.
C) Coordinates table - a table filled with the coordinates of each finger position detected
in the current frame.
D) Operation buttons - the four control buttons that can be clicked to perform an action.
E) Status bar - a status line that gives information about the Leap controller connection
and about the action being performed.
F) Help button - a control button that provides help to the user.
G) Match rate bar - a bar that presents the likelihood that the gesture the user is making
matches the custom-made gesture saved in the record step (refers to steps 2 and 3).
Figure 5 : Record Step
Figure 6 : Train Step
Figure 7 : Exercise Step
3.2.2. Items counting application
We designed our medical application to be simple and intuitive for the surgeons who will
eventually use it. We focused on the episiotomy procedure kit since it demonstrates the
usage of our system well.
As we can see in Figure 8 below, there are 4 components:
A) Item panel - this panel includes three elements: a gesture icon, an item image and a
counter.
B) Undo panel - this panel shows the gesture icon the surgeon needs to perform in
order to undo the last gesture action.
C) More panel - this panel shows the gesture icon the surgeon needs to perform in
order to add a different item to the surgery.
D) Status bar - a status line that gives information about the Leap controller connection
and about the action being performed.
Using these components, the surgeon makes the gesture matching the item that was
brought into the surgical field; the counter belonging to that gesture is incremented, and
feedback is shown on the screen.
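A sketch of the counting logic behind this screen (plain C++; the ItemCounter type and the gesture-id mapping are illustrative assumptions, not the application's actual design):

```cpp
#include <map>
#include <string>

// Minimal model of the counting screen: each recognized gesture id maps
// to a surgical item whose counter is incremented. Illustrative only.
struct ItemCounter {
    std::string name;  // e.g. "pad 4x4"
    int count;
};

class CountingScreen {
public:
    void registerItem(int gestureId, const std::string& name) {
        items_[gestureId] = {name, 0};
    }
    // Called when the recognizer (section 2.2.8) reports a gesture id.
    void onGesture(int gestureId) {
        auto it = items_.find(gestureId);
        if (it != items_.end()) {
            ++it->second.count;
            lastGesture_ = gestureId;  // remembered so undo can revert it
        }
    }
    // The undo gesture decrements the counter changed by the last gesture.
    void onUndo() {
        auto it = items_.find(lastGesture_);
        if (it != items_.end() && it->second.count > 0)
            --it->second.count;
    }
private:
    std::map<int, ItemCounter> items_;
    int lastGesture_ = -1;
};
```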
3.3. Program structure - Architecture, Design (UML diagrams)
3.3.1. Software architecture
The SDK itself will be divided into two main parts. The core of the program, containing
all the logic, is going to be a C++ based application. The GUI is going to be built with Qt
or WPF/WinForms and will communicate with the C++ program, which will raise an
event for each gesture that has been made. The medical application itself is going to be
written in WPF and will communicate with the SDK.
The following API is an initial prototype of the interface of our SDK:
Init() - this function initializes the settings of the device.
SetupListener() - this function connects to the infrastructure of the system.
CallBackFunc() - this interface stands in for the user, to be passed as a delegate
function to the device.
OnClose() - operations to be performed when the application is closed.
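A minimal sketch of how this API could look as a C++ header; the paper only names the four functions, so all signatures here are assumptions:

```cpp
// gesture_sdk.h - hypothetical shape of the SDK interface named above.
#pragma once
#include <functional>
#include <string>

namespace gesture_sdk {

// Invoked by the SDK whenever a recorded gesture is recognized; acts as
// the delegate function handed to the device (CallBackFunc).
using CallBackFunc = std::function<void(const std::string& gestureId)>;

// Initialize the settings of the device.
bool Init();

// Connect the given callback to the infrastructure of the system.
void SetupListener(CallBackFunc onGesture);

// Operations to be performed when the application is closed.
void OnClose();

}  // namespace gesture_sdk
```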
Figure 8 : Instruments counting application
3.4. Testing plan
3.4.1. Testing plan for the SDK

Test: Record input data
Scenario: We record input from the device.
Expected result: A file with the recorded data appears in the DB.

Test: Call SVD function over an input data file
Scenario: The SVD function gets the raw data from the file.
Expected result: The function returns the normal of the closest plane.

Test: Call image moments function
Scenario: Image moments are extracted from the bitmap matrix.
Expected result: All moments are recorded to the DB.

Test: Record and test 3 different gestures, repeated with different people
Scenario: We record 3 different gestures and test them.
Expected result: The system distinguishes between the 3 different gestures.

3.4.2. Testing plan for the medical application

Test: Open application
Scenario: The user starts the application.
Expected result: The application opens on the main screen with no errors.

Test: Imitate a gesture shown on the screen
Scenario: The user follows the gesture shape.
Expected result: The counter of the instrument is increased and the shape is highlighted.

Test: Undo an action
Scenario: The user imitates the undo gesture.
Expected result: The last counter is decreased.

Test: Open surgery log
Scenario: The user presses the log button.
Expected result: A log of the surgeon's actions is shown, e.g. "you made X, pad 4x4 was incremented by 1".

Test: Open surgery inventory report
Scenario: The user clicks the inventory report button.
Expected result: A report of all the required equipment is shown, e.g. "two pairs of scalpels".
REFERENCES
[1] Leap Motion Inc, "Leap Motion Specs," 2013. [Online]. Available: https://www.leapmotion.com/product.
[2] "Kinect," Wikipedia. [Online]. Available: http://en.wikipedia.org/wiki/Kinect.
[3] Intel Corp, "Intel Developer Guide for Perceptual Computing." [Online]. Available: http://software.intel.com/en-us/vcsource/tools/perceptual-computing-sdk.
[4] A. K. Al-Ghamdi, S. M. A. Abdelmalek, A. M. Ashshi, H. Faidah, H. Shukri and A. A. Jiman-Fatani, "Bacterial contamination of computer keyboards and mice, elevators and shopping carts," African Journal of Microbiology Research, no. 5(23), 2011.
[5] ABC News Medical Unit, "Your Keyboard: Dirtier Than a Toilet," 5 May 2008. [Online]. Available: http://abcnews.go.com/Health/Germs/story?id=4774746.
[6] A. A. Gawande et al., "Risk Factors for Retained Instruments and Sponges after Surgery," The New England Journal of Medicine, 2003.
[7] C. W. Kaiser et al., "The Retained Surgical Sponge," Annals of Surgery, vol. 224.
[8] "Singular value decomposition," Wikipedia. [Online]. Available: http://en.wikipedia.org/wiki/Singular_value_decomposition.
[9] J. Flusser, "On the independence of rotation moment invariants," Pattern Recognition, no. 33, 1999.
[10] J. Flusser and T. Suk, "Rotation Moment Invariants for Recognition of Symmetric Objects," IEEE Transactions on Image Processing, vol. 15, 2006.
[11] Z. Huang and J. Leng, "Analysis of Hu's Moment Invariants on Image Scaling and Rotation," ECU Publications, 2011.