The document describes a MATLAB program called MOTION that was created to analyze human gait patterns. The program imports limb-length data and the measured wrist and ankle positions, and outputs the joint angles that reproduce those positions using inverse kinematic algorithms; it plots the predicted and measured joint angles over time on a graph. The program provides a simplified way to study gait on a single platform. Future applications could include distinguishing gait abnormalities, identifying individuals, and enhancing athletic performance.
This document discusses gait analysis and a MATLAB program called MOTION for analyzing gait data. It provides background on gait studies, describes the goals and capabilities of MOTION, and shows initial results applying MOTION to analyze VICON gait data using inverse kinematics algorithms. Applications of gait analysis include identifying IEDs, diagnosing medical conditions, and enhancing sports performance.
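Per frame, this kind of gait reconstruction typically reduces to planar two-link inverse kinematics for each limb. The sketch below is a generic illustration of that step, not MOTION's actual code; the thigh and shank lengths are made-up values.

```python
import math

def leg_ik(ax, ay, thigh=0.45, shank=0.43):
    """Planar two-link inverse kinematics: given the ankle position
    (ax, ay) in metres relative to the hip, return (hip, knee) angles
    in radians. knee = 0 means a fully straightened leg."""
    def clamp(c):
        # Guard acos against tiny floating-point overshoot.
        return max(-1.0, min(1.0, c))
    d2 = ax * ax + ay * ay
    d = math.sqrt(d2)
    # Law of cosines at the knee joint.
    knee = math.pi - math.acos(clamp((thigh**2 + shank**2 - d2) / (2 * thigh * shank)))
    # Hip angle: direction to the ankle, offset by the thigh's share.
    alpha = math.atan2(ay, ax)
    beta = math.acos(clamp((thigh**2 + d2 - shank**2) / (2 * thigh * d)))
    hip = alpha + beta
    return hip, knee

# Straight leg hanging directly below the hip (ankle at full reach).
hip, knee = leg_ik(0.0, -0.88)
```

The same law-of-cosines step applies to the arm, with upper-arm and forearm lengths in place of thigh and shank.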
Trajectory reconstruction for robot programming by demonstration (IJECEIAES)
Reproducing hand movements with a robot remains difficult: conventional learning methods cannot faithfully recreate these movements when the number of via points is very large. Programming by Demonstration offers a better way to solve this problem, by tracking the user's movements with a motion capture system and creating a robot program that reproduces the performed tasks. This paper presents a trajectory-level Programming by Demonstration system for reproducing hand/tool movement with a manipulator robot, realized by tracking the user's movement with the ARToolKit and reconstructing the trajectories using a constrained cubic spline. The results obtained with the constrained cubic spline were compared with cubic spline interpolation. Finally, the obtained trajectories were simulated in a virtual environment on a Puma 600 robot.
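The overshoot-avoidance idea behind a constrained cubic spline can be sketched with piecewise Hermite segments whose interior slopes are the harmonic mean of the adjacent secant slopes, zeroed wherever the data changes direction. The one-sided secant end conditions below are a simplification of this sketch, not necessarily the paper's formulation.

```python
def hermite(x0, x1, y0, y1, m0, m1, x):
    """Evaluate the cubic Hermite segment through (x0, y0)-(x1, y1)
    with end slopes m0, m1 at point x."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # standard Hermite basis
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y0 + h10 * h * m0 + h01 * y1 + h11 * h * m1

def constrained_slopes(xs, ys):
    """Interior slopes in the spirit of a constrained cubic spline:
    harmonic mean of adjacent secants, forced to zero where the data
    changes direction, which suppresses overshoot."""
    sec = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    m = [sec[0]]                       # simple one-sided end condition
    for i in range(1, len(xs) - 1):
        a, b = sec[i - 1], sec[i]
        m.append(0.0 if a * b <= 0 else 2.0 / (1.0 / a + 1.0 / b))
    m.append(sec[-1])
    return m

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 1.0, 2.0]              # flat stretch: overshoot-prone
m = constrained_slopes(xs, ys)
mid = hermite(1.0, 2.0, 1.0, 1.0, m[1], m[2], 1.5)
```

An unconstrained natural cubic spline would bulge above 1.0 over the flat segment; here both end slopes are forced to zero, so the segment stays flat.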
Turkish Sign Language Recognition Using Hidden Markov Model (csandit)
In past years, much research has been done to provide more accurate and comfortable interaction between humans and machines. Developing a system that recognizes human gestures is an important step toward improving that interaction.
Sign language is a means of communication for hearing-impaired people that enables them to communicate among themselves and with the people around them. Sign language consists of hand gestures and facial expressions. Over the past 20 years, research has sought to facilitate communication between hearing-impaired people and others.
Sign language recognition systems have been designed in various countries. This paper presents a sign language recognition system that uses a Kinect camera to obtain a skeletal model. Our aim was to recognize expressions that are widely used in Turkish Sign Language (TSL). For that purpose we randomly selected 15 words/expressions from Turkish Sign Language, each repeated 4 times by 3 different signers, for 180 recordings in total. Videos were recorded using a Microsoft Kinect camera and Nui Capture. Joint angles and joint positions were used as gesture features, achieving recognition rates close to 100%.
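The joint-angle features mentioned above are straightforward to derive from a skeletal model: the angle at a joint is the angle between the two bone vectors meeting there. A minimal sketch (generic geometry, not the paper's code):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist from a Kinect skeleton."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp to guard acos against floating-point overshoot.
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(cosang)

# Right-angled elbow: upper arm along +y, forearm along +x.
ang = joint_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

A sequence of such angles per frame forms the observation stream the hidden Markov model is trained on.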
This document discusses converting gait data collected using a motion capture system into files that can be analyzed using OpenSim software. The process involves collecting marker position data during walking trials, converting the files to .c3d and .trc formats, scaling a musculoskeletal model to match the participant, using inverse kinematics to calculate joint angles, inverse dynamics to calculate joint forces and moments, and static optimization to estimate muscle forces. The goal is to analyze muscle and joint loading during walking.
This document summarizes a research article that proposes a method for implementing a 3D pedometer using a three-axis accelerometer and microcontroller. The pedometer can automatically identify walking and running motions and calculate step counts without needing to be worn at the waist. It analyzes the acceleration signals from walking and running to distinguish the motions and accumulate step counts accordingly. The article describes the hardware system, signal analysis methods used to smooth the acceleration data, and an algorithm to determine motion state and step counts based on the 3D acceleration values. Experimental results demonstrate the pedometer's ability to accurately count steps during various motions while being worn in different positions.
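The core of such a pedometer, detecting step impacts in the 3-axis acceleration magnitude regardless of device orientation, can be sketched as threshold-crossing peak detection with a refractory gap. The threshold and gap values below are illustrative assumptions, not the article's tuned parameters.

```python
import math

def count_steps(acc, thresh=10.8, min_gap=5):
    """Count steps as peaks in the 3-axis acceleration magnitude
    (m/s^2). `thresh` sits just above 1 g so resting never triggers;
    `min_gap` samples enforce a refractory period between steps."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc]
    steps, last = 0, -min_gap
    for i in range(1, len(mags) - 1):
        is_peak = (mags[i] > thresh
                   and mags[i] >= mags[i - 1]
                   and mags[i] > mags[i + 1])
        if is_peak and i - last >= min_gap:
            steps += 1
            last = i
    return steps

# Synthetic signal: rest at 1 g with three spaced impact spikes.
g = 9.81
sig = [(0.0, 0.0, g)] * 40
for k in (8, 20, 32):
    sig[k] = (0.0, 0.0, g + 4.0)
steps = count_steps(sig)
```

Using the magnitude rather than a single axis is what frees the device from a fixed waist position; distinguishing walking from running would additionally compare peak height and cadence.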
An IoT Based Smart Manifold Attendance System (IJERDJOURNAL)
ABSTRACT: Attendance-taking is an age-old procedure in every discipline of an educational institution. While attendance systems have evolved from manual techniques to biometrics, the burden of taking attendance remains. In fingerprint-based attendance monitoring, rough or scratched fingers lead to misreadings; with face recognition, students must queue and each must wait until their face is recognised. Our proposed system employs "manifold attendance", that is, passive attendance, in which the attendance of multiple people is captured at once. We eliminate both the queue and the pen-and-paper system: with a single click, attendance is not only captured but also monitored, without any human intervention. In the proposed system, database creation and face detection use bounding boxes, while face recognition employs histogram equalization and a matching technique.
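Histogram equalization, the preprocessing step named above, spreads a low-contrast face crop across the full intensity range before matching. A minimal pure-Python version for a grayscale image (production code would use a library routine such as OpenCV's `equalizeHist`):

```python
def equalize(img, levels=256):
    """Histogram equalization of a grayscale image given as a list
    of rows of ints in [0, levels)."""
    hist = [0] * levels
    for row in img:
        for p in row:
            hist[p] += 1
    total = sum(hist)
    # Cumulative distribution, then map each level through it.
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        return round((cdf[p] - cdf_min) / (total - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]

# A low-contrast 2x2 patch spreads across the full range.
out = equalize([[100, 100], [101, 102]])
```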
HUMAN BODY DETECTION AND SAFETY CARE SYSTEM FOR A FLYING ROBOT (csandit)
Image processing is one of the challenging issues in robotics and electrical engineering research. This study proposes a system for extracting and tracking objects, in particular the human body, from a quadcopter flying robot. In images taken from a real-time camera mounted on the underside of the quadcopter, the tracked person varies in position and size. The paper therefore investigates an image-processing method for tracking human bodies under these conditions. An extraction method is proposed that defines features to distinguish a human body: it creates a virtual body shape for recognition and generates an extractor from the shape's edge information. Experimentally, the method shows good performance in terms of both precision and speed.
3D Human Hand Posture Reconstruction Using a Single 2D Image (Waqas Tariq)
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity of inverse kinematics and 3D reconstruction. In this paper we focus on 3D hand posture estimation from a single 2D image, with robotic applications in mind. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without significant degradation of performance. A novel algorithm is proposed to estimate the 3D hand posture from eight 2D projected feature points. Experimental results on real images confirm that the algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; model-based approach; gesture recognition; human-computer interface; machine vision.
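One coupling constraint often cited in the hand-modelling literature, and plausibly among those such DOF reductions exploit, though the paper's exact constraints are not reproduced here, ties distal-joint flexion to the proximal joint of the same finger:

```python
def finger_joint_angles(mcp, pip):
    """Given measured MCP and PIP flexion (radians), predict DIP
    flexion via the commonly cited coupling DIP = (2/3) * PIP.
    Each such coupling removes one DOF from the 27-DOF hand model."""
    dip = (2.0 / 3.0) * pip
    return {"MCP": mcp, "PIP": pip, "DIP": dip}

angles = finger_joint_angles(0.3, 0.9)
```

Applying this to four fingers already drops the model from 27 to 23 DOFs, which is what makes single-image estimation tractable.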
Feasibility Study for evaluating the effectiveness of Orient inertial 3D motion capture wireless devices (developed by the Speckled Computing research group at the University of Edinburgh) for human gait analysis and for identifying deviations from normal gait
Analysis of Inertial Sensor Data Using Trajectory Recognition Algorithm (ijcisjournal)
This paper describes an IMU-based digital pen for gesture and handwritten-digit trajectory recognition, enabling human-PC interaction. Handwriting recognition is mainly used in security and authentication applications. With the embedded pen, the user can make a hand gesture or write a digit or an alphabetic character. The pen contains an inertial sensor, a microcontroller, and a Zigbee wireless transmitter module for capturing handwriting and gesture trajectories. The proposed trajectory-recognition algorithm comprises signal acquisition, preprocessing, feature generation, feature extraction, and classification. Hand motion is measured with the sensor and the readings are transmitted wirelessly to a PC for recognition. The process first extracts time-domain and frequency-domain features from the preprocessed signal, then applies linear discriminant analysis to represent the features in a reduced dimension. The dimensionally reduced features are processed with two classifiers, Support Vector Machine (SVM) and k-Nearest Neighbour (kNN); with the SVM classifier the algorithm achieves a recognition rate of 98.5%, and with kNN 95.5%.
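The final classification stage is easy to picture with a minimal kNN over toy 2-D vectors standing in for the LDA-reduced features (in practice a library such as scikit-learn would supply both classifiers; the feature values and labels below are invented):

```python
import math

def knn_predict(train_feats, train_labels, query, k=3):
    """k-nearest-neighbour classification of a feature vector:
    majority vote among the k training points closest to the query."""
    dists = sorted(
        (math.dist(f, query), lab) for f, lab in zip(train_feats, train_labels)
    )
    votes = {}
    for _, lab in dists[:k]:
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)

# Toy 2-D features standing in for the LDA-reduced vectors.
feats = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8), (0.2, 0.1)]
labels = ["circle", "circle", "digit8", "digit8", "circle"]
pred = knn_predict(feats, labels, (0.05, 0.05))
```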
Draft activity recognition from accelerometer data (Raghu Palakodety)
This document describes a framework for classifying human activities like standing, walking, and running using data from an accelerometer sensor on a smartphone. It discusses collecting raw sensor data, preprocessing the data through smoothing and feature extraction, training classifiers on extracted features, and classifying new data in real-time. Random forest classification achieved 83.49% accuracy on this activity recognition task using accelerometer data from an Android application.
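The smoothing-plus-feature-extraction step described above usually means slicing the accelerometer stream into fixed windows and summarizing each one. A minimal sketch with (mean, standard deviation) features; the window length is an arbitrary choice for illustration:

```python
import statistics

def window_features(signal, win=5, step=5):
    """Slice a 1-D accelerometer stream into fixed windows and emit
    the (mean, stdev) feature pairs a classifier such as a random
    forest would be trained on."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append((statistics.mean(w), statistics.pstdev(w)))
    return feats

# Ten samples -> two non-overlapping windows of five:
# a still stretch followed by a jittery "walking" stretch.
feats = window_features([1, 1, 1, 1, 1, 4, 6, 4, 6, 5])
```

The variance contrast between windows is exactly what separates standing from walking or running in such pipelines.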
IRJET- Recognition of Theft by Gestures using Kinect Sensor in Machine Le... (IRJET Journal)
This document discusses a system that uses a Kinect sensor to recognize theft gestures using machine learning. The system tracks a person's skeleton and compares their gestures to a dictionary of known theft and normal gestures. If a match for a theft gesture is found, an alarm and SMS notification are generated. The system was implemented using Processing and a logistic regression machine learning algorithm to classify poses as abnormal or normal based on joint angle features extracted from Kinect skeleton data. The system aims to automatically detect theft in environments like banks and stores to improve security.
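Scoring a pose's joint-angle features with logistic regression looks roughly like the sketch below. The weights, bias, and feature meanings are purely illustrative assumptions, not the paper's trained model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pose_is_abnormal(features, weights, bias, thresh=0.5):
    """Score a pose's joint-angle feature vector with a trained
    logistic-regression model; scores above `thresh` are flagged
    as abnormal (theft-like) gestures."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z) > thresh

# Hypothetical model: large elbow flexion plus torso lean -> abnormal.
w, b = [3.0, 2.5], -4.0
alarm = pose_is_abnormal([1.4, 0.9], w, b)   # crouched, reaching pose
calm = pose_is_abnormal([0.2, 0.1], w, b)    # upright standing pose
```

In the described system, a `True` score would trigger the alarm and SMS notification paths.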
IRJET-V9I114.pdf: A Review Paper on Economical Bionic Arm with Predefined Grasp... (IRJET Journal)
The document is a review paper on developing an economical bionic arm. It discusses using technologies like 3D printing, EMG sensors, machine learning algorithms, and computer vision to improve the functionality of prosthetic arms while reducing costs. It summarizes several previous studies that used these technologies, including arms controlled via EMG signals and machine learning classifiers or using computer vision for object detection and control. The goal is to develop a low-cost prosthetic arm that provides multifunctional movement and feels natural to the user.
Human Activity Recognition Using Smartphone (IRJET Journal)
The document discusses human activity recognition using smartphone sensors. It proposes using a CNN-LSTM model to classify activities like walking, running, and sitting based on accelerometer and gyroscope sensor data from a smartphone. The CNN extracts features from the sensor data, while the LSTM recognizes sequences of activities over time. The model is implemented in an Android application that recognizes activities in real-time and also counts steps, distance, and calories burned. The application uses built-in smartphone sensors like accelerometer, gyroscope, and pedometer to recognize activities affordably and with high availability without external devices. The CNN-LSTM model achieves accurate activity recognition compared to other machine learning techniques.
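Framework aside, the data plumbing a CNN-LSTM needs, fixed-length overlapping windows of 6-channel (accelerometer plus gyroscope) samples, can be sketched without any deep-learning dependency. Window length and stride here are arbitrary illustration values.

```python
def make_windows(samples, timesteps=4, stride=2):
    """Arrange a stream of 6-channel samples (ax, ay, az, gx, gy, gz)
    into overlapping windows shaped [window][timestep][channel], the
    input layout a CNN-LSTM activity model consumes: the CNN filters
    run over timesteps within a window, the LSTM over windows."""
    return [
        samples[i:i + timesteps]
        for i in range(0, len(samples) - timesteps + 1, stride)
    ]

stream = [[float(t)] * 6 for t in range(10)]   # 10 fake 6-axis samples
wins = make_windows(stream)
```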
Digital Pen for Handwritten Digit and Gesture Recognition Using Trajectory Re... (IOSR Journals)
The document describes a digital pen system for handwritten digit and gesture recognition using a trajectory recognition algorithm. The system uses a tri-axial accelerometer, ARM processor, and Zigbee module in a pen-like device to capture acceleration signals from hand motions. The signals are transmitted wirelessly and a trajectory recognition algorithm processes the data through steps of acquisition, preprocessing, feature generation/selection, and extraction to recognize digits and gestures written in air. The system aims to allow for flexible use without limitations of range, environment, or surface that other methods impose. It provides a portable and generalized approach to human-computer interaction through writing and gestures.
Control Buggy using Leap Sensor Camera in Data Mining Domain (IRJET Journal)
This document summarizes a research paper that proposes controlling a buggy using hand gestures detected by a Leap Motion sensor camera. The system extracts 6 points from the detected gesture, calculates 15 features from those points, and compares the features to stored gesture data using similarity algorithms to identify the gesture and command the buggy accordingly. This reduces human effort in driving compared to manual control. The system was designed to recognize gestures for basic buggy movements like forward, backward, left, and right.
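The summary does not spell out the 15 features, but 15 is exactly the number of unordered pairs of 6 points (C(6,2) = 15), so pairwise distances are a natural guess; the sketch below assumes that, with cosine similarity as one of the similarity measures mentioned.

```python
import math
from itertools import combinations

def gesture_features(points):
    """15 features from 6 tracked hand points: one distance per
    unordered pair (C(6,2) = 15). An assumed feature set; the paper's
    exact features are not specified in the summary."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
feats = gesture_features(pts)
same = cosine_similarity(feats, feats)
```

Pairwise distances have the nice property of being invariant to hand translation and rotation, which suits template matching against stored gestures.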
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb (IRJET Journal)
This document describes a research project that aims to help deaf and dumb people communicate more easily. It presents a system combining hand gesture recognition and voice conversion. The system uses a webcam to detect hand gestures, then converts the gestures to text via image processing and matching against a database of gestures and texts; the text is in turn converted to voice. The document reviews previous work on sign language recognition systems and discusses the proposed system's image-processing and matching techniques, including feature extraction using principal component analysis and classification using k-nearest neighbors. The goal is to reduce communication barriers for deaf and dumb people.
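The PCA step named above projects each gesture image onto its directions of greatest variance before nearest-neighbour matching. A dependency-free sketch of extracting the leading component by power iteration (a stand-in for a full PCA; libraries would compute all components via SVD):

```python
import math

def first_component(data, iters=100):
    """Leading principal component of mean-centred data via power
    iteration: repeatedly apply X^T X to a vector and renormalise."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, without forming the covariance explicitly.
        xv = [sum(x[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(x[i][j] * xv[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Points spread mainly along y = x: the component aligns with it.
comp = first_component([[0, 0], [1, 1], [2, 2], [3, 3.1]])
```

Projecting each gesture onto a handful of such components gives the compact feature vector that k-nearest neighbors then classifies.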
IRJET- Survey Paper on Vision based Hand Gesture Recognition (IRJET Journal)
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A... (IAEME Publication)
In this paper our main aim is to design and develop control software for the detection and alignment of stairs by a stair-climbing, manually operated robot. The robot platform is a differential drive with a skid-steering system, mounted on a rugged chassis. Vision sensors (cameras) mounted on the robot provide motion images of the robot's surroundings. The application software applies image-processing and artificial-intelligence techniques to detect stairs in real time and align the robot at an appropriate distance from the stair. Canny edge detection is used to find the edges of the stairs, after smoothing the image and removing noise. A neural network is used to detect stairs and faults, and machine learning lets the robot overcome faults in stairs by acting on saved experience. This will be a Linux-based application with OpenCV API support.
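Stairs appear in an image as a stack of near-horizontal edges. The sketch below is a crude, hypothetical stand-in for the Canny-plus-alignment stage: it flags image rows where the vertical intensity gradient is strong across most of the row. The thresholds are invented; real code would use OpenCV's `Canny` and Hough-line grouping.

```python
def stair_rows(img, grad_thresh=40, row_frac=0.6):
    """Find candidate stair edges as rows of a grayscale image
    (list of rows of ints) where the vertical gradient exceeds
    `grad_thresh` for at least `row_frac` of the pixels."""
    h, w = len(img), len(img[0])
    rows = []
    for y in range(1, h):
        strong = sum(
            1 for x in range(w) if abs(img[y][x] - img[y - 1][x]) > grad_thresh
        )
        if strong / w >= row_frac:
            rows.append(y)
    return rows

# Synthetic 6x4 image with brightness jumps at rows 2 and 4.
img = [[10] * 4, [10] * 4, [200] * 4, [200] * 4, [90] * 4, [90] * 4]
edges = stair_rows(img)
```

The spacing between consecutive detected rows is then what the alignment logic uses to estimate distance to the stair.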
Human activity recognition using machine learning with data analysis (Venkat Projects)
Human activity recognition, or HAR for short, is a broad field of study concerned with identifying the specific movement or action of a person based on sensor data.
The sensor data may be remotely recorded, such as video, radar, or other wireless methods. The dataset contains accelerometer, gyroscope, and other smartphone sensor data used to train supervised predictive models with machine learning techniques such as SVM, random forest, and decision trees. The resulting model predicts the kind of movement being carried out by the person, divided into six categories: walking, walking upstairs, walking downstairs, sitting, standing, and lying.
MLM and SVM achieved accuracy of more than 99.2% on the original data set and 98.1% using the new feature-selection method. The results show that the proposed feature-selection approach is a promising alternative for activity recognition on smartphones.
Research.Essay_Chien-Chih_Weng_v3_by Prof. Karkoub (Chien-Chih Weng)
1. Dr. Weng's research focuses on developing control algorithms for nonlinear systems with uncertainties using techniques like fuzzy logic, neural networks, and sliding mode control.
2. His past work includes developing an adaptive fuzzy sliding mode controller for robotic systems and a tracking controller for parallel manipulators.
3. His current research further develops control methods for complex nonlinear systems like robotic systems, drilling systems, and trains of self-balancing vehicles.
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET Journal
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
DEVELOPMENT OF CONTROL SOFTWARE FOR STAIR DETECTION IN A MOBILE ROBOT USING A...IAEME Publication
In this paper our main aim is to design and develop the control software for the detection and alignment of stairs by a stair climbing and manually operated robot. The robot platform is a differential drive, with skid steering system. The system is mounted on a rugged chassis. Vision sensors are mounted on the robot. These are cameras which will provide motion images of the robot’s surroundings. The application software will apply image processing and artificial intelligence techniques to detect stairs at Real-time and align the robot at an appropriate distance from the stair. Use of canny edge detection method to detect the edges of the stairs, smoothen the image and removing noise from the image. Neural networking will be used to detect stairs and faults. Machine learning technology to overcome faults in stairs and act accordingly from the saved experiences. This will be Linux based application which will have support of OpenCV API.
human activity recognization using machine learning with data analysisVenkat Projects
Human activity recognition, or HAR for short, is a broad field of study concerned with identifying the specific movement or action of a person based on sensor data.
The sensor data may be remotely recorded, such as video, radar, or other wireless methods. It contains data generated from accelerometer, gyroscope and other sensors of Smart phone to train supervised predictive models using machine learning techniques like SVM , Random forest and decision tree to generate a model. Which can be used to predict the kind of movement being carried out by the person, which is divided into six categories walking, walking upstairs, walking down-stairs, sitting, standing and laying?
MLM and SVM achieved accuracy of more than 99.2% in the original data set and 98.1% using new feature selection method. Results show that the proposed feature selection approach is a promising alternative to activity recognition on smart phones.
Research.Essay_Chien-Chih_Weng_v3_by Prof. KarkoubChien-Chih Weng
1. Dr. Weng's research focuses on developing control algorithms for nonlinear systems with uncertainties using techniques like fuzzy logic, neural networks, and sliding mode control.
2. His past work includes developing an adaptive fuzzy sliding mode controller for robotic systems and a tracking controller for parallel manipulators.
3. His current research further develops control methods for complex nonlinear systems like robotic systems, drilling systems, and trains of self-balancing vehicles.
The document describes Experts Vision, a company that provides experts in various technological fields. It offers services including research and development projects, training, and product development. The company has expertise in areas such as MATLAB, engineering design, image processing, and more. It has completed numerous projects involving electrical systems, computer vision, artificial intelligence, communications and other domains. Experts Vision aims to develop strategic partnerships and expand its training programs and research.
Real Time Vision Hand Gesture Recognition Based Media Control via LAN & Wirel...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Mems Sensor Based Approach for Gesture Recognition to Control Media in ComputerIJARIIT
Gesture Recognition is the method of identifying and understanding meaningful movements of the arms, hands,
face, or sometimes head. It is one of the most important aspects in the field of Human-Computer interface. There has been a
continuous research in this field because of its ability for application in user interfaces. Gesture Recognition is one of the
important areas of research for engineers and scientists. Nowadays the industry is working on the different implementation for
the trouble free, natural and easy product which can be easy to handle. This paper proposed a method to work with motion
sensors and interpret the motion of hand into various applications in a virtual interface. The Micro-Electro-Mechanical
Systems (MEMS) accelerometers are used to capture the dynamic hand gesture. These sensors information is transferred to
the microcontroller from where these data are transferred wirelessly to the computer system for actual processing of the data
with the use of various algorithms.
IRJET- Behavior Analysis from Videos using Motion based Feature ExtractionIRJET Journal
This document proposes a technique for analyzing human behavior in videos using motion-based feature extraction. It discusses how previous approaches have used spatial and temporal features to detect abnormal behaviors. The proposed approach extracts motion features from videos to represent each video with a single feature vector, rather than extracting features from each individual frame. This reduces the feature space and unnecessary information. The technique involves preprocessing videos into frames, extracting motion features, using KNN classification on the features to classify behaviors as normal or abnormal, and evaluating the method's performance on various metrics like accuracy, recall, and precision. Testing on fight and riot datasets showed the motion-based approach achieved higher accuracy, recall, precision and F-measure than a non-motion based approach.
Qadri et Al., en su trabajo “The Future of Healthcare Internet of Things (H-IoT): A Survey of Emerging Technologies” propone como uno de los desafíos del H-IoT:
Monitoreo de Desórdenes neurológicos
Ambient Assisted Living (AAL)
Fitness Tracking
Uso de técnicas de Big Data
Uso de Edge Computing
Internet of Nano-Things
Similar to Modeling Motion in Matlab 7.19.11_v4_Taylor_KKedits (20)
⭐⭐⭐⭐⭐ LECCIÓN SISTEMAS EMBEBIDOS, 2do Parcial (2020 PAO 1) C6
Modeling Motion in MATLAB
Anthony Taylor
Dept. of Computer Science and Mathematics
Central State University
Wilberforce, Ohio 45384 USA
Email: ant.taylor2@gmail.com
Abstract
A MATLAB program was created using algebraic techniques for solving inverse problems.
Through these algorithms, the MATLAB program was structured to import the lengths of the
upper arm, forearm, thigh, and calf from a text file and output the two dimensional joint angles to
determine the position of the wrist and ankle in space. This paper explains how the
MATLAB program was designed and discusses future applications in gait analysis.
1. Introduction
Gait analysis is the study of human locomotion by means of measurement, data collection,
and computer applications to determine abnormalities in human walking patterns, gender, or
medical disabilities. To understand gait analysis one must first know the basic gait cycle:
heel touch to toe off, repeated with each step. There have been many advances in the study
of gait analysis since its pioneer, Aristotle, first introduced techniques for measuring
and predicting such patterns in human locomotion. Major developments, however, were not
made until the twentieth century, when researchers (Eberhart et al., 1951; Sutherland et
al., 1980; Inman et al., 1981) adapted sophisticated measuring instruments such as force
plates for clinical applications of gait analysis. Since then, a wide range of temporal and
linear techniques have been created to represent models of gait data that describe body
movements in relation to joint angles.
The techniques used in today’s society have been modernized by the use of high
performance computers, digital imaging, and computer software applications. Each technique
makes gait behavior easier to study. There are various methods used for gait analysis: collecting
sensor data, collecting anatomical data, modeling, digital image processing, video tracking, and
computer applicable techniques.
Sensor data is a major component of gait analysis. Sensors are responsive devices that
measure, signal, or indicate the properties of an object. There are various types of
sensors that can be used to collect data on acceleration, tracking, and pressure. When used
to study human locomotion, sensors give the observer the ability to track a specific joint
or angle of a subject with the help of active or passive markers.
(Figure1. Various types of sensors used for measuring, signaling, and indicating properties.)
Anatomical data is data collected directly from the human body; examples include the length
of a limb and the degree of rotation of a joint.
Modeling uses the data derived from sensors or anatomical data to replicate a linear or
nonlinear structure of motion. Models can be projected onto graphs of two dimensional or three
dimensional surfaces.
Video tracking records an individual's gait behavior. The video is then analyzed using
image processing techniques that combine data from sensors, tracking markers, and
specialized digital cameras. Generally a study is conducted within a field of view covered
by video surveillance. This method can be time efficient, although it is not always
accurate due to lighting, shadows, or other natural elements of the experimental field of view.
Computer applied techniques are also used to process gait data. These techniques could
involve little or no human interaction, and are programmed to run self-sufficiently to import,
manipulate, and translate gait data from sensors or video tracking devices.
The technique discussed in this paper is a computer applied technique called MOTION, a
MATLAB program that uses kinematic and anatomical data from a human subject to predict
wrist and ankle placement during the gait cycle. The kinematic data used in MOTION is
generated from a VICON Motion Analysis file. The anatomical data of a human subject is
collected prior to the gait experiment. MOTION calculates the joint angles that predict the
wrist and ankle coordinates using algebraic techniques for solving inverse problems. The
results are stored in a text file for access and manipulation. The MOTION program thus
integrates several gait analysis techniques in one platform.
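The inverse problem MOTION solves is the classic two-link planar one: given the segment lengths L1 and L2 and a wrist (or ankle) target (a, b), recover the two joint angles. The paper's own formulas come from the Groebner basis derivation of Kendricks et al. (2010); as a point of comparison only, a minimal Python sketch of the standard closed-form solution (the function names and the round-trip check are illustrative, not part of MOTION) is:

```python
import math

def two_link_ik(L1, L2, a, b):
    """One closed-form solution (theta1, theta2), in radians, for a planar
    two-link chain of lengths L1 and L2 whose free end must reach (a, b)."""
    c2 = (a * a + b * b - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target (a, b) is out of reach")
    theta2 = math.acos(c2)                     # inner joint (elbow / knee)
    theta1 = math.atan2(b, a) - math.atan2(    # outer joint (shoulder / hip)
        L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

def two_link_fk(L1, L2, theta1, theta2):
    """Forward kinematics: map the joint angles back to the end point (a, b)."""
    return (L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2),
            L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2))
```

Feeding the recovered angles back through the forward kinematics reproduces the target point, which is a convenient sanity check for whichever solution branch is chosen.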
2. Literature Review
Significant research has shown that there are various software techniques for studying and
modeling human motion. Researchers (Figueroa et al., 2002) introduced a technique for
tracking the passive or active markers on a human subject through a Windows-based computer
application to efficiently measure and study video-captured data. (Taylor et al., 2006)
introduced a modeling technique that used a linear dynamical system with continuous hidden
variables to compute hidden and visible angles of a human subject and thereby resemble
motion. (Bowden, 2000) applied non-linear methods to movements, using models of deformation
to provide a pre-existing state of movements from which an object's motion can be traced
and used to simulate motion. The software techniques mentioned above are related to the
application introduced in this paper in both nature and technique. The
MOTION application is unique from the methods described above because the program is
capable of importing VICON Motion Analysis data, and plotting a graphical representation of
the data in a single platform as opposed to using different software applications simultaneously.
This program can also run on multiple operating systems, making it easily accessible as
well as user friendly.
3. Methods
To analyze the VICON Motion Analysis data, a program was designed using the
MATLAB software environment and programming language. MATLAB is short for Matrix
Laboratory. The software is programmer friendly with an environment that meets the demand of
high performance computing. MATLAB can import highly complex numerical values for data
analysis and simulation. The software also allows users to develop programs to input and output
data for graphical representation or visualization of the data. MATLAB also assists in the
development of applications in which programmers can create graphical user interfaces (GUI)
for manipulating and loading data sets.
MATLAB can run on three operating systems: Windows, Mac, and Linux. The system
requirements include:
Windows
o Can be installed on 32- or 64-bit systems running Windows XP, Windows Vista,
Windows Server 2003/2008, or Windows 7. An Intel or AMD x86 processor is needed,
as well as 3 to 5 GB of disk space and 1024 MB of RAM.
Mac
o Can be installed on a 64-bit system running Mac OS X Snow Leopard with an Intel
Core 2 processor or later. 3 to 5 GB of disk space and 1024 MB of RAM are needed.
Linux
o Can be installed on 32- or 64-bit systems running Ubuntu 10.04 LTS or 10.10, Red
Hat Enterprise Linux 5.x or 6.x, SUSE Linux Enterprise Desktop 11.x, or Debian
5.x. An Intel or AMD x86 processor is needed, as well as 3 to 5 GB of disk space
and 1024 MB of RAM.
Of these systems, we installed the MATLAB 7.5.0 software package onto a 64-bit
Windows 7 operating system with an AMD processor. The text files containing the inputs were
imported through MATLAB's import wizard and separated so the numerical data could be used
explicitly. A simple program was created to process the data inputs (the lengths of the
upper arm, forearm, thigh, and calf) through algebraic algorithms derived by Kendricks et
al. (2010), then display the numerical outputs: the joint angles that determine the
position and orientation of the wrist and ankle during the gait cycle. The values were
processed through a time-delayed loop and plotted onto a graph, which was then traced by a
red marker simulating live motion.
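As an illustration of this import-compute-output flow (in Python rather than MATLAB, and with a hypothetical whitespace-separated input layout of one "L1 L2 a b" row per sample; MOTION itself uses the import wizard and named spreadsheet columns), the pipeline can be sketched as:

```python
import math

def load_rows(path):
    """Parse a text file with one whitespace-separated 'L1 L2 a b' row per
    sample of the gait cycle (this layout is assumed for illustration)."""
    with open(path) as f:
        return [tuple(float(v) for v in line.split())
                for line in f if line.strip()]

def angles_in_degrees(rows):
    """For each (L1, L2, a, b) row, return the two joint angles in degrees
    using the standard closed-form two-link solution."""
    out = []
    for L1, L2, a, b in rows:
        c2 = (a * a + b * b - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
        theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp rounding error
        theta1 = math.atan2(b, a) - math.atan2(
            L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
        out.append((math.degrees(theta1), math.degrees(theta2)))
    return out
```

A plotting step, the role of MATLAB's time-delayed loop, would then animate these angles; here the output is simply returned for inspection or export.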
4. Results
The MOTION program operates by importing a Microsoft Excel file containing anatomical
data provided by the VICON Motion Analysis System. This data is located on the operating
system's hard drive and loaded into MATLAB. The data in the file is then processed through
the algebraic algorithms, and the joint angles describing the position of the wrist and
ankle in space are exported. This export is processed through a for loop that plots each
coordinate of the wrist and ankle onto a graph. Once each point is plotted along the x- and
y-axes respectively, a marker traces the path of the wrist, simulating arm swing.
MOTION Code for Plotting the Position and Orientation of the Wrist or Ankle
%Program Name: MOTION
%Author: Anthony A. Taylor
%Date: 6/20/2011
%Description: Inverse kinematics of human motion. This program will take
%the known positions of the wrist, elbow, and shoulder from a .txt file and
%apply the variables to algebraic formulas to find the unknown variables:
%cos1, cos2, sin1, & sin2. Once all the variables are known, further
%algebraic calculations are done to find the degrees of the elbow and
%shoulder angles.

%Allows users to select the .xls file.
[z, pathname] = uigetfile('*.xls','Select Excel File');
if isequal(z,0)
    disp('User selected Cancel')
else
    disp(['User selected ', fullfile(pathname, z)])
end

%Reads in columns of data. These are the "known" variables L1, L2, a, & b.
L1 = xlsread(z, 'J:J');
L2 = xlsread(z, 'K:K');
a = xlsread(z, 'A:A');
b = xlsread(z, 'B:B');
OERplot = xlsread(z, 'Kin. Model _Fwd & Inv_Cycle 1', 'Z:Z');
OSRplot = xlsread(z, 'Kin. Model _Fwd & Inv_Cycle 1', 'AA:AA');

%Apply the "known" variables to the equation solution sets.
SinTheta2Sqr = -((0.25).*L1.^4 - (0.5).*L1.^2.*L2.^2 - (0.5).*(L1.^2.*b.^2) - (0.5).*(L1.^2.*a.^2) + (0.25).*L2.^4 - (0.5).*(L2.^2.*b.^2) - (0.5).*(L2.^2.*a.^2) + (0.25).*b.^4 + 0.5.*(b.^2.*a.^2) + 0.25.*a.^4)./(L1.^2.*L2.^2);
SinTheta2P = sqrt(SinTheta2Sqr);
SinTheta2M = SinTheta2P.*-1;
CosTheta1P = (L2.*a)./(a.^2+b.^2).*SinTheta2P - (-1.*L1.^2.*b + L2.^2.*b - b.^3 - b.*a.^2)./(2.*L1.*(a.^2+b.^2));
CosTheta1M = (L2.*a)./(a.^2+b.^2).*SinTheta2M - (-1.*L1.^2.*b + L2.^2.*b - b.^3 - b.*a.^2)./(2.*L1.*(a.^2+b.^2));
CosTheta2P = -1.*((L1.^2+L2.^2)-(b.^2-a.^2))./(2.*L1.*L2);
SinTheta1P = -1.*(L2.*b).*SinTheta2P./(b.^2+a.^2) - (-1.*L1.^2.*a + L2.^2.*a - b.^2.*a - a.^3)./(2.*L1.*(b.^2+a.^2));
SinTheta1M = -1.*(L2.*b).*SinTheta2M./(b.^2+a.^2) - (-1.*L1.^2.*a + L2.^2.*a - b.^2.*a - a.^3)./(2.*L1.*(b.^2+a.^2));

%Take the inverse to find the shoulder and elbow angles Theta 1 & Theta 2.
SAThetaOne1 = atan(SinTheta1P./CosTheta1P);
EAThetaTwo1 = atan(SinTheta2P./CosTheta2P);
ShoulderThetaOne2 = atan(SinTheta1M./CosTheta1M);
ElbowThetaTwo2 = atan(SinTheta2M./CosTheta2P);

%Actual angles in degrees and radians.
ElbowAngleDegrees = EAThetaTwo1.*(180./pi);
ShoulderAngleDegrees = SAThetaOne1.*(180./pi);
ElbowRadian = ElbowAngleDegrees.*(pi./180);
ShoulderRadian = ShoulderAngleDegrees.*(pi./180);
E = ElbowRadian;
S = ShoulderRadian;

%Outputs the data in an Excel file with the following headings and plots
%the projected shoulder and elbow angles.
headers = {'Cosine of Theta 1_value 1 Plus','Cosine of theta 1_value 2 Minus','Sine of theta 1_value 1 Plus','Sine of theta 1_value 2 Minus','Shoulder Angles (Theta 1_1)','Elbow Angles (Theta 2_1)','Shoulder (Theta 1_2)','Elbow (Theta 2_2)','Cosine of theta 2','Sine of theta 2 (Squared)','Sine of theta 2_1 Plus','Sine of theta 2_2 Minus','Elbow Angles in Degrees','Shoulder Angles in Degrees','Elbow Angles in Radians','Shoulder Angles in Radians'};
data = [CosTheta1P,CosTheta1M,SinTheta1P,SinTheta1M,SAThetaOne1,EAThetaTwo1,ShoulderThetaOne2,ElbowThetaTwo2,CosTheta2P,SinTheta2Sqr,SinTheta2P,SinTheta2M,ElbowAngleDegrees,ShoulderAngleDegrees,ElbowRadian,ShoulderRadian];
xlswrite('DataOutput.xls',headers);
xlswrite('DataOutput.xls',data,1,'A2');

%Plot and track positions 1 to 100 of the gait cycle.
x = 1:100;
DELAY = 0.05;
for i = 1:numel(x)
    clf;
    plot(x,S,'--r');
    hold on;
    plot(x(i),S(i),'b*');
    plot(x,OSRplot,'.b');
    plot(x(i),OSRplot(i),'r*');
    grid on
    ylabel('Angle in radians')
    xlabel('% of the gait cycle')
    title('Shoulder positions')
    legend('Predicted Shoulder','Marker','Measured Shoulder','Marker')
    pause(DELAY);
end
(Figure 2. Shoulder Angles: Measured vs. Predicted)
(Figure 3. A two dimensional view of the predicted position of the wrist moving in space.)
The code for plotting the joint angles of the shoulder and elbow can also be used to plot
the joint angles of the hip and knee. The lower extremity angles can be found by adjusting
the input lengths L1 (hip to knee) and L2 (knee to ankle) to determine the position of the
ankle, denoted (a, b). Future versions of MOTION will include these modifications so that
the positions of the wrist and ankle can be predicted simultaneously.
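The substitution described above, the same routine serving both extremities with different inputs, can be sketched as follows (a Python illustration; the lengths and target coordinates are made-up values, not measured anatomical data):

```python
import math

def joint_angles(L1, L2, a, b):
    """Two-link inverse kinematics; works unchanged for the arm
    (L1 = upper arm, L2 = forearm, (a, b) = wrist) and for the leg
    (L1 = thigh, L2 = calf, (a, b) = ankle)."""
    c2 = (a * a + b * b - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(b, a) - math.atan2(
        L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

# The same call serves both extremities; only the inputs change
# (lengths in metres here are illustrative).
shoulder, elbow = joint_angles(0.30, 0.25, 0.35, -0.30)  # arm: wrist target
hip, knee = joint_angles(0.45, 0.43, 0.20, -0.80)        # leg: ankle target
```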
5. Discussion
Through further research with additional human subjects (of various heights, weights,
races, and genders), the MOTION program will simultaneously generate multiple coordinates
of the wrist and ankle in space. Advanced functionality of the program will be an asset for
generating and synchronizing gait patterns to accurately distinguish abnormalities in real
time, and in the future to identify a human subject's height, weight, race, etc. Such
modifications can influence clinical studies that accurately and effectively distinguish
abnormal walking patterns caused by stooped posture, muscular disorders, or fractures.
Motion analysis can also benefit from an enhanced system for studying video-captured data.
By accurately measuring video-captured data without physical measuring instruments, an
identification system can be created to assist with identifying criminal suspects,
terrorists, or missing persons, as well as in the field of professional sports to enhance
an athlete's performance.
6. Conclusion
This paper provides an overview of gait analysis and the methods used to measure gait.
Using a computer applied technique, a MATLAB program called MOTION was created to simplify
the analysis of human gait patterns by using algebraic techniques for solving inverse
problems. The program imported the limb lengths and exported the joint angles that
determine wrist or ankle position, and plotted the position and orientation of the wrist
and ankle on a graph.

Though MOTION is one of many computer applied techniques, future applications will be
seen in various branches of law enforcement, medical treatment and research, and
professional sports. An accurate and efficient system for measuring walking patterns will
be in high demand among government agencies and software developers. By combining the
various methods of obtaining gait data and enhancing the performance of measuring
techniques, MOTION will increase the efficiency of gait analysis and its computational
procedures.
7. References
Baker, R. (2007). The history of gait analysis before the advent of modern computers. Gait
and Posture, 26(3), 331-342.

BenAbdelkader, C., Cutler, R., & Davis, L. (2004). Gait recognition using image
self-similarity. EURASIP Journal on Applied Signal Processing, 4, 572-585.

Bowden, R. (2000). Learning statistical models of human motion. In IEEE Workshop on Human
Modeling, Analysis, and Synthesis, CVPR 2000. IEEE Computer Society.

Dugdale, D., Hoch, D., & Zieve, D. (2009). Walking abnormalities. The A.D.A.M. Medical
Encyclopedia, Bethesda, MD.

Eberhart, H. D., & Inman, V. T. (1951). An evaluation of experimental procedures used in a
fundamental study of human locomotion. Ann. N. Y. Acad. Sci., 51, 1213-1228.

Figueroa, P., Leite, N. L., & Barros, M. L. R. (2002). A flexible software for tracking of
markers used in human motion analysis. Computer Methods and Programs in Biomedicine, 72,
155-165.

Inman, V. T., & Ralston, H. J. (1981). Human Walking. Williams & Wilkins, Baltimore.

Kendricks, D. K., Fullenkamp, M. A., McGrellis, R., Juhl, J., & Tuttle, F. R. (2010). An
inverse kinematic mathematical model using Groebner basis theory for arm swing movement in
the gait cycle. Battlespace Acoustics and Magnetic Sensors.

Koopman, B., Helm, F., & Horsman, K. M., Laboratory Biomechanical Engineering. (2010,
September 7). Analysis and modeling of gait disorders. Retrieved May 25, 2011 from
http://www.bw.ctw.utwente.nl/research/projects/Analysis.doc/index.html

Pallejà, T., Teixidó, M., Tresanchez, M., & Palacín, J. (2009). Measuring gait using a
ground laser range sensor. Sensors, 9(11), 9133-9146.

Pollo, F. (2007, April 30). Gait analysis: Techniques and recognition of abnormal gait.
Baylor University Medical Center. Retrieved May 20, 2011 from
http://www.scribd.com/doc/46943381/Gait-Analysis-Techniques-and-Abnormal-Patterns-AAA

Sutherland, D. H., Olshen, R., Cooper, L., & Woo, S. L. (1980). The development of mature
gait. The Journal of Bone and Joint Surgery, 62, 336-353.

Taylor, G. W., Hinton, G. E., & Roweis, S. T. (2006). Modeling human motion using binary
latent variables. Advances in Neural Information Processing Systems. MIT Press.

Teplitsky, S. (2009). Cluster 3. COSMOS 2009, University of California. Retrieved May 25,
2011 from http://cosmos.ucdavis.edu/archives/2009/cluster3/teplitsky_sarah.pdf