The document describes a pedometer and its components. An embedded processor counts steps using a sensor and converts the step count into distance traveled (e.g., miles) and an estimate of calories burned. The stored data can then be transferred to a computer via USB, Bluetooth, or WiFi to generate a graph of the user's activity levels. An accurate activity tracker embedded in a pedometer lets individuals independently monitor and improve their fitness while potentially reducing demand on health services, though it could impact businesses such as gyms.
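The conversion step described above can be sketched in a few lines of Python; the stride length and calories-per-step factor below are illustrative defaults, not values from the document.

```python
# Sketch of the pedometer's conversion step. The stride length and
# calories-per-step factor are hypothetical defaults; real devices
# calibrate them to the user's height, weight, and pace.

def steps_to_miles(steps, stride_ft=2.5):
    """Convert a step count to distance in miles (5280 ft per mile)."""
    return steps * stride_ft / 5280.0

def steps_to_calories(steps, cal_per_step=0.04):
    """Rough calorie estimate from the step count."""
    return steps * cal_per_step
```

With these defaults, 10,000 steps come out to roughly 4.7 miles and about 400 kcal.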
Use of Computer Graphics In Defense And Education-Training (shelly mundhra)
Computer graphics are used for defense and education/training purposes. In defense, they are used for tracking, air operations, weather operations, positional operations, mapping techniques, and satellite imagery. They allow for real-time monitoring. In education/training, 3D projectors are used in classrooms and have been shown to improve learning over 2D methods. Flight, space shuttle, naval, and automobile simulators provide realistic training without risk and allow trainees to practice emergency procedures. Simulators have been shown to improve performance for pilots and astronauts.
This document provides an introduction to robotics, including definitions of robots, types of sensors, and applications of robotics. It discusses how robots sense their environment using proprioceptive and exteroceptive sensors like cameras. Stereo vision and machine vision allow robots to perceive depth. Control of robots can be teleoperated by humans or autonomous. Ethical concerns about robots include job displacement, though automated systems also bring benefits to society. MATLAB toolboxes provide functions for simulating and controlling robots using concepts like forward and inverse kinematics.
A total station is an instrument that combines an electronic theodolite, electronic distance measuring device (EDM), and microprocessor. It offers advantages like rapid field work, high accuracy, elimination of errors, and automated computations. Potential disadvantages include lack of hard copies in the field and need to return to the office for overall checks. There are three types: manual requires manual angle readings, semi-automatic reads angles electronically, and automatic senses angles and distances electronically.
This document describes a land survey robot that is designed to automatically measure land areas and divide plots of land into subplots. The survey robot uses a microcontroller and ZigBee wireless module to receive movement commands and transmit distance measurements to a PC. It moves around the plot on its own to calculate the total distance traveled, which is then used by an area measurement module on the PC to determine the area of the plot. The robot is able to divide plots into subplots by placing markers as it moves according to a programmed path. The system aims to automate the land survey process for more efficient area measurement compared to conventional surveying techniques.
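One common way such an area measurement module could work, assuming the robot's traversal is reduced to ordered boundary coordinates, is the shoelace formula; this is a generic sketch, not the paper's actual implementation.

```python
def shoelace_area(points):
    """Area of a simple polygon from its ordered boundary vertices
    (the shoelace formula); works for either traversal direction."""
    n = len(points)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap back to the first vertex
        twice_area += x1 * y2 - x2 * y1
    return abs(twice_area) / 2.0
```

For a rectangular plot traversed corner to corner, e.g. `[(0, 0), (10, 0), (10, 5), (0, 5)]`, this returns 50.0 square units.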
This document describes a study that aimed to develop algorithms to calculate 3D body segment orientations from motion capture (mocap) and IMU data during simulated manual work tasks. Participants performed tasks while wearing IMUs and optical markers tracked with mocap. Algorithms were developed in MATLAB to compute orientations of body segments from the mocap data by defining three coordinate systems - a provisional, global, and segment system - and relating them using Euler rotation matrices. The goal was to validate the mocap algorithm and integrate it with one calculating IMU orientations to allow field-based ergonomic assessments using body-worn IMUs.
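The segment coordinate system construction can be sketched as follows, assuming three markers per segment (one at the joint origin, one along the segment's long axis, one off-axis in the segment plane); the marker roles and axis labels are illustrative, not the study's actual marker protocol.

```python
import math

def _sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def _unit(a):
    m = math.sqrt(sum(c * c for c in a))
    return [c / m for c in a]

def segment_axes(origin, long_axis_marker, plane_marker):
    """Build an orthonormal segment frame from three marker positions:
    z along the segment's long axis, x perpendicular to the marker
    plane, y completing a right-handed triad. The rows express the
    segment axes in global coordinates, i.e. a rotation matrix."""
    z = _unit(_sub(long_axis_marker, origin))
    x = _unit(_cross(_sub(plane_marker, origin), z))
    y = _cross(z, x)
    return [x, y, z]
```

Composing such matrices for adjacent segments yields the relative orientations from which joint angles are reported.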
Exercise Recognition System using Facial Image Information from a Mobile Devi... (sugiuralab)
This document proposes an exercise recognition system using facial features extracted from a mobile device's camera. It aims to help motivate exercise by automatically measuring exercises without additional equipment. The system obtains facial images during exercise, extracts tracking points and distances as features, and uses SVM classification on the FFT of features to recognize 9 exercises with 88.2% accuracy. Experiments show the system is robust to changes in window size and user standing position, but face tracking is sometimes lost and floor exercises have lower accuracy.
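The FFT feature step can be illustrated with a direct DFT in pure Python; the window length and synthetic tracking signal below are invented for illustration, and the SVM stage is omitted.

```python
import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real-valued window via a direct DFT
    (adequate for the short feature windows fed to a classifier)."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A tracked facial distance oscillating 4 times over a 64-frame window
# should peak at frequency bin 4 -- the kind of spectral feature that
# lets an SVM separate exercises by their characteristic cadence.
window = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
spectrum = dft_magnitudes(window)
dominant_bin = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
```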
Pose Trainer: “An Exercise Guide and Assessment in Physiotherapy” (IRJET Journal)
This document describes a project called Pose Trainer that uses computer vision and machine learning to assess and provide feedback on physiotherapy exercises. The system uses OpenPose models to extract key points from user-uploaded exercise videos. It then compares the user's pose to sample poses to evaluate accuracy and provide real-time feedback on form. This guidance helps users perform exercises correctly at home without an in-person trainer. The project aims to make physical therapy more accessible and avoid potential injuries from improper exercise form.
An eye gaze detection using low resolution web camera in desktop environment (eSAT Journals)
This document presents a method for detecting eye gaze using a low-resolution webcam. The method uses OpenCV and the Viola-Jones algorithm to detect faces and eyes. It then calculates the eye centers and maps them to X-Y coordinates on the screen. The algorithm achieves 69-74% accuracy in detecting gaze positions. It can be used for applications like controlling interfaces for blind users or in military cockpits. The method provides an economical way to perform eye gaze detection using regular webcams and open-source tools.
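Once the eye centers are located, mapping them to screen coordinates can be done by linear interpolation within a calibrated range; the calibration format and screen size below are hypothetical, not the paper's method.

```python
def gaze_to_screen(eye_x, eye_y, cal, screen_w=1920, screen_h=1080):
    """Map an eye-center position to an X-Y screen coordinate by linear
    interpolation. `cal` holds the eye-center extremes recorded while
    the user looked at the screen corners (a hypothetical calibration)."""
    fx = (eye_x - cal["x_min"]) / (cal["x_max"] - cal["x_min"])
    fy = (eye_y - cal["y_min"]) / (cal["y_max"] - cal["y_min"])
    fx = min(max(fx, 0.0), 1.0)  # clamp to the screen
    fy = min(max(fy, 0.0), 1.0)
    return (round(fx * (screen_w - 1)), round(fy * (screen_h - 1)))
```

The face and eye detection feeding this step would come from OpenCV's Viola-Jones cascade detectors, as the paper describes.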
IRJET- Automated Attendance System using Face Recognition (IRJET Journal)
1) The document describes an automated attendance system using face recognition from video frames with deep learning. The system captures real-time video and can generate attendance reports with improved accuracy.
2) It reviews previous works on automated attendance systems using face recognition. Some key approaches discussed include using Local Binary Pattern Histograms and correlations for face detection and recognition, principal component analysis, and combining principal component analysis with artificial neural networks.
3) The proposed system aims to develop an accurate and efficient automated attendance management system using video surveillance and face recognition to capture and mark the presence of students and employees in real-time.
Person Acquisition and Identification Tool (IRJET Journal)
The document proposes a facial recognition system using CCTV video to identify individuals and generate timestamp data on their presence. It involves three steps: 1) face detection on video frames, 2) super resolution to standardize face sizes, and 3) face recognition using a Siamese network to identify known and new identities with one-shot learning. The system aims to reduce time spent reviewing surveillance footage for law enforcement. It analyzes existing research on low-resolution face recognition, pedestrian detection, and proposes its pipeline as a solution to semi-automate target individual tracking from video data through facial matching and timestamps.
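The one-shot matching step can be sketched as a nearest-neighbour search over per-identity embeddings; the embedding dimensionality and distance threshold below are illustrative, and the Siamese network that produces the embeddings is assumed.

```python
import math

def identify(query_embedding, gallery, threshold=0.8):
    """One-shot identification: compare a face embedding (e.g. from a
    Siamese network) against one stored embedding per known identity.
    The nearest identity wins if within the threshold; otherwise the
    face is treated as a new identity."""
    best_name, best_dist = None, float("inf")
    for name, embedding in gallery.items():
        d = math.dist(query_embedding, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else "unknown"
```

Each match can then be stamped with the frame's timestamp to build the presence log the paper describes.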
This document discusses using artificial intelligence and deep learning techniques for yoga pose estimation and classification. Specifically, it proposes training a model using the PoseNet and OpenPose frameworks on a dataset of yoga pose videos to identify key points in the human body and classify the pose. The model would use convolutional neural networks and long short-term memory to process video frames in real-time and provide classification scores for the accuracy of pose identification. This type of system could help improve health and provide feedback to users on yoga pose form without an instructor. However, it is currently limited to a small number of poses and requires internet and webcam access.
Yoga Pose Detection and Correction using Posenet and KNN (IRJET Journal)
This document discusses a system for detecting and correcting yoga poses using computer vision techniques. The system uses PoseNet and a KNN classifier to identify key points in images of humans performing yoga poses. It then compares the actual pose to the target pose and provides feedback to help the user correct their form. The system was trained on a dataset of images depicting different yoga poses and can identify poses with 98.51% accuracy in real-time using a webcam. It aims to help people improve their yoga practice and form to avoid injuries from performing poses incorrectly.
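The KNN stage can be sketched in a few lines; in practice each sample would be a flattened, normalized PoseNet keypoint vector, and the toy vectors in the test are invented.

```python
import math
from collections import Counter

def knn_classify(sample, training_data, k=3):
    """Label a keypoint vector by majority vote among its k nearest
    training poses (Euclidean distance). `training_data` is a list of
    (keypoint_vector, pose_label) pairs."""
    neighbors = sorted(training_data, key=lambda item: math.dist(sample, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

The correction feedback then comes from comparing the user's keypoints against the matched target pose.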
IRJET - Human Pose Detection using Deep Learning (IRJET Journal)
This document discusses using deep learning for human pose detection. It begins with an introduction to human pose detection and challenges in the field. It then describes how deep learning can be used for this task by training neural networks on large datasets of images annotated with body joint locations; specifically, models trained on the COCO and MPII datasets were used to identify and locate body parts. OpenCV and Flask were used to process video frames and build a graphical interface. The trained models were able to detect poses and provide feedback on proper form for exercises, with graphs and skeletal representations visualizing the poses and joint angles. The system performed human pose detection in real time with low hardware requirements. In conclusion, it achieved an effective low-cost software model for motion detection.
Human pose detection using machine learning by Grandel (GrandelDsouza)
This document discusses using deep learning for human pose detection. It begins with an introduction to human pose detection and challenges in the field. It then describes how deep learning can be used for this task by training neural networks on large datasets of images annotated with body joint locations; specifically, models trained on the COCO and MPII datasets were used to identify and locate body parts. OpenCV and Flask were used to process video frames and build a graphical interface. The trained models were able to detect poses and provide feedback on proper form for exercises, with graphs and skeletal representations visualizing the poses and joint angles. The system performed human pose detection in real time with low hardware requirements. In conclusion, it achieved an effective low-cost software model for motion detection.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology, bringing together scientists, academicians, field engineers, scholars and students of related fields.
AI Personal Trainer Using Open CV and Media Pipe (IRJET Journal)
This document summarizes a research paper that proposes an AI personal trainer system using computer vision techniques. The system uses OpenCV and MediaPipe to detect a user's body pose and angles in real-time video to correct their form during exercises. It aims to help users safely and effectively work out at home without a physical trainer. The system would also connect users with similar fitness goals to encourage motivation. The researchers believe this AI trainer could make exercise more accessible and convenient for users.
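The core form-checking computation is the angle at a joint from three pose landmarks; this generic sketch assumes 2D (x, y) landmark coordinates such as those a pose estimator like MediaPipe reports.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at landmark b formed by the segments b->a and
    b->c, e.g. the elbow angle from shoulder, elbow, and wrist points."""
    angle = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    angle = abs(angle)
    return 360.0 - angle if angle > 180.0 else angle
```

An arm counts as extended when the elbow angle is near 180 degrees and as curled when it drops below a chosen threshold, which is how repetitions can be counted and form corrected.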
IRJET- Survey on Face Recognition using Biometrics (IRJET Journal)
This document describes a survey on face recognition using biometrics. It discusses using the Haar cascade algorithm with OpenCV in Python to detect faces in images and video. The algorithm involves selecting Haar features, creating integral images for rapid calculation of features, training classifiers with AdaBoost, and cascading the classifiers. It trains on positive and negative image datasets to detect faces and then recognizes faces by extracting principal components and comparing to trained data. The system fulfills basic face detection and recognition needs at low cost for applications like security and real-time analysis. Improving the algorithm involves adding more training images to increase accuracy.
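The integral-image step admits a compact sketch: once the table is built, each Haar feature becomes a handful of constant-time rectangle sums.

```python
def integral_image(img):
    """Summed-area table with a zero border row/column:
    ii[y][x] holds the sum of img over rows < y and columns < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] from four table lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]
```

A two-rectangle Haar feature is then just the difference of two `rect_sum` calls, regardless of rectangle size.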
Developing Image Processing System for Classification of Indian Multispectral... (Sumedha Mishra)
This document presents a winter training report submitted by Sumedha Mishra for their B.Tech degree. The report details the development of an image processing system in Java using the open-source ImageJ platform. Plugins were created for ImageJ to implement various unsupervised classification algorithms, including k-means, ISODATA, and fuzzy c-means. These plugins were used to classify very high-resolution multispectral images from sensors like Quickbird, CARTOSAT, WorldView-3, and IKONOS. The goal was pixel-based classification of the satellite images to analyze land use and land cover changes.
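The k-means plugin's core is the Lloyd iteration; this scalar sketch uses caller-supplied initial centers for determinism, whereas a real plugin would cluster multi-band pixel vectors and seed the centers itself.

```python
def kmeans_1d(values, centers, iterations=10):
    """Lloyd's k-means on scalar pixel values: assign each value to its
    nearest center, then move each center to its cluster mean."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

ISODATA extends this loop with cluster splitting and merging, and fuzzy c-means replaces the hard assignment with membership weights.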
IRJET - Facial Recognition based Attendance System with LBPH (IRJET Journal)
This document presents a facial recognition based attendance system using LBPH (Local Binary Pattern Histograms). It begins with an abstract describing the system which takes student attendance using facial identification from classroom camera images. It then discusses related work in attendance and face recognition systems. The proposed system workflow is described involving face detection, feature extraction using LBPH, template matching, and attendance recording. Experimental results demonstrate the system's ability to detect multiple faces and record attendance accurately in an Excel sheet with date/time. The conclusion discusses how the system reduces human effort for attendance and increases learning time compared to traditional methods.
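LBPH feature extraction starts from the basic local binary pattern code of each pixel; this sketch computes the code for one 3x3 patch (the per-region histograms and template matching stages are omitted).

```python
def lbp_code(patch):
    """LBP code of the center pixel of a 3x3 patch: the eight neighbors
    (clockwise from the top-left) each contribute one bit, set when the
    neighbor is >= the center value."""
    center = patch[1][1]
    neighbors = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(neighbors):
        if patch[y][x] >= center:
            code |= 1 << bit
    return code
```

Histogramming these codes over a grid of face regions yields the descriptor that is matched against enrolled students.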
Virtual Yoga System Using Kinect Sensor (IRJET Journal)
The document describes a virtual yoga system using the Microsoft Kinect sensor. The system aims to make yoga exercises more engaging and motivating for patients by tracking their poses in real-time and providing feedback. It recognizes skeleton joints and yoga postures using the Kinect's depth sensing capabilities. Voice instructions guide users through different poses. The system is intended to address issues with traditional physiotherapy being tedious and repetitive. It allows customizing exercises to individual needs and challenges. Recognizing poses accurately in real-time could help patients perform exercises correctly and consistently at home without direct supervision.
Hybrid Head Tracking for Wheelchair Control Using Haar Cascade Classifier an... (TELKOMNIKA JOURNAL)
Disability may limit a person's ability to move freely, especially when the severity of the disability is high. To help disabled people control their wheelchair, head movement-based control is preferred due to its reliability. This paper proposed a head direction detector framework which can be applied to wheelchair control. First, the face and nose were detected in a video frame using a Haar cascade classifier. Then, the detected bounding boxes were used to initialize a Kernelized Correlation Filters (KCF) tracker. The direction of the head was determined from the relative position of the nose to the face, extracted from the tracker's bounding boxes. Results show that the method effectively detects head direction, with 82% accuracy and a very low rate of detection or tracking failure.
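The direction decision from the two bounding boxes can be sketched as below; the dead-zone fraction is an invented threshold, and the boxes would come from the Haar detector and KCF tracker in the paper's pipeline.

```python
def head_direction(face_box, nose_box, dead_zone=0.15):
    """Classify head direction from the nose center's offset relative to
    the face center, normalized by face size. Boxes are (x, y, w, h)."""
    fx, fy, fw, fh = face_box
    nx, ny, nw, nh = nose_box
    dx = ((nx + nw / 2) - (fx + fw / 2)) / fw
    dy = ((ny + nh / 2) - (fy + fh / 2)) / fh
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "forward"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Each direction label would then map to a wheelchair command (e.g. turn left/right, go forward).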
Human Movement Recognition Using Internal Sensors of a Smartphone-based HMD (... (sugiuralab)
The document proposes recognizing human movements using the internal sensors of a smartphone in a head-mounted display (HMD) without external controllers. It collected sensor data from participants performing 16 movements and used machine learning to recognize the movements with 92.03% accuracy on average. However, there was a long time lag between movement detection and recognition completion. Shortening the sensor recording time decreased accuracy but could enable faster recognition.
AI Personal Trainer Using Open CV and Media Pipe (IRJET Journal)
This document summarizes previous work on developing an AI personal trainer using computer vision techniques. It discusses early research using the Kinect camera for body posture detection. Later works applied machine learning and deep learning models to activity recognition in gyms and used OpenPose for pose detection in pre-recorded videos. To enable real-time detection, approaches categorized models as kinematic, planar and volumetric. Convolutional neural networks were also used to estimate 2D poses from single images and increase accuracy. More recent works introduced datasets with whole-body annotations and used a single network to address scale variance across body parts. The goal of this project is to build upon these techniques to create an AI trainer that analyzes exercise repetitions in real-time videos.
The document summarizes a student project to develop a virtual mouse interface using computer vision and finger tracking. The project is divided into 5 modules: 1) basic video operations in OpenCV, 2) image processing techniques, 3) object tracking, 4) finger-tip detection, and 5) using detected finger motions to control mouse functions. Key functions demonstrated include moving the cursor, left and right clicking, dragging, brightness control, and scrolling. Evaluation of the system found finger tracking accuracy between 60-85% for different gestures. The project aims to provide an alternative input method that reduces hardware needs and workspace.
IRJET-Vision Based Occupant Detection in Unattended Vehicle (IRJET Journal)
This document proposes a vision-based method to detect and classify occupants inside an unattended vehicle using face recognition and motion-based classification. The system uses a camera mounted inside the vehicle to detect occupants in real-time at 30 frames per second with high accuracy under different lighting and weather conditions. It detects occupants in two steps - first detecting objects using background subtraction, then classifying objects as human or non-human using motion-based classification. The system aims to improve safety and comfort by monitoring occupants for applications like airbag deployment and climate control.
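The first detection step reduces to comparing each frame against a background frame pixel by pixel; this toy sketch on small grayscale grids shows the idea, with illustrative thresholds (a real system would use an adaptive background model plus morphological cleanup before the human/non-human classification).

```python
def moving_pixels(background, frame, threshold=25):
    """Count pixels whose absolute difference from the background
    exceeds the threshold (the core of background subtraction)."""
    count = 0
    for bg_row, row in zip(background, frame):
        for bg_px, px in zip(bg_row, row):
            if abs(bg_px - px) > threshold:
                count += 1
    return count

def motion_detected(background, frame, min_pixels=2):
    """Flag motion when enough pixels changed between frames."""
    return moving_pixels(background, frame) >= min_pixels
```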
method study - micromotion vs memo motion (pranav teli)
The document discusses method study, which aims to improve work processes and reduce costs. It describes the objectives and typical procedure of method study, which includes selecting a job to study, recording details, examining the method critically, developing an improved method, installing it, and maintaining the new standard. The document also explains micro-motion study and memo-motion study as techniques for recording and analyzing activities in detail or at a macro level to identify unnecessary motions and establish more efficient methods. The key difference is that micro-motion study analyzes operations at a finer level of detail using filmed footage, while memo-motion study uses time-lapse photography to study overall processes.
A Study of Wearable Accelerometers Layout for Human Activity Recognition (Asia... (sugiuralab)
The document summarizes a study on optimizing the placement of wearable accelerometers for human activity recognition. It describes experimenting with different numbers and positions of sensors, using a particle swarm optimization algorithm to determine optimal combinations that maximize classification accuracy. The results show 2 sensors provide good recognition, while more sensors particularly help with transitional activities, and upper body positions like chest, waist and shoulders perform best. Placements are evaluated for static, dynamic and transitional daily living activities.
Hybrid Head Tracking for Wheelchair Control Using Haar Cascade Classifier an...TELKOMNIKA JOURNAL
Disability may limit a person's ability to move freely, especially when the severity of the disability is high. To help disabled people control their wheelchairs, head movement-based control is preferred due to its reliability. This paper proposed a head direction detector framework which can be applied to wheelchair control. First, the face and nose were detected in a video frame using a Haar cascade classifier. Then, the detected bounding boxes were used to initialize a Kernelized Correlation Filters tracker. The direction of the head was determined by the relative position of the nose to the face, extracted from the tracker's bounding boxes. Results show that the method effectively detects head direction, with 82% accuracy and very few detection or tracking failures.
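The direction rule described above (relative position of the nose within the face bounding box) can be sketched as follows; the (x, y, w, h) box format and the dead-zone margin are illustrative assumptions, not values from the paper:

```python
def head_direction(face_box, nose_box, margin=0.15):
    """Classify head direction from the nose centre's offset inside the face box.

    Boxes are (x, y, w, h) tuples, e.g. as produced by a Haar cascade
    detector or a KCF tracker. `margin` is an assumed dead zone around
    the face centre within which the head counts as facing forward.
    """
    fx, fy, fw, fh = face_box
    nx, ny, nw, nh = nose_box
    # Nose-centre offset, normalised to roughly [-0.5, 0.5] within the face box.
    dx = (nx + nw / 2 - (fx + fw / 2)) / fw
    dy = (ny + nh / 2 - (fy + fh / 2)) / fh
    if abs(dx) <= margin and abs(dy) <= margin:
        return "center"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Face at (100, 100) sized 200x200; nose shifted toward the left edge.
print(head_direction((100, 100, 200, 200), (120, 190, 40, 30)))  # left
```

A wheelchair controller would map these labels to drive commands, with the dead zone preventing jitter around the neutral position.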
Human Movement Recognition Using Internal Sensors of a Smartphone-based HMD (...sugiuralab
The document proposes recognizing human movements using the internal sensors of a smartphone in a head-mounted display (HMD) without external controllers. It collected sensor data from participants performing 16 movements and used machine learning to recognize the movements with 92.03% accuracy on average. However, there was a long time lag between movement detection and recognition completion. Shortening the sensor recording time decreased accuracy but could enable faster recognition.
AI Personal Trainer Using Open CV and Media PipeIRJET Journal
This document summarizes previous work on developing an AI personal trainer using computer vision techniques. It discusses early research using the Kinect camera for body posture detection. Later works applied machine learning and deep learning models to activity recognition in gyms and used OpenPose for pose detection in pre-recorded videos. To enable real-time detection, approaches categorized models as kinematic, planar and volumetric. Convolutional neural networks were also used to estimate 2D poses from single images and increase accuracy. More recent works introduced datasets with whole-body annotations and used a single network to address scale variance across body parts. The goal of this project is to build upon these techniques to create an AI trainer that analyzes exercise repetitions in real-time videos.
The document summarizes a student project to develop a virtual mouse interface using computer vision and finger tracking. The project is divided into 5 modules: 1) basic video operations in OpenCV, 2) image processing techniques, 3) object tracking, 4) finger-tip detection, and 5) using detected finger motions to control mouse functions. Key functions demonstrated include moving the cursor, left and right clicking, dragging, brightness control, and scrolling. Evaluation of the system found finger tracking accuracy between 60-85% for different gestures. The project aims to provide an alternative input method that reduces hardware needs and workspace.
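Mapping a tracked finger tip to the screen cursor typically needs coordinate scaling plus smoothing to damp tracking jitter. A minimal sketch of that last module; the frame and screen resolutions and the smoothing factor are assumptions for illustration, not values from the project:

```python
def make_cursor_mapper(frame_size, screen_size, smoothing=0.5):
    """Map finger-tip coordinates from camera-frame space to screen space,
    with exponential smoothing to steady the cursor between frames.
    """
    fw, fh = frame_size
    sw, sh = screen_size
    state = {"pos": None}

    def update(finger_x, finger_y):
        # Scale from frame coordinates to screen coordinates.
        target = (finger_x * sw / fw, finger_y * sh / fh)
        if state["pos"] is None:
            state["pos"] = target
        else:
            px, py = state["pos"]
            tx, ty = target
            # Exponential moving average: higher smoothing = steadier cursor.
            state["pos"] = (px + (tx - px) * (1 - smoothing),
                            py + (ty - py) * (1 - smoothing))
        return state["pos"]

    return update

move = make_cursor_mapper((640, 480), (1920, 1080))
move(320, 240)           # first sample lands at the screen centre
x, y = move(320, 240)    # stays there while the finger is still
```

In a full system the same tracked fingertip positions would also drive the click, drag and scroll gestures via distance and dwell heuristics.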
IRJET-Vision Based Occupant Detection in Unattended VehicleIRJET Journal
This document proposes a vision-based method to detect and classify occupants inside an unattended vehicle using face recognition and motion-based classification. The system uses a camera mounted inside the vehicle to detect occupants in real-time at 30 frames per second with high accuracy under different lighting and weather conditions. It detects occupants in two steps - first detecting objects using background subtraction, then classifying objects as human or non-human using motion-based classification. The system aims to improve safety and comfort by monitoring occupants for applications like airbag deployment and climate control.
method study- micromotion vs memo motionpranav teli
The document discusses method study, which aims to improve work processes and reduce costs. It describes the objectives and typical procedure of method study, which includes selecting a job to study, recording details, examining the method critically, developing an improved method, installing it, and maintaining the new standard. The document also explains micro-motion study and memo-motion study as techniques for recording and analyzing activities in detail or at a macro level to identify unnecessary motions and establish more efficient methods. The key difference is that micro-motion study examines operations at a finer level of detail using filmed footage, while memo-motion study uses time-lapse photography to study overall processes.
A Study of Wearable Accelerometers Layout for Human Activity Recognition(Asia...sugiuralab
The document summarizes a study on optimizing the placement of wearable accelerometers for human activity recognition. It describes experimenting with different numbers and positions of sensors, using a particle swarm optimization algorithm to determine optimal combinations that maximize classification accuracy. The results show 2 sensors provide good recognition, while more sensors particularly help with transitional activities, and upper body positions like chest, waist and shoulders perform best. Placements are evaluated for static, dynamic and transitional daily living activities.
Similar to Exercise Measurement using a Built-in Camera in a Mobile Device (AsianCHI2020)
EarAuthCam: Personal Identification and Authentication Method Using Ear Image...sugiuralab
Earphones are now used for longer hours than before with the advancement in wireless technology and miniaturization. In addition, the application of earphones has become more diverse, and opportunities to access highly confidential information through them have increased. We propose a method comprising a hearable device equipped with a small camera for user authentication from ear images. This method improves the security of the hearable device. Ear images are first captured with the camera. The ear regions in the images are then extracted using a mask region-based convolutional neural network. Finally, the user is identified using histograms of oriented gradient features and a support vector machine (SVM). Our method was able to identify 18 participants with an accuracy of 84.1%. Users are authenticated through unsupervised anomaly detection using an autoencoder with an error rate of 8.36%. This method facilitates hands- and eye-free operations without requiring any explicit authentication action by the user.
Converting Tatamis into Touch Sensors by Measuring Capacitancesugiuralab
This document summarizes a research paper that proposes a method to convert tatami floor mats into touch sensors by measuring capacitance. Conductive sheets are placed under the tatami surface. When a person contacts the tatami, capacitance is measured between the sheets and their skin to detect the touch position. The system identifies 12 hand gestures with approximately 90% accuracy. Future work includes enabling multi-touch detection and using the sensors for footprint tracking and pose estimation.
Pinch Force Measurement Using a Geomagnetic Sensorsugiuralab
This document proposes measuring pinch force using the geomagnetic sensor in a smartphone. A device with embedded magnets and springs is attached to the smartphone. As force is applied, the magnet's distance from the sensor changes, altering the magnetic flux density. Measurements found a strong correlation between force and magnetic flux density. Future work includes testing different smartphone models and collecting user feedback to improve usability.
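Given the strong correlation reported between applied force and magnetic flux density, a simple calibration could fit a line to paired measurements and then invert it to read force from the sensor. The sample readings below are made up for illustration and are not from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration pairs: flux density (uT) vs. applied force (N).
flux = [50.0, 60.0, 70.0, 80.0]
force = [0.0, 2.0, 4.0, 6.0]
a, b = fit_line(flux, force)

def estimate_force(flux_ut):
    """Estimate pinch force from a geomagnetic-sensor reading."""
    return a * flux_ut + b

print(estimate_force(65.0))  # ~3.0 N with these made-up readings
```

A per-device calibration like this would also absorb differences between smartphone models, one of the future-work items the paper mentions.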
Smartphone-Based Teaching System for Neonate Soothing Motionssugiuralab
This document describes a proposed smartphone-based teaching system to help first-time caregivers learn how to properly soothe neonates. The system uses sensors in a stuffed toy and a smartphone to capture posture angles and acceleration during cradling motions. It provides real-time feedback on the user's form compared to expert cradling motions. An experiment tested the system's effectiveness in improving users' cradling posture after training compared to just watching a video. Results showed the system helped users better match the expert's inclination angle, indicating it could help ensure neonate safety by teaching proper neck support. Future work is needed to improve measurement accuracy and further validate the system.
Tactile Presentation of Orchestral Conductor's Motion Trajectorysugiuralab
This document proposes presenting a conductor's motion trajectory tactilely for visually impaired musicians using vibrators. It describes capturing conducting movements, mapping them to vibrators, and using tactile apparent movement. An experiment found trajectory presentation helped predict beat timing better than single vibrations, especially for tempo changes and start cues. Future work includes developing a universal device.
TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensorssugiuralab
The researchers developed a fingernail-sized device using 7 photo-reflective sensors to detect finger microgestures based on fingertip skin deformation. They implemented a random forest classifier to recognize 11 gestures with an average accuracy of 91.1% for the general model and 91.5% for the individual model. Future work will focus on addressing limitations like user dependence and developing a device that can be worn comfortably for real-world use.
Seeing the Wind: An Interactive Mist Interface for Airflow Inputsugiuralab
Human activities can introduce variations in various environmental cues, such as light and sound, which can serve as inputs for interfaces. However, one often overlooked aspect is the airflow variation caused by these activities, which presents challenges in detection and utilization due to its intangible nature. In this paper, we have unveiled an approach using mist to capture invisible airflow variations, rendering them detectable by Time-of-Flight (ToF) sensors. We investigate the capability of this sensing technique under different types of mist or smoke, as well as the impact of airflow speed. To illustrate the feasibility of this concept, we created a prototype using a humidifier and demonstrated its capability to recognize motions. On this basis, we introduce potential applications, discuss inherent limitations, and provide design lessons grounded in mist-based airflow sensing.
Identification and Authentication Using Claviclessugiuralab
Identification and Authentication Using Clavicles
Yohei Kawasaki, Yuta Sugiura
2023 62nd Annual Conference of the Society of Instrument and Control Engineers (SICE), Mie, Japan, 2023
Estimation of Violin Bow Pressure Using Photo-Reflective Sensorssugiuralab
Estimation of Violin Bow Pressure Using Photo-Reflective Sensors presents a method for quantitatively estimating bow pressure during violin playing using photo-reflective sensors attached to the bow. Five sensors measure the distance between the bow stick and hair, which changes with applied pressure. A random forest regression model is trained on sensor distance values and actual pressure measurements to estimate pressure based solely on sensor values. In experiments, the model estimated bow pressure with an R² of 0.84, an MAE of 0.11 N, and a MAPE of 19.1% when tested on data from an experienced violinist. The goal is to provide visual feedback to support practice by quantifying bow pressure.
Exercise Measurement using a Built-in Camera in a Mobile Device (AsianCHI2020)
02 Recognition Principle
・30 feature points are extracted using the Single Face Tracker for Unity plugin [3].
・The system uses 62 parameters: 30 feature points × 2 (x and y) + two distances.
・The obtained data is divided into frames.
・After removing a trend, the system applies a Fast Fourier Transform (FFT) and uses half of all dimensions' frequency components to build a Support Vector Machine (SVM) classifier.
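The feature pipeline above (detrend, FFT, keep half of the frequency components) can be sketched with stdlib-only Python. The window length, the linear detrend, and the naive O(n²) DFT are assumptions made for illustration; the slide does not specify them, and the SVM training itself would be done with a library:

```python
import cmath
import math

def detrend(signal):
    """Remove a least-squares linear trend from a 1-D signal."""
    n = len(signal)
    mx = (n - 1) / 2
    my = sum(signal) / n
    slope = sum((x - mx) * (y - my) for x, y in enumerate(signal)) / \
            sum((x - mx) ** 2 for x in range(n))
    return [y - (my + slope * (x - mx)) for x, y in enumerate(signal)]

def fft_features(signal):
    """Magnitudes of the first half of the DFT spectrum (naive O(n^2) DFT)."""
    n = len(signal)
    feats = []
    for k in range(n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        feats.append(abs(coeff))
    return feats

# One window of one feature-point coordinate: an oscillation plus a drift.
window = [i * 0.1 + math.cos(2 * math.pi * 2 * i / 16) for i in range(16)]
feats = fft_features(detrend(window))
peak = max(range(len(feats)), key=lambda k: feats[k])
print(peak)  # the strongest bin is 2, matching the injected oscillation
```

Concatenating such spectra over all 62 parameters yields the feature vector that the SVM classifier is trained on.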
Exercise Measurement using a Built-in Camera in a Mobile Device
Kaho Kato, Chengshuo Xia, Yuta Sugiura (Keio University)
01 Introduction
Background
・Exercise is important for maintaining our health.
・Our motivation to exercise can be improved by systems that measure and record our exercises automatically.
Problem
・Many studies have proposed methods that measure exercises automatically with cameras.
・In most measurement methods, users need a large space to capture the whole body within a camera's angle of view.
Approach
・Inaba [1] proposed an exercise measurement method using the human face's feature points obtained via mobile devices.
・We investigated what kinds of movements can be identified from the face's feature points.
[3] ULSee Inc. 2017. Single Face Tracker Plugin (Lite Version - 30 Face Tracking Points) - Asset Store. (2017). (Accessed on 01/13/2020).
Contact: kaho_0128@keio.jp
03 Recording Application
・An application based on Unity was developed to record and use built-in camera video on a mobile device.
・During exercising, it tracks 30 feature points on the face.
・When a user completes an exercise, it writes the data into CSV files.
04 Kinds of Exercise
・9 kinds of exercise were defined, referring to the research of Hashizume [2].
・Standing exercises: to train the lower half of the body. A user can do them anywhere.
・On-the-floor exercises: to train the trunk of the body. They need a specific place.
05 Evaluation
・Participants: 10 people.
・Order of exercises: randomized.
・Learning data: leave-one-participant-out cross-validation.
・Average accuracy: about 89.4%.
06 Limitations & Future Works
・Limitations: the system needs a high frame rate to get smooth data; it is vulnerable to darkness or shielding; it needs the user to keep the face within the camera's angle of view.
・Future works: we will investigate how many feature points are suitable for classification to reduce calculation costs, and gather more data to derive a more accurate model.
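The leave-one-participant-out scheme used in the evaluation holds out all of one participant's data per fold, so accuracy reflects generalization to unseen users. A generic sketch (the data layout is an assumption, not the poster's actual format):

```python
def leave_one_participant_out(samples):
    """Yield (held_out, train, test) splits, holding out each participant once.

    `samples` is a list of (participant_id, features, label) tuples.
    """
    participants = sorted({pid for pid, _, _ in samples})
    for held_out in participants:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy data: 3 participants, 2 samples each, with made-up labels.
data = [(p, [p * 0.1], "squat" if p % 2 else "walk")
        for p in (1, 2, 3) for _ in range(2)]
splits = list(leave_one_participant_out(data))
print(len(splits))                            # one split per participant
print(len(splits[0][1]), len(splits[0][2]))   # 4 train, 2 test
```

The reported figure would then be the mean test accuracy across all such folds.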
[1] Konomi Inaba et al. 2019. Tablet Application for Heel-raise Training by Detecting Face Movement. In Entertainment Computing 2019, Vol. 2019. 393–397.
Standing:
・Walking
・Jogging
・High knee raise exercise
・Heel raise and lower exercise
・Squat exercise
・No exercise (standing straight)
On the floor:
・Sit-ups exercise
・Push-ups exercise
・Back extension exercise
[2] Hiroshi Hashizume et al. 2014. Development and evaluation of a video exercise program for locomotive syndrome in the elderly. Modern Rheumatology 24, 2 (2014), 250–257.