This document proposes OpenSesame, a system for unlocking smartphones through handwaving biometrics. It analyzes handwaving patterns collected from 200 users to identify characteristics unique to each user. OpenSesame extracts four statistical features from the sensed handwaving data and uses support vector machines to classify users by their waving patterns within 1-2 seconds. Experiments show that OpenSesame achieves low false-positive rates of around 5% on average.
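A minimal sketch of the pipeline this abstract describes, assuming the four features are simple statistics such as mean, standard deviation, skewness, and kurtosis (the paper's exact features are not reproduced here) and using synthetic traces in place of real handwaving data:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.svm import SVC

def wave_features(trace):
    # Four illustrative statistics of one 1-2 s acceleration-magnitude trace.
    return [np.mean(trace), np.std(trace), skew(trace), kurtosis(trace)]

# Synthetic stand-in for enrolled handwaving traces from three users.
rng = np.random.default_rng(0)
traces = [rng.normal(loc=u, scale=1.0 + 0.2 * u, size=100)
          for u in (0, 1, 2) for _ in range(10)]
user_ids = [u for u in (0, 1, 2) for _ in range(10)]

clf = SVC(kernel="rbf", gamma="scale")
clf.fit([wave_features(t) for t in traces], user_ids)
print(clf.predict([wave_features(rng.normal(1.0, 1.2, 100))]))  # -> likely [1]
```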
Interactive Projector Screen with Hand Detection Using LED Lights (CSCJournals)
There are many ways to interact with an operating system: keyboard, mouse, or touch screen. When giving a presentation, it is inconvenient to control the OS and present at the same time. Our system, Interactive Wall, lets you control the OS with your hand directly on the projection screen, without a touch screen, offering a novel way to control the cursor. Because it works with existing equipment, there is no need to purchase an expensive touch screen. The system requires one web camera to detect the hand, plus three LED lights worn on the fingers to help it locate the hand and recognize the gesture being performed.
Complex Weld Seam Detection Using Computer Vision - LinkedIn (glenn_silvers)
This document discusses a project to use computer vision and a Microsoft Kinect sensor to enable real-time gesture control of a welding robot. The project aims to detect and track a user's hand gestures to control robot movement, and to define the weld seam region of interest to allow for seam detection. The plan involves accessing Kinect data, detecting and tracking the hand in 3D space, recognizing gestures for robot movement commands, extracting color values from the hand for skin detection, and using the hand position to define the seam region of interest. The work so far has successfully defined the hand and fingers, tracked hand motion, and extracted the seam region. Further work is needed to finalize the gesture commands and integrate control of the robot.
This document provides an overview of a computer graphics and visualization course. It includes links to two textbooks, definitions of key graphics concepts like raster, pixel, resolution and depth. It also covers different types of displays like CRT, flat panel displays, and emissive vs non-emissive displays. Specific display technologies like plasma panels, LCDs and graphics workstations are described. The document also discusses graphics input devices, graphics software, OpenGL and using graphics over networks.
This document discusses surface computing and multi-touch display devices. Surface computing allows users to interact directly with a touch-sensitive screen instead of using a keyboard and mouse. Multi-touch devices allow multiple touches at once, serving as a substitute for traditional input devices. The document then describes various touchscreen technologies like resistive and capacitive, how they work, their advantages and disadvantages. It also covers implementation of gesture recognition software to interpret user inputs on touchscreens. Finally, potential applications of these technologies are mentioned.
A touch panel allows users to interact with a computer by touching the screen directly. There are four main touchscreen technologies: resistive, capacitive, surface acoustic wave, and infrared. Resistive touchscreens detect pressure changes while capacitive screens detect electrical changes from a finger. Surface acoustic wave screens use ultrasound waves and infrared screens use light and shadows. Touchscreens vary in aspects like touch sensitivity, durability, and water resistance depending on the technology. Smart card readers are used to read contact-based or proximity-based smart cards for applications like attendance tracking, payments, and access control. Contact readers require the card to physically connect while contactless readers use radio frequencies to communicate faster with multiple cards simultaneously.
Draft activity recognition from accelerometer data (Raghu Palakodety)
This document describes a framework for classifying human activities like standing, walking, and running using data from an accelerometer sensor on a smartphone. It discusses collecting raw sensor data, preprocessing the data through smoothing and feature extraction, training classifiers on extracted features, and classifying new data in real-time. Random forest classification achieved 83.49% accuracy on this activity recognition task using accelerometer data from an Android application.
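A hedged sketch of that framework (not the authors' code): smooth raw accelerometer windows with a moving average, extract a few common time-domain features, and cross-validate a random forest. The data below is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Stand-ins for labelled accelerometer windows: 0=standing, 1=walking, 2=running.
windows = [rng.normal(0.0, scale, 50) for scale in (0.1, 0.5, 1.5) for _ in range(30)]
labels = [c for c in (0, 1, 2) for _ in range(30)]

def features(w):
    w = np.convolve(w, np.ones(5) / 5, mode="valid")   # moving-average smoothing
    return [w.mean(), w.std(), w.min(), w.max(), np.abs(np.diff(w)).mean()]

X = [features(w) for w in windows]
print(cross_val_score(RandomForestClassifier(n_estimators=100), X, labels, cv=5).mean())
```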
IRJET - Offline Location Detection and Accident Indication using Mobile Sensors (IRJET Journal)
This document presents a system for offline location detection and accident indication using mobile sensors. The system uses a smartphone's intrinsic sensors (the accelerometer) to detect movement and determine whether an accident has occurred. If an accident is detected, the smartphone's location is obtained from satellites and sent to emergency contacts via text message. The system aims to address shortcomings of existing location tracking for accident victims by providing an accurate location even when the phone is offline. It describes the system architecture and algorithms, along with case studies evaluating sensor-based deployment algorithms for increasing coverage in wireless sensor networks. The proposed system could help provide timely assistance in emergency situations.
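A toy illustration of the detect-and-alert flow just described; the threshold, coordinates, and SMS call are placeholders, not the paper's implementation:

```python
CRASH_THRESHOLD_G = 4.0  # assumed acceleration-magnitude threshold for a crash

def send_sms(number, text):
    # Stand-in for the phone's real messaging API.
    print(f"SMS to {number}: {text}")

def on_accel_sample(ax, ay, az, location, contacts):
    magnitude = (ax ** 2 + ay ** 2 + az ** 2) ** 0.5
    if magnitude > CRASH_THRESHOLD_G:
        for number in contacts:
            send_sms(number, f"Accident detected at {location}")

# Hypothetical sample well above the threshold triggers the alert.
on_accel_sample(5.0, 1.5, 1.0, "12.97 N, 77.59 E", ["+1-000-000-0000"])
```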
IRJET - Survey Paper on Vision based Hand Gesture Recognition (IRJET Journal)
This document presents a survey of previous research on vision-based hand gesture recognition. It discusses various methods that have been used, including discrete wavelet transforms, skin color segmentation, orientation histograms, and neural networks. The document proposes a new methodology using webcam image capture, static and dynamic gesture definition, image processing techniques like localization, enhancement, segmentation, and morphological filtering, and a convolutional neural network for classification. The goal is to develop a more efficient and accurate system for hand gesture recognition and human-computer interaction.
Flow Trajectory Approach for Human Action Recognition (IRJET Journal)
This document proposes a method for human action recognition in videos using scale-invariant feature transform (SIFT) and flow trajectory analysis. The key steps are:
1. Extract SIFT features from each video frame to detect keypoints.
2. Track the keypoints across frames and calculate the magnitude and direction of motion for each keypoint.
3. Analyze the tracked keypoints and their motion parameters to recognize the human action occurring in the video, such as walking or running (steps 1-2 are sketched below).
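A minimal OpenCV sketch of steps 1-2, assuming two grayscale frames are available; the action-recognition stage (step 3) is method-specific and omitted:

```python
import cv2
import numpy as np

def keypoint_motion(frame_a, frame_b):
    # Step 1: detect SIFT keypoints and descriptors in each frame.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)
    # Step 2: match keypoints across frames and measure their motion.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    motions = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:            # Lowe's ratio test
            (xa, ya), (xb, yb) = kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt
            dx, dy = xb - xa, yb - ya
            motions.append((np.hypot(dx, dy), np.degrees(np.arctan2(dy, dx))))
    return motions  # (magnitude, direction) per tracked keypoint
```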
Human Action Recognition using Contour History Images and Neural Networks Cla... (IRJET Journal)
This document proposes a new method for human action recognition using contour history images extracted from silhouettes, tracking of the body's center movement, and the relative dimensions of the bounding box containing each contour history image. Features are extracted and reduced using three different methods: dividing the contour history images into rectangles, a shallow autoencoder neural network, and a deep autoencoder neural network. The reduced features are classified using a neural network classifier. The proposed method achieved a recognition rate of 98.9% on a standard human action dataset, demonstrating its potential for real-time human action recognition applications.
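A sketch of the shallow-autoencoder reduction step mentioned above, with illustrative sizes (the paper's layer dimensions are not reproduced); the bottleneck output would feed the downstream neural-network classifier:

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(200, 1024).astype("float32")  # stand-in flattened CHI vectors

inputs = keras.Input(shape=(1024,))
code = keras.layers.Dense(64, activation="relu")(inputs)        # bottleneck
outputs = keras.layers.Dense(1024, activation="sigmoid")(code)  # reconstruction

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)

encoder = keras.Model(inputs, code)
reduced = encoder.predict(x)        # 64-D features for the classifier
print(reduced.shape)                # (200, 64)
```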
Hand Gesture Controls for Digital TV using Mobile ARM Platform (ijsrd.com)
This paper presents a new approach for controlling a digital television using a real-time camera. The proposed method uses a camera, a mobile ARM platform, and computer vision techniques such as image segmentation and gesture recognition to control TV operations such as changing channels and increasing or decreasing the volume. The authors use an ARM-based mobile platform with an OMAP processor and implement the image processing with the OpenCV library. Hand detection is a key stage for applications such as gesture recognition and hand tracking, and the paper proposes a new method to extract the hand region, and subsequently the fingertips, from color images.
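A rough version of the skin-based hand-extraction step described here; the HSV thresholds are common defaults, not the paper's values, and "frame.jpg" is a hypothetical camera frame:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                        # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))   # assumed skin-color range
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea) if contours else None
# Fingertips would then be located on this contour, e.g. via convexity defects.
```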
IRJET - A Vision based Hand Gesture Recognition System using Convolutional... (IRJET Journal)
This document describes a vision-based hand gesture recognition system using convolutional neural networks. The system captures images of hand gestures using a camera, pre-processes the images, and classifies the gestures using a CNN model. The CNN architecture includes convolutional layers, max pooling layers, dropout layers, and fully connected layers. The system was trained on a dataset of images representing 7 different hand gestures. Testing achieved over 90% accuracy in recognizing the gestures. This vision-based approach allows for natural human-computer interaction without physical devices.
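An illustrative Keras model matching the layer types listed (convolution, max pooling, dropout, fully connected); the input shape and filter counts are assumptions, not the paper's exact architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),            # assumed grayscale gesture crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),     # 7 hand-gesture classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```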
This document discusses the development of an Android application for physical activity recognition using the accelerometer sensor. It provides background on the Android operating system and its open development environment. It then summarizes relevant research papers on activity recognition using mobile sensors. The document outlines the process of collecting and labeling accelerometer data from smartphone sensors during different physical activities. Features are extracted from the sensor data and several machine learning classifiers are evaluated for activity recognition. The application will recognize activities and track metrics like calories burned, distance traveled, and implement fall detection and medical reminders.
IRJET - Computer Aided Touchless Palmprint Recognition Using SIFT (IRJET Journal)
This document discusses a computer aided touchless palmprint recognition system using Scale Invariant Feature Transform (SIFT). SIFT is used to extract features from touchless palmprint images that are invariant to changes in scale, rotation, and translation. The system involves preprocessing images, extracting SIFT features, and matching features to recognize and authenticate individuals. An experiment was conducted using 16 real palmprint images with varying conditions. The system achieved 93.75% accuracy in recognition using SIFT features, demonstrating its effectiveness for touchless palmprint recognition compared to other approaches. Future work could explore using color information and developing algorithms to handle variations like cosmetics or injuries.
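A hedged sketch of SIFT-based verification as described: count ratio-test matches between a probe palmprint and an enrolled template, and accept above a tuned threshold (the file names and threshold value are assumptions):

```python
import cv2

def palm_match_score(probe, template):
    # Count SIFT matches that survive Lowe's ratio test.
    sift = cv2.SIFT_create()
    _, des_p = sift.detectAndCompute(probe, None)
    _, des_t = sift.detectAndCompute(template, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_p, des_t, k=2)
    return sum(1 for m, n in pairs if m.distance < 0.7 * n.distance)

probe = cv2.imread("palm_probe.png", cv2.IMREAD_GRAYSCALE)       # hypothetical
template = cv2.imread("palm_template.png", cv2.IMREAD_GRAYSCALE)
MATCH_THRESHOLD = 20                                # assumed, tuned on real data
print("accept" if palm_match_score(probe, template) >= MATCH_THRESHOLD else "reject")
```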
The document describes a gesture-based lamp control system created by team members William, Alex, and Amy. The system uses motion sensors worn by users to detect gestures and activities that control smart lamps connected over a local network. The system diagram shows motion sensors communicating with beacons and a computer program that interfaces with Philips Hue lamps and bridges using Bluetooth, ZigBee, and HTTP. Amy's subsystem implements data collection from sensors and controls lamps and feedback using vibrators. William's subsystem performs gesture recognition using dynamic time warping and adaptive training. Alex's subsystem identifies targets using magnetometer data and DTW distances to predefined templates. The team members discuss their individual work and challenges in signal processing, recognition, and training.
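For reference, the core of the dynamic-time-warping recognition the summary mentions is small enough to sketch; this is the textbook DTW distance, not the team's code:

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) DTW alignment cost between two 1-D sequences.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Nearest-template classification of a gesture sample:
templates = {"circle": [0, 1, 2, 1, 0], "swipe": [0, 2, 4, 6, 8]}
sample = [0, 1, 2, 2, 1, 0]
print(min(templates, key=lambda k: dtw_distance(sample, templates[k])))  # circle
```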
LED based leather area measuring machine (Jamal_2011)
1. The document discusses the design and development of an LED based machine for measuring the irregular surface area of leather sheets. It proposes using an array of optical sensors along a conveyor belt to detect the edges of passing leather sheets and calculate their surface areas.
2. Existing area measurement techniques like pinwheel mechanisms, roller planimeters, and vision-based systems are analyzed. The proposed system aims to be faster, more accurate, and self-calibrating by using an array of IR sensors to detect leather edges as it passes over a conveyor belt.
3. The proposed system works by sampling the leather sheet width using an array of IR sensors as the sheet passes over the conveyor belt, then calculates the area from the sampled widths (a sketch of this computation follows).
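A sketch of that area computation under stated assumptions: each sample contributes (covered sensors x sensor pitch) x (belt travel per sample) to the total area; the pitch and step values below are made up:

```python
SENSOR_PITCH_M = 0.005   # spacing between adjacent IR sensors (assumed)
BELT_STEP_M = 0.002      # belt travel between samples (assumed)

def sheet_area(samples):
    # samples: one boolean list per sampling instant, True = leather detected.
    return sum(sum(row) * SENSOR_PITCH_M * BELT_STEP_M for row in samples)

# Example: 10 samples in which 40 of 64 sensors see leather each time.
samples = [[True] * 40 + [False] * 24] * 10
print(f"{sheet_area(samples):.4f} m^2")   # 0.0040 m^2
```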
LED based leather area measuring machine (Jamal_2011)
1) The document discusses the design of an LED based machine for measuring the irregular surface area of leather sheets. It uses an optical sensor array and precision motion control to follow the edge of passing leather sheets.
2) Existing area measurement techniques like pinwheel mechanisms, roller planimeters, and vision systems are analyzed. The proposed system uses IR transceivers to detect the edge of passing leather and calculate the area from the detection pattern.
3) The system is self-calibrating, fast, accurate to within 0.04% error, and uses an onboard signaling system to accumulate and display the measured area directly. This improves over existing techniques.
LED based leather area measuring machine (Jamal_2011)
1) The document discusses the design of an LED based machine for measuring the irregular surface area of leather sheets. It uses an optical sensor array and precision motion control system to follow the curvature of passing leather sheets on a conveyor belt.
2) As the leather sheet passes through the sensor array, the edge is detected and fed to a controller to approximate the area. This allows measurement of irregular shapes by neglecting rounded corners and ellipses.
3) To minimize error, the motion control system needs only a small sensor footprint and close sensor spacing to accurately map the leather sheet contour and calculate the surface area.
The document describes a study that investigates using gestures as a form of authentication on smartwatches. The researchers collected accelerometer data from smartwatches as users performed different gestures. They extracted time and frequency domain features from the data and used k-nearest neighbors and random forest classifiers to distinguish between gestures and identify individual users performing the same gesture. Through 5-fold cross validation experiments, they found it was possible to accurately classify gestures and identify users with error rates comparable or better than previous gait-based authentication studies. This suggests gesture-based authentication on smartwatches is a viable solution.
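A sketch of that evaluation pipeline with synthetic data: common time- and frequency-domain features (not necessarily the study's exact set), a k-NN classifier, and 5-fold cross-validation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# Synthetic accelerometer gestures: 20 repetitions for each of three users.
gestures = [rng.normal(loc, 1.0, 128) for loc in (0.0, 0.4, 0.8) for _ in range(20)]
users = [u for u in range(3) for _ in range(20)]

def td_fd_features(sig):
    spectrum = np.abs(np.fft.rfft(sig))
    return [sig.mean(), sig.std(), np.ptp(sig),              # time domain
            spectrum[1:5].mean(), float(spectrum.argmax())]  # frequency domain

X = [td_fd_features(g) for g in gestures]
print(cross_val_score(KNeighborsClassifier(n_neighbors=3), X, users, cv=5).mean())
```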
Data Visualization and Communication by Big Data (IRJET Journal)
1) The document discusses data visualization and communication of big data. It focuses on analyzing and visualizing real-time sensor data from networks of tiny sensor nodes.
2) Processing, analyzing, and communicating big data from internet activity and expanding sensor networks presents challenges for data visualization. Data scientists help address these challenges.
3) The document discusses classifying sensor data dimensions and properties in order to select appropriate visualizations, and proposes Python as a programming language for working with sensor network data and creating visualizations.
Final presentation from William, Amy and Alex (Ziwei Zhu)
The document describes a gesture-based lamp control system created by team members William, Alex, and Amy. The system uses motion sensors worn by users to detect gestures and activities that control smart lamps connected over a local network. The system diagram shows motion sensors communicating with beacons and a computer program that interfaces with Philips Hue lamps and bridges using Bluetooth, ZigBee, and HTTP. Amy's subsystem implements data collection from sensors and controls lamps and feedback using vibrators. William's subsystem uses dynamic time warping for gesture recognition and classification. Alex's subsystem identifies targets using magnetometer data and DTW distances to predefined templates. The team discusses their work on signal processing, adaptive training, recognition, and solving related challenges.
Advanced Software Engineering course - Guest Lecture
A4WSN- Architecture 4 Wireless Sensor Networks
Here you can find the research paper presenting the concepts described in this lecture: http://goo.gl/XBB4k
This presentation has been developed in the context of the Advanced Software Engineering course at the DISIM Department of the University of L’Aquila (Italy).
http://www.di.univaq.it/malavolta
The document details and evaluates different technologies for gesture recognition, including computer vision, accelerometers, and gloves. It provides a literature review of papers on vision-based and accelerometer-based gesture recognition techniques. The document proposes parameters for evaluating and comparing these technologies, such as resolution, accuracy, latency, range of motion, user comfort, and cost. It assigns weights to these parameters based on the goals of developing a gesture recognition system for research purposes.
Abnormality in Elderly Fall using Android Smartphone (Shivi Tandon)
This document describes a student project that aims to develop an Android application to detect abnormal elderly falls using accelerometer and heart rate data collected from Android sensors. The application would classify falls as normal or abnormal and send an SMS alert to a doctor in the case of an abnormal fall. The project uses machine learning techniques like decision trees, Naive Bayes and k-NN classifiers to analyze sensor data and detect falls. Implementation details include collecting accelerometer and heart rate data and storing it in CSV files to train classifiers using Weka.
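An illustrative version of the classification step with synthetic data; the feature choices, thresholds, and alert text are assumptions, and a real deployment would use the phone's sensor and SMS APIs:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

def sample(is_abnormal):
    peak_g = rng.normal(3.0 if is_abnormal else 1.2, 0.3)    # peak accel magnitude
    heart_rate = rng.normal(110 if is_abnormal else 80, 5)   # bpm
    return [peak_g, heart_rate]

X = [sample(label) for label in (0, 1) for _ in range(50)]
y = [label for label in (0, 1) for _ in range(50)]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
if clf.predict([[3.2, 115]])[0]:
    print("Abnormal fall detected - would send SMS alert to the doctor")
```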
SHERLOCK: Energy Efficient and Continuous Environment Sensing Android Applica... (IRJET Journal)
1. The document describes the development of an Android application called Sherlock that uses the sensors in smartphones to continuously sense the environment and optimize performance based on context.
2. Sherlock collects data from sensors like the accelerometer, GPS, and proximity sensors to detect the phone's placement, nearby surfaces, and location.
3. It then uses this context information to enable energy-saving features and simulate higher-level applications like automatically answering calls, adjusting volume based on surface, and monitoring noise levels.
Intelligent Accident Detection, Prevention and Reporting System (IRJET Journal)
1. The document presents a system for intelligent accident detection, prevention, and reporting using technologies like convolutional neural networks (CNN), ultrasonic sensors, and SMS messaging.
2. The system aims to detect accidents using video frames analyzed by a CNN model and prevent accidents using ultrasonic sensors to measure distance between vehicles.
3. If an accident is detected, the system will send SMS alerts to emergency responders like police and medical services to provide quick help to victims. The system is intended to reduce accident deaths by facilitating timely emergency response.
Emotion Detection Using Facial Expression Recognition to Assist the Visually Impaired (IRJET Journal)
This document summarizes a research paper on emotion detection using facial expression recognition to assist the visually impaired. The system aims to use machine learning algorithms to classify facial expressions into different emotions (happy, sad, surprise, etc.) by detecting faces, extracting facial features, and recognizing expressions in real-time video. It is designed using a Raspberry Pi with a webcam to capture video and detect emotions to provide audio feedback to help visually impaired people. The system architecture includes modules for face detection using Haar cascades, preprocessing, feature extraction, and emotion classification trained on image datasets. Experimental results show over 80% accuracy in classifying emotions based on facial expressions.
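The face-detection front end of that pipeline is standard OpenCV; a minimal sketch follows (the downstream emotion classifier is model-specific and omitted):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # webcam, as on the Raspberry Pi setup
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]   # crop passed to the emotion classifier
cap.release()
```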
Text Detection and Recognition in Natural Images (IRJET Journal)
1. The document presents a framework for detecting and recognizing text in natural images.
2. The framework includes candidate text generation using MSER extraction and the discrete wavelet transform, followed by non-text filtering using occupation ratio and stroke width (the MSER stage is sketched after this list).
3. A component level classifier then determines relationships between connected components, and an SVM classifier further rejects non-text blocks to output the detected text, which is then recognized using OCR.
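A sketch of the candidate-generation stage (item 2), assuming a grayscale input image; the occupation-ratio thresholds are placeholders, and the stroke-width, component, and SVM stages are omitted:

```python
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)

candidates = []
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    occupation = len(pts) / float(w * h)   # region pixels / bounding-box area
    if 0.2 < occupation < 0.9:             # assumed occupation-ratio filter
        candidates.append((x, y, w, h))
print(len(candidates), "candidate text regions")
```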
iGUARD: An Intelligent Way To Secure - Report (Nandu B Rajan)
This document presents a project report for an intelligent door lock system called iGuard. It was submitted by Nandu B Rajan in partial fulfillment of the requirements for a Bachelor of Technology degree in computer science and engineering. The report includes sections on requirements analysis, system design, implementation, testing, and conclusions. It aims to develop a door lock system that provides strengthened security functions such as sending images of unauthorized access attempts to users and alerting users if the lock is physically damaged.
iGUARD: An Intelligent Way To Secure - Presentation (Nandu B Rajan)
This document presents the design of an IoT-based digital door lock system that aims to improve security and provide additional monitoring features. The system uses sensors to detect intruders, allows users to remotely view camera footage and control the lock, and can automatically unlock when an authenticated user is detected near the door. It provides advantages over traditional locks like remote access and family member tracking. The system consists of hardware components like a Raspberry Pi controller, sensors, camera and Bluetooth/WiFi modules. The software includes modules for user registration and authentication, remote lock control, location tracking, messaging and accessing the event history log.
Search Engine Optimization (SEO) Seminar Report (Nandu B Rajan)
SEO Seminar Report.
Gives a basic idea about Search Engine Optimization.
While nobody can guarantee top-level positioning in search engine organic results, proper search engine optimization can help. Because search engines such as Google, Yahoo!, and Bing are so important today, it is necessary to make each page in a Web site conform to the principles of good SEO as much as possible.
To do this it is necessary to:
• Understand the basics of how search engines rate sites
• Use proper keywords and phrases throughout the Web site
• Avoid giving the appearance of spamming the search engines
• Write all text for real people, not just for search engines
• Use well-formed alternate attributes on images
• Make sure that the necessary meta tags (and title tag) are installed in the head of each Web page
• Have good incoming links to establish popularity
• Make sure the Web site is regularly updated so that the content is fresh
LPG Booking System [ bookmylpg.com ] Report (Nandu B Rajan)
BOOK LPG FROM ANYWHERE (Mini Project 2016)
In today's busy life, no one wants to waste time on time-consuming, hassle-prone refill booking methods like the IVR Booking System. We propose a simple, interactive, hassle-free, less time-consuming, and efficient LPG Booking System. It also benefits the gas agencies, which receive refill booking requests and consumer details instantly. Our system is futuristic and can easily be updated to meet future needs.
Features:-
To book an LPG cylinder, you must be an authorised customer. An authorised customer can register on the website to get a user id and password. After you have registered, you can log on to the LPG portal using the user id and password provided to you.
Pros:-
Consumers can book a refill with just one click, and they can post queries or complaints. Only a username and password are needed; valid consumers who do not have one can obtain them through a simple registration process. Only the Admin can access the database and add consumers and staff, so the system is secure. Authorised staff can view bookings and consumer details without any hassle and mark whether a refill has been delivered; once delivered, the refill request is automatically cleared.
Search Engine Optimization (SEO) Seminar Report (Nandu B Rajan)
The document discusses search engine optimization (SEO) techniques for improving a website's search engine ranking, including both on-page optimization of elements within the website and off-page optimization involving external links and social media. Proper use of keywords, metadata, content, linking, and social signals can help a website rank higher organically in search engine results pages. Understanding and applying both on-page and off-page SEO strategies is important for online business success and increased website traffic.
Seminar on Search Engine Optimization.
Because users rarely click on links beyond the first search results page, boosting search-engine ranking has become essential to business success. With a deeper knowledge of search-engine optimization best practices, organizations can avoid unethical practices and effectively monitor strategies approved by popular search engines.
Whenever you enter a query in a search engine and hit 'enter' you get a list of web results that contain that query term. Users normally tend to visit websites that are at the top of this list as they perceive those to be more relevant to the query. If you have ever wondered why some of these websites rank better than the others then you must know that it is because of a powerful web marketing technique called Search Engine Optimization (SEO).
SEO is a technique which helps search engines find and rank your site higher than the millions of other sites in response to a search query. SEO thus helps you get traffic from search engines.
leewayhertz.com - AI in Predictive Maintenance: Use Cases, Technologies, Benefits ... (alexjohnson7307)
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Trusted Execution Environment for Decentralized Process Mining (LucaBarbaro3)
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
A Comprehensive Guide to DeFi Development Services in 2024 (Intelisync)
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
3. ABSTRACT
Screen locking/unlocking is important for modern smart phones to avoid unintentional operations and to secure personal data. Once the phone is locked, the user must take a specific action or provide some secret information to unlock it. The existing unlocking approaches can be categorized into four groups: motion, password, pattern, and fingerprint. These approaches do not serve smart phones well due to deficient security, high cost, or poor usability. We collect 200 users' handwaving actions with their smart phones and discover an appealing observation: the waving pattern of a person is unique, stable, and distinguishable. In this paper, we propose OpenSesame, which employs the user's waving pattern for locking/unlocking. The key feature of our system lies in using four fine-grained statistical features of handwaving to verify users. Moreover, we utilize a support vector machine (SVM) for accurate and fast classification. Our technique is robust and compatible across different brands of smart phones, without the need for any specialized hardware. Results from comprehensive experiments show that the mean false positive rate of OpenSesame is around 5%.
4. INTRODUCTION
Nowadays, smart phones are no longer devices used only to call or text others. The screen locker is a fundamental utility for smart phones that prevents the device from unauthorized use. Classical screen lockers were proposed long ago. Some of them are:
• Slide to Unlock
• PIN
• Graphical password and so on...
5. To enhance the security as well as the flexibility, many biometric authentication methods have been introduced for screen lockers. The secrets of these methods cannot be easily spied on or reproduced, since they identify users based on their natural features. Biometric measures are grouped into two main categories:
• Physiological biometrics
• Behavior biometrics
With behavior biometrics, different users wave their smart phones to produce distinct features. These features derive from the user's unique waving behavior.
6. WAVING CHARACTERIZATION
WAVING SENSING
• To precisely characterize a user's waving actions, selecting appropriate sensors is necessary. Thanks to the tremendous growth of MEMS technology, many powerful sensors are equipped in smart phones today, such as the camera, microphone, proximity sensor, accelerometer, gyroscope, and magnetic sensor.
• The selected sensor should be able to depict the handwaving. In addition, it should be energy-efficient, stable, cheap, and compatible for wide deployment in most kinds of smart phones.
• The accelerometer allows smart phones to detect the movement of the device, and we therefore select it for waving sensing.
7. DATA COLLECTION
• To investigate the uniqueness of handwaving, we collect waving action data from 200 distinct smart phone users. Each user is asked to shake the smart phone for more than 10 seconds and to repeat this three times.
• The data is collected in two sampling modes: fast and normal.
• In the fast mode, the accelerometer samples every 10 to 20 milliseconds, corresponding to the acceleration value change rate. There are 100 users' traces collected using this mode. In the normal mode, the sampling interval is 200 milliseconds and 100 users' traces are sampled.
• All the raw waving actions are recorded as a sequence of tuples represented as (x_t, y_t, z_t).
8. WAVING MEASUREMENT
• The traces are illustrated in a 3-D acceleration space, A-Space for short, where the raw tuples (x_t, y_t, z_t) are connected in time order.
9. We define the waving function to measure the global geometric properties of the waving shapes. It is formally given by

$$f = S(A), \quad A = \{(x_{t_0}, y_{t_0}, z_{t_0}), (x_{t_1}, y_{t_1}, z_{t_1}), \ldots, (x_{t_n}, y_{t_n}, z_{t_n})\}$$

where A is the set of raw waving tuples collected between t_0 and t_n. The waving function takes A as input and outputs a feature vector f. A good waving function should have the following properties, defined in detail later in this document:
Efficiency
Invariance
Robustness
10. To meet the above requirements, we propose four waving functions, S1, S2, S3, and S4, as follows:
S1: The centroid C is computed first, and then two random points A and B in the A-Space are chosen. The angle ∠ACB among these three points is measured. The selection of random points is repeated N times. As a result, N angles are output and the corresponding PDF of these angles is reported as the feature vector.
S2: This waving function is similar to S1. The difference is that all three points are randomly selected. One angle among the three angles formed by these points is recorded. As a result, the corresponding PDF of these angles is reported as the feature vector.
11. S3: While both S1 and S2 concentrate on the angle parameter, the other two waving functions, S3 and S4, focus on the distances among the points. S3 randomly selects N points and calculates the Euclidean distance between the centroid and each of the N selected points. Finally, the corresponding PDF of distances is calculated as the feature vector.
S4: Randomly selects N pairs of points and calculates their Euclidean distances. The PDF of these distances is the feature vector.
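Since S4 later turns out to be the most stable of the four functions, a minimal Java sketch of it may help: sample N random pairs of points, bin their Euclidean distances into a histogram, and normalize the histogram into a discrete PDF. The constants N, BINS, and MAX_DIST are illustrative assumptions; the paper does not specify these values.

```java
import java.util.Random;

// Minimal sketch of the S4 waving function: the PDF of Euclidean
// distances between N randomly chosen pairs of A-Space points.
public class S4Function {
    static final int N = 1024;          // number of random pairs (assumption)
    static final int BINS = 32;         // histogram resolution (assumption)
    static final double MAX_DIST = 4.0; // max expected distance (assumption)

    // points[i] = {x, y, z}: one raw acceleration tuple in A-Space
    public static double[] s4(double[][] points) {
        Random rnd = new Random();
        double[] pdf = new double[BINS];
        for (int k = 0; k < N; k++) {
            double[] a = points[rnd.nextInt(points.length)];
            double[] b = points[rnd.nextInt(points.length)];
            double d = Math.sqrt(Math.pow(a[0] - b[0], 2)
                               + Math.pow(a[1] - b[1], 2)
                               + Math.pow(a[2] - b[2], 2));
            // Clamp into the last bin if the distance exceeds MAX_DIST.
            int bin = Math.min((int) (d / MAX_DIST * BINS), BINS - 1);
            pdf[bin]++;
        }
        for (int i = 0; i < BINS; i++) pdf[i] /= N; // normalize to a PDF
        return pdf;
    }
}
```

S1 through S3 follow the same skeleton, differing only in whether angles or centroid distances are sampled.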
13. OPENSESAME
OVERVIEW
Sensing: This component simply records the user's handwaving action data.
Filter: In practice, we find that there always exist some silent periods when no waving, or only very low-level sensing data, is detected. For better feature extraction, we use the filter component to wipe out the silent periods.
14. Fetcher: The filtered raw tuples are fed into the fetcher component, in which the four waving functions are applied to fetch the waving features.
Classifier: To discriminate authorized users from unauthorized users, a Support Vector Machine (SVM) is employed in our system for classification.
Matcher: In the last component, the extracted feature is used to determine whether it matches the pre-defined one.
15. FILTER
Silent periods seriously affect the accuracy of OpenSesame, so we must filter out the data captured during these periods. The i-th raw tuple with composed acceleration value A_i is wiped out if it satisfies:

$$\sum_{x=i-b}^{i} \left( A_x - \frac{1}{b+1} \sum_{y=i-b}^{i} A_y \right)^2 < \alpha$$

where b is called the tolerant static period, representing the number of acceleration points used to determine the stability of an acceleration point, and α is the threshold used to filter the silent points.
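A minimal Java sketch of this filtering rule follows, assuming the composed acceleration magnitudes A_i have already been computed from the raw (x_t, y_t, z_t) tuples. The constants B and ALPHA are illustrative placeholders, not values reported in the paper.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the filter component: drop point i when the
// variance-like stability sum over its last b+1 samples is below alpha.
public class SilentFilter {
    static final int B = 5;           // tolerant static period (assumption)
    static final double ALPHA = 0.05; // silence threshold (assumption)

    // A[i] is the composed acceleration magnitude of the i-th raw tuple.
    public static List<Double> filter(double[] A) {
        List<Double> kept = new ArrayList<>();
        for (int i = B; i < A.length; i++) {
            double mean = 0;
            for (int y = i - B; y <= i; y++) mean += A[y];
            mean /= (B + 1);
            double stability = 0;
            for (int x = i - B; x <= i; x++) stability += Math.pow(A[x] - mean, 2);
            if (stability >= ALPHA) kept.add(A[i]); // keep non-silent points only
        }
        return kept;
    }
}
```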
17. FETCHER
• After the filter component, we need to generate the feature vector of the user's handwaving action.
• Using the whole field set as an input has two shortcomings.
• First, the number of acceleration points in a field set is large, usually more than 1000.
• Second, to unlock the smart phone, the user would be required to shake the smart phone for a long period to generate the same amount of waving data.
18. • The waving action of a user always shows a repeating property.
• In order to generate the feature vectors, we first select a window of size w, where w is much smaller than the size of the field set of data.
• Then we apply the waving function to this windowed input and deliver the PDF of the feature vectors to describe the feature of the waving action, as sketched below.
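As a rough illustration of the windowing step, the following hypothetical Java helper slices a filtered trace into consecutive windows of size w, each of which can then be fed to a waving function. The structure is an assumption; the paper does not show code for this step.

```java
// Minimal sketch: split the filtered A-Space trace into consecutive,
// non-overlapping windows of w tuples each.
public class Windowing {
    // points[i] = {x, y, z}; returns windows[k][j] = j-th tuple of window k.
    public static double[][][] windows(double[][] points, int w) {
        int n = points.length / w; // discard the trailing partial window
        double[][][] out = new double[n][w][];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < w; j++)
                out[i][j] = points[i * w + j];
        return out;
    }
}
```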
19. CLASSIFIER
• The feature classifier is designed to generate a standard for discriminating authorized users from unauthorized users based on the feature vectors of the input waving action data.
• In OpenSesame, the support vector machine (SVM for short) is selected as the classifier. The SVM classifier is used to classify a group of linearly inseparable training tuples into two classes.
20. • By injecting a sufficient number of training tuples into the SVM classifier, a classification model can be obtained to verify the user's authentication data.
• In OpenSesame, the label of a training tuple is either +1 or −1.
• When y = +1, the tuple is generated from the class of unauthorized users. On the contrary, y = −1 means the tuple belongs to the authorized user's class. A sketch using LIBSVM's Java API follows.
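The paper names LIBSVM as its SVM implementation, so this sketch uses LIBSVM's Java API (svm_problem, svm_parameter, svm.svm_train, svm.svm_predict) to train and query a classifier over fetched feature vectors. The kernel and parameter values are illustrative assumptions, not settings reported in the paper.

```java
import libsvm.*;

// Minimal sketch of training and querying the SVM classifier with LIBSVM.
// Labels follow the paper: -1 = authorized user, +1 = unauthorized users.
public class WaveClassifier {
    public static svm_model train(double[][] features, double[] labels) {
        svm_problem prob = new svm_problem();
        prob.l = features.length;
        prob.y = labels;
        prob.x = new svm_node[features.length][];
        for (int i = 0; i < features.length; i++)
            prob.x[i] = toNodes(features[i]);

        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.C_SVC;
        param.kernel_type = svm_parameter.RBF; // kernel choice is an assumption
        param.C = 1.0;                          // assumed value
        param.gamma = 1.0 / features[0].length; // assumed value
        param.cache_size = 100;
        param.eps = 1e-3;
        return svm.svm_train(prob, param);
    }

    public static boolean isAuthorized(svm_model model, double[] feature) {
        return svm.svm_predict(model, toNodes(feature)) == -1.0;
    }

    private static svm_node[] toNodes(double[] v) {
        svm_node[] nodes = new svm_node[v.length];
        for (int i = 0; i < v.length; i++) {
            nodes[i] = new svm_node();
            nodes[i].index = i + 1; // LIBSVM feature indices are 1-based
            nodes[i].value = v[i];
        }
        return nodes;
    }
}
```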
21. MATCHER
• The matcher component is performed when the user
activates the authentication interface of OpenSesame and
wants to unlock the smart phone.
• The user shakes the smart phone to input his waving
action as the authentication data.
• The most important requirement is that the feature
matching phase has to be processed within a short time
period, say 1 or 2 seconds.
22. • To reduce the response time, two aspects need to be considered. The first is to reduce the number of repetitions during authentication. This can be achieved by reducing the false negative rate of authentication.
• The second is to reduce the waving time in the matcher component. By using a small waving function input with window size w, the waving time can be reduced.
• We calculate the real-time stability of the i-th point P_i as

$$\sum_{x=i-b}^{i} \left( A_x - \frac{\sum_{y=i-b}^{i} A_y}{b+1} \right)^2.$$
• Once the real-time stability value is greater than the threshold, acceleration point P_i is set to be the initial point, and the waving action detection terminates when the acceleration point sequence {P_i, P_{i+1}, ..., P_{i+w}} has been collected.
24. • We implement OpenSesame on Android-based smart phones. The version of the Android system is 2.3.3. The app is developed with the Android SDK using Java SE.
• We use the open source library LIBSVM [16] to perform the SVM classification. LIBSVM is integrated software for support vector classification. The version we used is LIBSVM-3.12.
25. METRICS
We evaluate OpenSesame in terms of the authentication
accuracy. The authentication accuracy is measured via the
following metrics:
False Negative Rate (FNR)
True Positive Rate (TPR)
False Positive Rate (FPR)
26. EXPERIMENT SETUP
• To investigate the uniqueness of handwaving, we collect waving action data from 200 distinct smart phone users.
• Each user pushes the 'start' button on the screen and begins to wave the smart phone until a hint sound is played by the phone.
• This waving process lasts for more than 10 seconds. The user repeats the above action three times to complete the data collection.
27. IMPACT OF WAVING FUNCTIONS
There are four waving functions to parameterize the A-Space representation of handwaving; the window size is maintained at 50 tuples.
29. • We change the window size from 5 to 50 in increments of 5 and employ S4 for testing.
• The average FNR decreases from 20% to 8% and the average FPR reduces from 42% to 18% as the window size increases. This shows that a larger window helps improve the accuracy.
• The number of training tuples also affects the accuracy.
33. CONCLUSION
We propose a novel behavioral-biometric-based authentication approach called OpenSesame for smart phones. We design four waving functions to fetch the unique pattern of a user's handwaving actions. By applying the SVM classifier, the smart phone can accurately verify the authorized user via the pattern of the handwaving action. Experiment results based on 200 distinct users' handwaving actions show that OpenSesame reaches a high level of security and robustness, and achieves a good user experience.
34. REFERENCES
• D. Florencio and C. Herley, “A large-scale study of web password habits,” in Proc. of ACM WWW, 2007.
• J. Bonneau, “The science of guessing: analyzing an anonymized corpus of 70 million passwords,” in Proc. of IEEE Security and Privacy (SP), 2012.
• H.-A. Park, J. W. Hong, J. H. Park, J. Zhan, and D. H. Lee, “Combined authentication-based multilevel access control in mobile application for daily life service,” IEEE Transactions on Mobile Computing, 2010.
The accelerometer in smart phones measures the acceleration of the phone relative to freefall. A value of 1 indicates that the phone is experiencing 1 g of acceleration exerted on it. 1 g is the acceleration due to gravity, which the phone experiences when it is stationary. The accelerometer measures the acceleration of the phone along three different axes: X, Y, and Z.
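As a concrete illustration of this sensing, here is a minimal Android sketch that registers an accelerometer listener and receives (x_t, y_t, z_t) tuples. The delay constants roughly map to the fast and normal sampling modes described earlier, though the actual sampling interval depends on the device; the class and buffer names are illustrative.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

// Minimal sketch of three-axis accelerometer sensing on Android.
public class WaveSensingActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // Fast mode; SENSOR_DELAY_NORMAL would approximate the normal mode.
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_FASTEST);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float xt = event.values[0]; // acceleration along X
        float yt = event.values[1]; // acceleration along Y
        float zt = event.values[2]; // acceleration along Z
        // Append the raw tuple (xt, yt, zt) to the recorded trace here.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```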
Clearly, using the normal sampling mode of the accelerometer loses some data but saves energy. We compare these two modes in the evaluation section.
The challenge here is how to measure the handwaving represented in A-Space. We should transform the A-Space representation into a parameterized and comparable feature vector.
Efficiency: Since the waving function will be performed on the smart phone, it should be simple enough to run fast and efficiently.
Invariance: Most of the time, the smart phone operates in mobile environments. The waving function should be insensitive to changes in the position or direction of the smart phone.
Robustness: Although the waving data generated by one person is similar across attempts, there always exists noise and the sampling time is variable. Hence, the waving function should be robust to noise, blur, cracks, and dust in the waving.
The silent periods may exist in the initial stage before the user shakes the smart phone, or in the final stage after the user stops waving. Such a period may also be observed in the intermediate stage when an unexpected pause by the user occurs.
In fact, the input waving action can be regarded as a series of small, very similar, repeating waving actions. Therefore, we can select a continuous sequence of acceleration points of reasonable size as the input to the waving function. Feature vectors can be generated from these small inputs with low data loss.
A training tuple for the SVM input is denoted as {v, y}, where v is the attribute vector used to describe the attributes of the training tuple, and y is the label of the training tuple, representing the actual class it belongs to. The basic idea of SVM is to transform the attribute vectors of the training tuples into a higher-dimensional space in which the training tuples become linearly separable. The training tuples can then be separated into two classes by a hyperplane. The SVM classifier classifies the training tuples based on this hyperplane, attempting to place training tuples with the same label into the same class. A classification model is then generated to describe the classification standard for a given tuple. By feeding an unclassified tuple into the SVM classifier with the generated classification model, the class to which the tuple most probably belongs can be predicted.
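For reference, the standard kernelized SVM decision rule, which the classification model described above instantiates, can be written as follows. This is the textbook formulation rather than an equation given in the paper:

$$\hat{y} = \operatorname{sign}\left( \sum_{i=1}^{l} \alpha_i y_i \, K(\mathbf{v}_i, \mathbf{v}) + b \right)$$

where the v_i are the training attribute vectors, y_i ∈ {+1, −1} their labels, K is the kernel that implicitly maps vectors into the higher-dimensional space, and α_i and b are learned during training.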
Feature vectors of the input waving action are generated and used to verify whether the user is the authorized user. If so, the access query is accepted and the smart phone is unlocked. If not, the access query is denied and the smart phone remains locked.
False Negative Rate (FNR): The probability that an authorized user is treated as an unauthorized user. This rate is the ratio of the number of incorrect authentications by an authorized user to the number of his authentication attempts.
True Positive Rate (TPR): The probability that an authorized user is successfully verified. This rate is derived from the ratio of the number of correct authentications by an authorized user to the number of his authentication attempts.
False Positive Rate (FPR): The probability that an unauthorized user is treated as an authorized user. This rate is obtained from the ratio of the number of incorrect authentications by an unauthorized user to the number of his authentication attempts.
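In the usual confusion-matrix notation (a notational assumption consistent with the definitions above, where TP and FN count an authorized user's accepted and rejected attempts, and FP and TN count an unauthorized user's accepted and rejected attempts), these metrics are:

$$\mathrm{FNR} = \frac{FN}{TP + FN}, \qquad \mathrm{TPR} = \frac{TP}{TP + FN} = 1 - \mathrm{FNR}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN}$$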
Figure 6 plots the FNR and FPR for the four waving functions. From Figure 6(a), we observe that the average FNR using S1 and S2 is around 20%, while the values are below 10% using S3 and S4. A similar observation holds for FPR, as shown in Figure 6(b). This shows that the distance-based waving functions perform better than the angle-based ones.
We further focus on the distance-based waving functions. S3 and S4 have close FNRs and FPRs. However, the variance of S4 is smaller than that of S3, which means S4 is more stable than S3.
A large window size will prolong the waving time needed for unlocking and seriously degrade the user experience, but a small window size will reduce the identification accuracy.