Smart Gesture Control System. A supervised learning algorithm is used to recognise hand gestures, followed by a publish-subscribe mechanism that controls the shades in a smart lab.
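As a hedged illustration of that pipeline, here is a minimal Python sketch: a supervised classifier is trained on accelerometer windows, and the recognised gesture is published over MQTT so that a subscriber can actuate the shades. The broker address, topic, window size, model choice, and gesture labels are all invented for illustration and are not the project's actual design.

```python
# Hypothetical sketch: classify an accelerometer window with a supervised
# model, then publish the recognised gesture so a subscriber can drive
# the shades. All names and parameters here are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import paho.mqtt.client as mqtt

# Toy training data: each row is a flattened window of (x, y, z) samples;
# labels stand in for the project's real gesture recordings.
X_train = np.random.randn(20, 30)
y_train = ["swipe_up"] * 10 + ["swipe_down"] * 10
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

client = mqtt.Client()
client.connect("localhost", 1883)            # assumed local broker

def on_window(window: np.ndarray) -> None:
    """Classify one accelerometer window and publish the result."""
    gesture = model.predict(window.reshape(1, -1))[0]
    client.publish("lab/shades/command", gesture)

on_window(np.random.randn(30))               # publishes e.g. "swipe_up"
```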
This document discusses integrating the Swarmpulse mobile app and website with the Nervousnet framework. The key changes are:
1. The Swarmpulse mobile app will now push data to Nervousnet CORE servers instead of its own servers.
2. The Swarmpulse website will receive data from the Nervousnet CORE servers instead of the old Swarmpulse servers.
3. New features have been added to the Swarmpulse mobile app and website like measuring earthquake intensity using the mobile's accelerometer and showing other users' locations on a map.
Nervousnet Platform Overview and Development Roadmap - (Build your own Sensor... - Prasad Pulikal
The document provides an overview of the Nervousnet platform, which includes a mobile app called Nervousnet HUB that allows users to view and share sensor data. It connects to external apps called Axons and distributed servers called Nervousnet CORE that store and collect shared data. Axons can be native Android apps, HTML apps, or connected devices. The roadmap outlines developing the mobile apps, APIs, sample Axons, and converting existing apps to the Axon format by specific months. Similar platforms are also compared.
Blue eyes technology refers to using eye tracking and monitoring of eye movements to obtain information about a person and enable human-computer interaction. It allows computers to understand emotions, listen, talk, and interact with a person. Key technologies used in blue eyes technology include an emotion mouse that tracks physiological signals through a mouse, gaze and manual input, speech recognition, user interest tracking, and eye movement sensors. One experiment measured various physiological signals like heart rate, temperature, and galvanic skin response to determine a user's emotional state based on interactions through a computer mouse.
This document describes a presentation support system that allows users to control PowerPoint slides using natural body gestures detected by Kinect devices. The system defines several gestures like swiping left or right to switch slides. It uses Kinect to detect the user's posture and gestures and sends key commands to PowerPoint to advance or go back slides. The research aims to enable more engaging presentations by freeing presenters' hands so they can gesture vividly while controlling the presentation. Future work will focus on improving gesture recognition accuracy and adding more presentation functions.
(Enemy of the) State of Mobile Location Tracking - Richard Keen
Since the rise of the smartphone, location tracking has become ubiquitous, and it is an increasingly controversial and misunderstood technology. This talk discusses the latest approaches to location tracking across the major mobile platforms.
The document discusses the Internet of Things (IoT), which connects physical objects through electronics, software, and sensors to collect and share data. It describes IoT as having three major components: devices, connectivity, and data analytics. Several examples are provided of IoT applications, such as a glucose monitor that shares test results with patients, a smart air conditioner that automatically adjusts temperature based on user preferences, and a smart lock controlled by a smartphone.
A virtual touch event method using scene recognition for digital television - ecwayerode
This document proposes a method to operate applications designed for touchscreens on televisions using an infrared remote control. The method maps keystrokes on the remote control to virtual touch events, like taps and swipes, according to scene recognition algorithms. When an application is running, its current scene is identified and the corresponding mapping relationship is acquired to translate remote control inputs into touch inputs. This allows applications to be used on televisions without rewriting code or adding new hardware. The method was tested on a smart TV and introduced negligible input delay of less than one millisecond.
A virtual touch event method using scene recognition for digital television - Ecwaytech
This paper proposes a method to allow applications originally designed for touchscreens to be controlled using infrared remote controls on televisions. The method maps keystrokes on the remote to virtual touch events according to scene recognition of the application. Scene recognition identifies the current part of the application and acquires the corresponding mapping relationship between remote keys and touch events. When tested on a smart TV, the method allowed most applications to be operated remotely with negligible input delay of less than one millisecond.
A virtual touch event method using scene recognition for digital television - Ecwayt
This document proposes a method to operate applications designed for touchscreens on televisions using an infrared remote control without rewriting code or adding new hardware. The method maps remote control keystrokes to virtual touch events based on recognizing the current scene or application. Scene recognition identifies the mapping relationship for that scene, allowing keystrokes to simulate touch operations like swipes and taps. Testing on a smart TV showed input delays of less than one millisecond when using this virtual touch event mapping method with a remote control.
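To make the key-to-touch mapping idea shared by the three summaries above concrete, here is a hypothetical Python sketch of a per-scene lookup table that translates remote-control keystrokes into virtual touch events; the scene names, key codes, and screen coordinates are invented for illustration.

```python
# Each recognised scene carries its own table from remote-control keys to
# synthetic touch events (all values below are invented placeholders).
KEYMAPS = {
    "video_list": {"KEY_OK":    ("tap",   (640, 360)),
                   "KEY_RIGHT": ("swipe", (800, 360), (400, 360))},
    "player":     {"KEY_OK":    ("tap",   (640, 650))},   # play/pause area
}

def translate(scene: str, key: str):
    """Translate a remote keystroke into a virtual touch event, or None."""
    return KEYMAPS.get(scene, {}).get(key)

# The event would then be injected into the application's input queue.
print(translate("video_list", "KEY_RIGHT"))  # ('swipe', (800, 360), (400, 360))
```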
A WiFi and Bluetooth Low Energy based wearable for infants that monitors heart rate, body orientation, and temperature. The IoT implementation uses the IFTTT API to transmit push notifications directly to the parent's mobile device.
The document describes a project to develop a vital neonatal monitor called 'ViNMo' that monitors an infant's heart rate, temperature, and body orientation. The monitor uses sensors including a thermocouple for temperature, an accelerometer for body orientation, and a pulse rate sensor for heart rate. It processes and relays the sensor data wirelessly using components like an Arduino Nano and Bluefruit module. The group aims to make the monitor comfortable for infants and integrate it to provide accurate physiological monitoring and alerts.
Maps for Mobile Apps provides unlimited access to map data that can be used on any mobile device. The company is led by CEO Rune Fjellvang and allows developers to integrate maps into their apps and give users mapping capabilities anywhere and anytime on any device. Developers can follow Maps for Mobile Apps on social media for updates.
The document discusses the potential for mind sensing technology. It describes how thoughts are transmitted as electric pulses in the nervous system, and by connecting a controlling device to decode these pulses, it may be possible to convert thoughts into digital signals. This could allow activities like texting, searching, or anything else to be done just by thinking. The tools required would include an electronic chip connected to the body to convert biosignals to digital signals, programming to decode the biosignals, and a supporting device connected wirelessly to the body chip, such as a laptop, tablet, or smartphone. Other potential applications discussed include mood scanning using sensors to detect pressure, temperature and touch, and a lie detector to identify differences in stress levels during lying.
This document presents a project on smart home automation. It introduces the concepts of the Internet of Things and how sensor networks and Raspberry Pi can be used to collect data and control devices in the home. The methodology section explains that a Raspberry Pi 3 will be used to send and receive signals from a mobile device to control appliances via relay switches. The hardware requirements include a Raspberry Pi 3, motor, relay switch and sensors. The software requirements include Quimi editor and a web server. Two research papers are referenced that discuss implementing home automation using cloud networks and mobile devices, and a microcontroller-based system with security features.
This document describes research into developing emotion-sensitive robots for human-robot interaction. It discusses using biofeedback sensors to detect human emotional states like anxiety levels through measurements of physiological responses. The robot is then able to recognize emotions and detect implicit and explicit communication from the human to determine how to respond appropriately. An experiment is described where a robot named Oracle interacts with a human operator and can change its behavior based on the detected urgency level, such as providing information, avoiding obstacles, or raising an alarm. The goal is to create robots that can assist humans in tasks by being sensitive to their emotional states.
This document outlines a smart home project to control home appliances via the internet using an ESP8266 WiFi module and relays. The objectives are to establish an internet connection between the appliances and allow control from anywhere in the world using apps on a mobile device. The process involves hardware with a WiFi-controlled relay, coding an Adafruit MQTT client, and using IFTTT to merge Google Assistant commands with the MQTT server to remotely control appliances via voice instructions.
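As a rough illustration of the MQTT step, the following Python sketch publishes a relay command to Adafruit IO with paho-mqtt; the username, key, and feed name are placeholders, and this desktop-side sketch merely stands in for the client the project runs on the ESP8266.

```python
import paho.mqtt.client as mqtt

AIO_USER = "your_username"   # placeholder credentials
AIO_KEY = "your_aio_key"

client = mqtt.Client()
client.username_pw_set(AIO_USER, AIO_KEY)
client.connect("io.adafruit.com", 1883)

# IFTTT (triggered by a Google Assistant phrase) would publish "ON"/"OFF"
# to this feed; the ESP8266 subscribes to the same topic and drives the relay.
client.publish(f"{AIO_USER}/feeds/relay", "ON")
```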
Fuzzy Control System for Smart Home Control - NikAqilah2
This document proposes a fuzzy control system for smart home applications. The system would allow users to control home appliances like lights, air conditioning, and kitchen appliances remotely using a mobile application. It would use a Raspberry Pi as the central controller connected to sensors and devices. The system would employ fuzzy logic and communicate over WiFi to allow users to control devices from their phones. The goals of the project are to design and implement a smart home system that can control appliances anywhere and anytime to help people with disabilities.
Home automation is a growing industry that allows users to control and monitor their home systems remotely using internet-connected devices. It provides convenience, control, and a sense of coolness to users. Common early applications included HVAC, lighting, audio/video, and intercom systems. Hardware interfaces like Arduino, Raspberry Pi, and ESP8266 modules connect sensors and devices to cloud services for remote access via apps and websites. The technology is moving towards more energy efficient green building features, advanced security including biometrics, and capabilities for monitoring vacant homes. It allows for flexible, programmable, and affordable automation of various systems and peripherals to make homes smarter and more efficient.
The document describes a navigational shoe system for blind persons. The proposed system uses vibration motors in shoes connected to a microcontroller and Bluetooth module. An Android app and the blind person's mobile phone GPS would provide location coordinates to the microcontroller. The microcontroller would then vibrate the motors in the shoes to guide the blind person along the route from their starting point to their destination. This navigation system is intended to help blind persons travel independently without needing to refer to sign boards for route information.
This document summarizes a project to control electrical appliances using human gestures detected by a Kinect sensor. The Kinect tracks 20 joints in a user's skeleton. When a user's input gesture matches a predefined gesture, the corresponding appliance is switched on or off via a relay connected to the microcontroller. The system architecture includes the Kinect sensor, skeleton tracking and joint position data, gesture recognition software, and a microcontroller connected to appliances via a relay. The proposed system allows contactless control of home appliances through natural gestural interfaces.
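A speculative Python sketch of the matching step might look like the following: a captured joint trajectory is compared against stored gesture templates, and on a match a command is sent to the microcontroller over a serial link. The distance threshold, template file, serial port, and command string are all assumptions, not the project's values.

```python
import numpy as np
import serial  # pyserial

TEMPLATES = {"lights_on": np.load("lights_on.npy")}  # placeholder template
THRESHOLD = 0.5                                      # assumed distance cutoff
port = serial.Serial("/dev/ttyUSB0", 9600)           # assumed wiring

def match_and_actuate(trajectory: np.ndarray) -> None:
    """Compare a joint trajectory (same shape as each template) to the
    stored gestures and toggle the matching appliance via the relay."""
    for name, template in TEMPLATES.items():
        if np.linalg.norm(trajectory - template, axis=1).mean() < THRESHOLD:
            port.write(f"TOGGLE:{name}\n".encode())  # invented command format
            return
```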
A device uses WiFi and an Arduino microcontroller to control home appliances over the internet through voice commands or an application. The device connects to the cloud to receive control data from apps or voice assistants and then directs the Arduino controller to take appropriate actions based on the received commands, allowing users to control their home from anywhere using a wireless internet connection at low cost.
Fuzzy Control System for Smart Home Application - NikAqilah2
This document presents a smart home system using fuzzy logic to control temperature in a house through a mobile application. The system aims to reduce electricity wastage by allowing remote control of air conditioners. It consists of a microcontroller and sensors to monitor temperature and humidity. The system is expected to automatically control appliances according to user-defined settings to more efficiently manage energy usage and costs. Future work involves adding voice control capabilities, increasing storage, and performing data analytics on sensor readings.
This document provides an overview of a project on faster real-time gesture recognition of human hands. The project has 4 members and is guided by Miss Trupti Mane of the Computer Department. It will involve building 2 modules - a prototype and construction module as part 1 and a vision-based system as part 2. The document outlines the introduction of AI, what gesture recognition is, the requirements including a single web camera, sensor, color marker and projector, and concludes that integrating information into everyday objects can help bridge the digital divide and connect us more to the physical world.
Leap Motion is an American company that manufactures and markets a computer hardware sensor device that supports hand and finger motions as input, analogous to a mouse, but requires no hand contact or touching.
This document describes a project that developed an Android application called "Real Time Object Detection Through Voice Assistant" to help visually impaired users. The app uses machine learning and computer vision techniques like CNNs, OpenCV, and optical character recognition to detect objects and text in real-time using a device camera. It then provides audio feedback by translating the detections to speech using text-to-speech conversion. The app was implemented in Java using Android Studio and works without an internet connection by running entirely on the mobile device. Screenshots demonstrate its user interface and capabilities for real-time object and text detection with voice outputs.
This document presents a major project report on developing an automatic pet feeder. The objectives are to help pet owners feed their pets on a schedule even when they are not home. The system uses a Node MCU microcontroller, ultrasonic sensor, servo motor, buzzer, and Blynk app. It works by using the app to set a feeding timer, which triggers the buzzer and rotates the lid when the timer goes off. The pet's approach makes the buzzer stop, and it can eat from the open container. Testing showed the project meets requirements and is able to feed pets as scheduled and notify owners. Future work may integrate other pet care devices into a centralized IoT system.
Developing Rich Interfaces in JavaFX for Ultrabooks - Felipe Pedroso
1. The document discusses developing rich interfaces for ultrabooks using JavaFX, which supports touch and gestures. It describes various sensors available on ultrabooks like accelerometers, gyroscopes, and ambient light sensors.
2. It explains how to access these sensors from Java using JNI and the Windows sensor APIs. The process involves registering a Java object to handle sensor events, generating a header file, initializing the sensor in C++, and redirecting events to Java methods.
3. While touch works well on Linux with JavaFX, clear APIs do not exist for accessing sensors on Linux like they do on Windows. The document provides resources for learning more about JavaFX, sensors, and developing for Intel platforms.
This document is a final report on automation and robotics submitted by Truong Ha Anh to their advisor. The report provides an overview of automation and robotics in intelligent environments, including how robots can be used for tasks like home automation, personal assistance, cleaning, and security. It also discusses autonomous robot control and challenges like dealing with uncertainty. Key topics covered include modeling robot mechanisms, sensor-driven control, deliberative and behavior-based control architectures, and developing intuitive human-robot interfaces.
Control Buggy using Leap Sensor Camera in Data Mining Domain - IRJET Journal
This document summarizes a research paper that proposes controlling a buggy using hand gestures detected by a Leap Motion sensor camera. The system extracts 6 points from the detected gesture, calculates 15 features from those points, and compares the features to stored gesture data using similarity algorithms to identify the gesture and command the buggy accordingly. This reduces human effort in driving compared to manual control. The system was designed to recognize gestures for basic buggy movements like forward, backward, left, and right.
This is our major project, in which we built an augmented reality based Android application. The application is ideal for travellers, who can find nearby places such as food, hospitals, and ATMs just by holding the phone up in any direction.
The document discusses a proposed system called "Brain Access" that would allow users to control devices through brain or muscle signals without needing specialized interfaces. It reviews existing brain-computer interface (BCI) and electromyography (EMG) works and their limitations. The proposed system would overlay numbers on existing device screens and map brain/muscle signals to numbers to enable universal control of any device or application. It describes a prototype implementation and presents results from initial experiments classifying EEG signals, showing improved accuracy when electrodes were positioned in the right hemisphere. The document concludes by discussing future work to improve the system for ubiquitous computing and control of Internet of Things devices.
The document describes the process of developing an application on Raspberry Pi that uses computer vision and Java APIs to detect hand gestures and control home appliances via IR signals. The application continuously captures camera input, detects hand gestures using OpenCV, and passes appropriate signals to an IR device to control devices like a TV. Key steps include connecting the IR device to the application via SmartConfig, teaching gestures during a learning mode, and executing gestures during operation to control devices in real-time. The goal is to implement intelligent home automation through contactless hand gesture recognition.
PowerPoint presentation on Sixth Sense Technology - Jawhar Ali
The document discusses the Sixth Sense technology, which aims to connect the physical and digital world without hardware devices through an additional "sixth sense". It provides a brief history, outlines the key components including a camera and projector, and describes how the technology works by recognizing gestures with computer vision techniques. A range of applications are presented, from drawing and mapping to getting flight information. Related technologies like augmented reality, gesture recognition, and computer vision are also discussed. Finally, advantages like portability and connecting the real/digital world are highlighted, alongside disadvantages such as battery life.
The goal of this project is to provide a platform that allows for communication between able-bodied and disabled people, or between computers and human beings. There has been great emphasis in Human-Computer Interaction research on creating easy-to-use interfaces that directly employ the natural communication and manipulation skills of humans. Since the hand is an important part of the body, recognizing hand gestures is very important for Human-Computer Interaction. In recent years, there has been a tremendous amount of research on hand gesture recognition.
IoTSuite: A Framework to Design, Implement, and Deploy IoT Applications - Pankesh Patel
IoTSuite is a framework that provides modeling languages and automation techniques to design, implement, and deploy IoT applications with reduced development effort compared to existing approaches. It integrates different life-cycle phases like design, implementation, and deployment. Early results show that IoTSuite requires fewer lines of code than general purpose languages or Node-RED to develop a smart home IoT application.
"Fun with JavaScript and sensors" by Jan JongboomFwdays
This document discusses using sensors and device capabilities with JavaScript. It begins by describing the various sensors available on mobile devices like accelerometers and gyroscopes. It then provides examples of projects that utilize these sensors, such as using the light sensor to control an on-screen music player or tracking device movement to render a 3D model. The document also introduces JanOS, a fork of Firefox OS intended for phones and Raspberry Pi devices that provides access to phone APIs in JavaScript. It encourages attendees to experiment with sensors and think creatively about new uses.
This document describes a virtual mouse system that uses hand gestures as detected by a webcam to control the computer cursor and perform mouse functions like clicking and dragging. The proposed system aims to overcome limitations of physical mice like requiring batteries, wireless receivers or specific surfaces. It analyzes video frames from the webcam using OpenCV and MediaPipe to detect hand positions and gestures. Mouse movements and actions are mapped to specific gestures. The system was tested in different lighting conditions and distances from the webcam and was found to work effectively in most scenarios. Further improvements to accuracy and adapting it to mobile devices are discussed as future work.
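For a concrete picture of that loop, here is a minimal sketch using OpenCV and MediaPipe as the summary describes, with pyautogui standing in for whichever cursor-control library the authors used; landmark 8 is MediaPipe's index fingertip.

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)               # mirror for natural control
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]
        # Landmarks are normalised to [0, 1]; scale to screen pixels.
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc quits
        break
cap.release()
```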
This document discusses the development of an Android application for physical activity recognition using the accelerometer sensor. It provides background on the Android operating system and its open development environment. It then summarizes relevant research papers on activity recognition using mobile sensors. The document outlines the process of collecting and labeling accelerometer data from smartphone sensors during different physical activities. Features are extracted from the sensor data and several machine learning classifiers are evaluated for activity recognition. The application will recognize activities and track metrics like calories burned, distance traveled, and implement fall detection and medical reminders.
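As an illustrative sketch of the feature-extraction and classification stage, the following Python fragment computes simple per-axis statistics over a window of accelerometer samples and trains an off-the-shelf classifier; the window length, feature set, and random placeholder data are assumptions, not the document's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) array of x/y/z accelerometer readings."""
    return np.concatenate([window.mean(axis=0),      # per-axis mean
                           window.std(axis=0),       # per-axis deviation
                           [np.linalg.norm(window, axis=1).mean()]])

# Random placeholder data standing in for labelled activity recordings.
windows = np.random.randn(100, 50, 3)
labels = np.random.choice(["walking", "sitting", "falling"], size=100)
X = np.array([features(w) for w in windows])

clf = RandomForestClassifier().fit(X, labels)
print(clf.predict([features(np.random.randn(50, 3))]))
```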
This lecture discusses intelligent agents and their key components. It defines agents as things that can perceive their environment and take actions. An agent's behavior is defined by its agent function, which maps percept sequences to actions. The lecture then covers the nature of environments agents operate in, describing their properties like observability, determinism, and more. It also outlines the basic structures of agents, including reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Learning-based agents are introduced as a way to allow agents to improve through experience.
The document outlines 7 thinking tools to help with rapid testing:
1. Landscaper - Do a survey to understand the big picture
2. Persona map - Map out who uses what
3. Scope map - Map out user expectations
4. Interaction map - Map what may affect what
5. Environment map - Map test environments
6. Scenario creator - Create test scenarios
7. Dashboard - Stop, analyze, and refine
These tools are part of an immersive session testing approach using reconnaissance, exploration, and rest/recovery phases to facilitate rapid yet thorough scientific exploration. A related SaaS tool called doSmartQA will offer these tools and interested users can email the founder for
This document describes an AI-based virtual mouse system that is operated using hand gestures detected by a webcam, without needing to physically touch a mouse or other device. The system uses computer vision and mediapipe to detect hand landmarks and track finger positions in real-time video input. By analyzing which fingers are raised, it can determine mouse movement or click functions. The goal is to create a touchless input that could be useful during the pandemic by reducing virus transmission through shared surfaces. The virtual mouse is implemented using OpenCV and other Python libraries to process video, smooth output, and perform mouse functions based on hand and finger tracking.
The document summarizes a student project to develop a virtual mouse interface using computer vision and finger tracking. The project is divided into 5 modules: 1) basic video operations in OpenCV, 2) image processing techniques, 3) object tracking, 4) finger-tip detection, and 5) using detected finger motions to control mouse functions. Key functions demonstrated include moving the cursor, left and right clicking, dragging, brightness control, and scrolling. Evaluation of the system found finger tracking accuracy between 60-85% for different gestures. The project aims to provide an alternative input method that reduces hardware needs and workspace.
Build the Next Generation of Apps with the Einstein 1 Platform.
Join Philippe Ozil for a workshop session that will guide you through the details of the Einstein 1 platform, the importance of data for building artificial intelligence applications, and the various tools and technologies Salesforce offers to bring you all the benefits of AI.
Gas agency management system project report.pdf - Kamal Acharya
The project entitled "Gas Agency" was developed to ease the manual process by computerizing billing and stock maintenance. Gas agencies receive order requests by phone call or in person from their customers and deliver gas cylinders to the customers' addresses based on demand and the previous delivery date. This process is computerized, and each customer's name, address, and stock details are stored in a database. On this basis, billing a customer is made simple and easy, since a customer's order for gas can be accepted only after a certain period has passed since the previous delivery, which can be calculated and billed easily through the system. There are two types of delivery, for domestic use and for commercial use; the bill rate and cylinder capacity differ between the two, and these can be easily maintained and charged accordingly.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for... - PIMR BHOPAL
A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac... - PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine the level of energy efficiency knowledge among consumers. Two districts in Bangladesh were selected for a survey covering households, showrooms, and sellers. The survey data is used to derive regression equations from which energy efficiency knowledge can easily be predicted. The data is analyzed and evaluated against five important criteria. The initial target was to find factors that help predict a person's energy efficiency knowledge. The survey found that energy efficiency awareness among the people of the country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy-efficient technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices, and they place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh: low-education households indicate they primarily save electricity for the environment, while high-education households indicate they are motivated by environmental concerns.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL - ijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM). We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. Our experiments show that the CNN-LSTM method is much better at finding smart grid intrusions than other deep learning algorithms used for classification. In addition, the proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
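For orientation only, a hybrid CNN-LSTM classifier of the general kind this abstract describes can be sketched in a few lines of Keras; the layer sizes, input shape, and binary output below are placeholders rather than the paper's configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(100, 1)),             # assumed features per record
    layers.Conv1D(64, 3, activation="relu"),  # CNN extracts local patterns
    layers.MaxPooling1D(2),
    layers.LSTM(64),                          # LSTM models sequential structure
    layers.Dense(1, activation="sigmoid"),    # attack vs. normal traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```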
Advanced control scheme of doubly fed induction generator for wind turbine us... - IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Design and optimization of ion propulsion drone - bjmsejournal
Electric propulsion technology has been widely adopted in many kinds of vehicles in recent years, and aircraft are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibration. Ion propulsion technology for drones is a potential solution to this problem, and it has been proven feasible in the earth's atmosphere. The study presented in this article covers the design of EHD thrusters and a power supply for ion propulsion drones, along with performance optimization of the high-voltage power supply for endurance in the earth's atmosphere.
1. Smart Gesture Based Control System
Controlling the environment with Human Gestures and Position
Arjun Haridas
Darshit Pandya
Pradeep Siddagangaiah
Shaker Abu Sailik
Zankrut Antani
2. Goal of the System:
To control the lab environment (shades and screens) with hand gestures.
3. Components Used
Hardware:
• Accelerometer (smartphone)
• Positioning system (UbiSense)
Software:
• Smartphone application PhonePi, used to acquire accelerometer readings
• Gesture record and recognition algorithm written in Python
• Shell scripts to connect with the middleware, invoked through Python
4. Communication among Components
• For accelerometer readings from the smartphone: socket communication between the PhonePi application and the Python script (sketched below)
• For position data recording from UbiSense: Telnet communication between UbiSense and the Python script
• Proxy Java objects interact with the middleware; methods on these objects are invoked through Python control scripts.
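A minimal sketch of the first link, assuming PhonePi streams newline-separated "x,y,z" readings over TCP (the port and message format are assumptions, not the project's documented protocol):

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))        # assumed port
server.listen(1)
conn, addr = server.accept()
print("PhonePi connected from", addr)

buffer = ""
while True:
    data = conn.recv(1024).decode()
    if not data:
        break
    buffer += data
    while "\n" in buffer:
        line, buffer = buffer.split("\n", 1)
        x, y, z = map(float, line.split(","))   # assumed "x,y,z" payload
        # ...append (x, y, z) to the current gesture window here...
```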
6. Distribution of workload

Sr No. | Task   | Main Team                       | Support Team
1      | Task 1 | Shaker Abu Sailik               | Pradeep S.
2      | Task 2 | Pradeep S.                      | Darshit Pandya, Zankrut Antani
3      | Task 3 | Pradeep S.                      | Arjun Haridas
4      | Task 4 | Darshit Pandya, Zankrut Antani  | Arjun Haridas, Pradeep S.
5      | Task 5 | Zankrut Antani                  | Darshit Pandya, Pradeep S.
7. Zone Division layout for the control of Shades/Screens based on gestures
[Diagram: lab floor plan showing Shades 1-6, Screens 1, 3, 7, and 9, and Zones 1.1, 1.2, 2, 3.1, and 3.2]