This interface combines speech and tongue motion for people with severe disabilities, helping them become more independent.
The document describes an electronic safety assistance system for blind people using a microcontroller and DSP voice processor. The system uses an IR transmitter and receiver to detect obstructions, and notifies the user via a voice processor and buzzer. It is designed to help blind people safely reach their destinations by warning them of detected objects through audio cues. The system's main components are an MCS51 microcontroller, IR transmitter and receiver, buzzer, and DSP voice processor. It operates by transmitting and receiving IR signals to detect reflections from objects, and triggering an audible response from the voice processor and buzzer upon detection.
RedTacton is a new technology developed by NTT that enables communication through the human body using very weak electric fields on the skin's surface. It allows for a "Human Area Network" where electronic devices can connect and exchange data when touching different parts of the body. RedTacton uses a photonic electric field sensor combining an electro-optic crystal and laser to detect tiny fluctuations in the electric field caused by transmitting data. This powers duplex communication at speeds up to 10 Mbps. Potential applications include instantly sharing files between devices with a handshake, personalizing devices with a touch, and triggering alarms if the wrong medicine is touched.
This document describes an electronic travel aid device for the blind using ultrasonic sensors to detect obstacles. It consists of an ultrasonic sensor that transmits ultrasound beams to detect objects within 2-3 meters. The distance to objects is categorized into discrete levels of 1, 2, or 3 meters which are indicated to the user via tactile vibrators. The device also detects water pits using audio signals to inform users. It aims to provide mobility information to visually impaired people to help them safely navigate environments.
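The distance bucketing described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function and constant names are hypothetical, and it assumes an HC-SR04-style sensor that reports a round-trip echo time.

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees C

def echo_to_distance_m(echo_time_s):
    """Convert a round-trip ultrasonic echo time to a one-way distance in metres."""
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0

def distance_level(distance_m):
    """Bucket a distance into the discrete 1 / 2 / 3 metre levels the aid reports.
    Returns None when the object is beyond the 3 m sensing range."""
    for level in (1, 2, 3):
        if distance_m <= level:
            return level
    return None
```

Each returned level would then drive a different tactile vibrator pattern, so the user perceives coarse range bands rather than a raw distance.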
This document describes a voicebox system for mute people that uses flex sensors, an accelerometer, and wireless transmission to recognize hand gestures and provide emergency messaging. The system includes a transmitter glove fitted with flex sensors to detect gestures. An accelerometer detects falls and triggers an emergency SMS. The gestures and signals are sent wirelessly and received by a microcontroller connected to a voice module that plays pre-recorded messages corresponding to the gestures and signals. The system aims to help mute people communicate through translated hand gestures.
This document describes a sign language to voice conversion glove project. The project aims to facilitate communication between the deaf/mute community and others by translating sign language gestures into speech. The glove uses flex sensors along the fingers connected to a microcontroller that analyzes the gestures and triggers a voice processing chip to output the corresponding word or phrase. The system is powered by a voltage regulator and includes an LCD for feedback. It provides a low-cost and portable way to bridge the communication gap experienced by those in the deaf/mute community.
RedTacton is a Human Area Networking technology, developed by NTT, that uses the surface of the human body as a safe, high-speed network transmission path. It is completely distinct from wireless and infrared technologies, as it uses the minute electric field emitted on the surface of the human body.
A transmission path is formed at the moment a part of the human body comes in contact with a RedTacton transceiver. Communication is possible using any body surfaces, such as the hands, fingers, arms, feet, face, legs or torso. RedTacton works through shoes and clothing as well. When the physical contact gets separated, the communication is ended.
Red Tacton is a new technology for data transmission using the human body as a transmission medium. It works by inducing extremely weak electric fields on the surface of the body through a transmitter. These fields pass through the body and are detected by a receiver using an electro-optic crystal and laser light. This allows for high-speed transmission of up to 10 Mbit/s through surfaces of the body like hands or arms without needing direct contact. Communication is started when two Red Tacton devices are brought near each other and ends when they are separated. It has applications for areas like conferencing systems and security through transmission using natural movements and contacts with surfaces of the body.
Generally, mute people use sign language for communication, but they find it difficult to communicate with others who do not understand sign language. This project aims to lower this communication barrier. It is based on the need for an electronic device that can translate sign language into speech, making communication between the mute community and the general public possible. A wireless data glove is used: a normal cloth driving glove fitted with flex sensors along the length of each finger and the thumb. Mute people can use the glove to perform hand gestures, which are converted into speech so that others can understand their expression. Sign language is a communication skill that uses gestures instead of sound to convey meaning, combining hand shapes, orientations, and movements of the hands, arms, or body with facial expressions to express a speaker's thoughts fluidly. Signs are used to communicate words and sentences to an audience.
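The glove's core logic, thresholding each flex sensor into bent/straight and matching the resulting pattern against a phrase table, can be sketched as follows. This is an illustrative sketch only: the threshold value, the gesture table, and the function names are assumptions, not details from the project.

```python
BEND_THRESHOLD = 512  # ADC counts; the real value depends on the divider circuit (assumption)

# Hypothetical gesture table: one entry per (thumb..pinky) bent/straight pattern.
PHRASES = {
    (1, 0, 0, 0, 0): "Hello",
    (0, 1, 1, 0, 0): "I need water",
    (1, 1, 1, 1, 1): "Help me",
}

def classify(readings):
    """Map five raw flex-sensor ADC readings to a phrase, or None if no gesture matches."""
    pattern = tuple(1 if r >= BEND_THRESHOLD else 0 for r in readings)
    return PHRASES.get(pattern)
```

On the actual hardware the matched phrase would select a pre-recorded message in the voice chip rather than return a string.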
Maintaining Good User Experience as Touch Screen Size Increases (Henry Wong)
As touchscreen sizes increase in devices like smartphones, tablets, and PCs, maintaining a good user experience becomes more challenging. Larger screens have higher parasitic capacitance and resistance, which reduces transmit frequency and refresh rates. This can negatively impact touch sensitivity, tracking, and responsiveness. To overcome these issues, touch controllers must boost transmit voltages, add receive channels, offload processing to host CPUs, and implement dynamic power management with multiple states to optimize battery life for different usage scenarios. A system-wide approach is needed to balance touch performance and power consumption as screen sizes continue growing.
RedTacton is a Human Area Networking technology, developed by NTT, that uses the surface of the human body as a safe, high-speed network transmission path.
GPS & GSM based Voice Alert System for Blind Person (ijsrd.com)
This paper presents a theoretical model and a system concept to provide a smart electronic aid for blind people. The system is intended to provide two overall measures: object detection and real-time assistance via the Global Positioning System (GPS). It consists of an ultrasonic sensor, a GPS module, a GSM module, and a vibratory circuit (speakers or headphones). The project aims at the development of an Electronic Travelling Aid (ETA) kit to help blind people find an obstacle-free path. The ETA is fixed to the blind person's stick; when an object is detected near the stick, the user is alerted through the vibratory circuit (speakers or headphones). The user's location is found using the Global System for Mobile communications (GSM) and the Global Positioning System (GPS).
Red Tacton is a new technology developed by NTT that uses the human body as a communication network. It induces a weak electric field on the skin's surface to transmit data at speeds up to 10 Mbps. The transmitter induces fluctuations in the body's electric field while the receiver detects these changes using a photonic sensor. Red Tacton has applications for intuitive device connections, secure authentication, and personalized services simply through touch. It provides a more natural way to exchange data compared to traditional wireless technologies.
The document describes the APR9600 single-chip voice recording and playback integrated circuit from APLUS Integrated Circuits. Key features include 60 seconds of recording time, non-volatile flash memory, random and sequential access of multiple messages, and low power consumption. It provides detailed descriptions of the device's functionality in random access mode and tape mode, including recording and playback procedures in each mode. Block diagrams and pin descriptions are also included to explain the device's internal architecture and interface.
IRJET - IoT based Portable Hand Gesture Recognition System (IRJET Journal)
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
Microcontroller Based Obstacle Detection Device Using Voice Signal for the V... (IJMER)
This paper aims at helping visually impaired people through an electronic aid that senses any obstacle in the path and alerts the user. The device uses a simple principle: a wave generator transmits an ultrasonic signal into the path, and the signal is reflected by any obstacle present. The reflected signal is picked up by a sensor and produces a sound signal in the form of voice. This voice signal helps the visually impaired person identify the obstacles in front of them.
Electronic Hand Glove for Speech Impaired and Paralyzed Patients (IEEEP Karachi)
This document describes an electronic hand glove designed to help people with speech impairments or paralysis communicate through gestures. It contains flex sensors that detect finger movements and a microcontroller that interprets the gestures using a lookup table to display letters on an LCD screen. The flex sensors are an economical and robust option that converts finger bends into electrical resistance. The glove allows people with signing abilities to communicate without others understanding sign language. It has applications for home devices, security, industries, biomedicine, and virtual reality. The gesture-based control provides an alternative to keyboards/mice and does not require the user or others to understand sign language.
Smart Remote for the Set-Top Box Using Gesture Control (IJERA Editor)
The basic purpose of this project is to provide a means to control a set-top box (capable of infrared communication), in this case Hathway, using hand gestures. The system acts like a remote control for operating the set-top box, but commands are issued through hand gestures instead of button presses. To send and receive remote control signals, the project uses an infrared LED as a transmitter. Using an infrared receiver, an Arduino can detect the bits being sent by a remote control; to play back a remote control signal, the Arduino can flash an infrared LED at 38 kHz. With this design, a gesture-controlled remote can be built into a glove fixed to the hand, capable of sending a signal of any length at any relevant frequency, making it usable as a universal remote.
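The 38 kHz playback described above works by filling each "mark" interval with carrier pulses and leaving "space" intervals dark. A small sketch of the timing arithmetic, assuming NEC-style bit timings (562.5 microsecond mark, with the space length distinguishing 0 from 1); the function names are illustrative, not from the paper:

```python
CARRIER_HZ = 38_000  # carrier frequency mentioned in the project

def carrier_cycles(mark_us):
    """How many 38 kHz carrier cycles fit in a mark of mark_us microseconds."""
    return round(mark_us * CARRIER_HZ / 1_000_000)

def nec_bit_timings(bit):
    """NEC-style timing (assumption): a fixed 562.5 us mark, then a 562.5 us
    space for a '0' or a 1687.5 us space for a '1'."""
    return (562.5, 562.5 if bit == 0 else 1687.5)
```

On the Arduino side, each mark is produced by toggling the IR LED pin at the carrier rate for the computed number of cycles, and each space by holding the pin low for the space duration.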
Blind Stick Using Ultrasonic Sensor with Voice announcement and GPS tracking (vivatechijri)
This project proposes a low-cost walking stick for blind individuals based on current technology, providing flexible and safe movement. An ultrasonic detector built into the stick detects obstacles ahead of the blind or impaired person; if any obstacle is found, the user is alerted through voice announcement and a buzzer, and a vibration motor is also activated. The stick is additionally fitted with a water detector that alerts the user if any moisture is present, helping to avoid slippery paths. In this technology-driven world, where people strive to live independently, the stick helps blind individuals achieve personal independence so that they can move from one place to another easily and safely. The stick is also provided with GPS and an SMS messaging system: the GPS provides the user's location to family members, and the SMS system lets the user send messages to numbers saved in the microcontroller in case of emergency.
Advance Communication through Red Tacton Human Area Networking Technology (Pawan Sharma)
Welcome to our presentation. It is our great pleasure to present a paper at the National Conference on Human Computer Interaction in Engineering Education (NCHCIEE-2013), organized by Jawaharlal Institute of Technology, Borawa, Dist. Khargone, M.P. Presented by Mr. Pawan Sharma, Prof. Lokesh Mehta, and Mr. Lokendra Singh Rathore (from SPITM, Mandleshwar, Dist. Khargone, M.P.).
Advance Communication through Red Tacton - Human Area Networking Technology. Dept. of Electronics & Communication.
Bluetooth, infrared, etc. have been the most commonly used techniques for data transmission.
However, these short-range wireless communication systems suffer from packet collisions, a problem that can be reduced by RedTacton.
The ultimate human area network solution to all these constraints of conventional technologies is “intra body” communication, in which the human body serves as the transmission medium.
RedTacton is a new human area networking technology that uses the human body as a transmission medium. It works by inducing weak electric fields on the skin's surface during contact between two devices. Data can be transmitted at speeds up to 10 Mbps without interference even in crowded areas. RedTacton communication is initiated with a touch and allows for interactive digital transfer of various media types through natural human movements and contact with surfaces like walls or desks. Potential applications include secure medical information transmission and safety alerts in hazardous environments.
This document summarizes a student project to implement 3D touch recognition using an infrared sensor matrix and Atmega 16 microcontroller. The project involves using multiple IR sensors in a grid to detect hand gestures in 3D space and representing the output on an 8x8 LED matrix. Key aspects discussed include the circuit diagram, flow chart, code snippets, applications of the technology, and challenges faced by the students in the project.
This document summarizes a student project to create a digital navigation system for blind people as an alternative to a walking cane. The system uses an ultrasonic sensor and microcontroller to detect obstacles and a vibrating motor to provide distance feedback to the user. Initial testing showed the prototype could accurately measure distances and differentiate between distance intervals. However, users had difficulty distinguishing the intensity of vibrations corresponding to different distances. Further development is needed to improve the feedback mechanism so blind users can safely navigate independently.
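The feedback problem noted above, users struggling to tell vibration intensities apart, comes down to how distance is mapped to motor drive strength. A minimal sketch of a linear distance-to-PWM mapping, under assumed names and an assumed 300 cm sensing range (neither is from the project):

```python
MAX_RANGE_CM = 300  # assumed maximum sensing range

def vibration_duty(distance_cm):
    """Map an obstacle distance to an 8-bit PWM duty: closer obstacle,
    stronger vibration. Out-of-range distances produce no vibration."""
    if distance_cm <= 0:
        return 255
    if distance_cm >= MAX_RANGE_CM:
        return 0
    return round(255 * (1 - distance_cm / MAX_RANGE_CM))
```

A linear ramp like this is hard to perceive; stepped duty levels or distinct pulse patterns per distance band are common alternatives when intensity alone is not distinguishable.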
This document describes a microcontroller-based gesture vocalizer system that facilitates communication for the deaf, dumb, and blind. It consists of:
1) A data glove with bend sensors to detect finger movements
2) An accelerometer-based tilt detector to sense hand tilting
3) A gesture detector module that analyzes the bend and tilt data and identifies gestures
4) A speech synthesizer and speaker controlled by the microcontroller to vocalize the recognized gestures
5) An LCD display to show the gesture text for the deaf. The system uses microcontrollers, sensors, and other circuits to recognize static finger gestures and hand tilts and convert them into synthesized speech or displayed text.
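The combination of bend data and tilt data into a single gesture can be sketched as below. This is an illustration under stated assumptions: the accelerometer axes are in g, the 0.5 g threshold and all names are hypothetical, and the real system's gesture set is not described here.

```python
def tilt_direction(ax, ay, threshold=0.5):
    """Coarse tilt from accelerometer X/Y readings in g (assumption):
    returns one of 'right', 'left', 'forward', 'back', 'flat'."""
    if ax > threshold:
        return "right"
    if ax < -threshold:
        return "left"
    if ay > threshold:
        return "forward"
    if ay < -threshold:
        return "back"
    return "flat"

def gesture_id(finger_pattern, ax, ay):
    """Combine the finger bend pattern with the tilt into one gesture key,
    which the speech synthesizer or LCD handler can look up."""
    return (finger_pattern, tilt_direction(ax, ay))
```

Keying gestures on the (bend pattern, tilt) pair multiplies the vocabulary: the same hand shape can map to different phrases depending on hand orientation.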
Human Computer Interface Glove for Sign Language Translation (Parnika Gupta)
A human computer interface glove was developed with the aim of translating sign language to text & speech. The glove utilizes five flex sensors and an inertial measurement unit to accurately capture hand gestures. All components were placed on the backside of the glove providing the user with full range of motion, and not restricting the user from performing other tasks while wearing the glove.
Design and Implementation of Ultrasonic Navigator for Visually Impaired (Dr. Shanthi K.G)
The document describes the design and implementation of an ultrasonic navigator device to help visually impaired people navigate their environment. The device uses ultrasonic sensors to detect obstacles in the environment and servo motors mounted on a glove to provide haptic feedback to the user about the distance and direction of obstacles. It also includes a GPS system to provide navigation instructions to help users reach their destinations. The device aims to provide more information about obstacles than a traditional walking cane by detecting objects in multiple directions and at distances over 1 meter.
RedTacton is a new technology in which data transfer is fast and safe, as it uses a human area network for file transfer. This presentation describes RedTacton's applications, which are extensive and exciting, spanning military, entertainment, medical, and many other areas of human comfort; the future holds much promise for RedTacton technology.
It is also a safer technology that does not harm any being through radiation, so it can be considered a form of greener technology, and it merits further study for human surveillance applications.
The document discusses the design of an electronic travel aid (ETA) to help blind individuals navigate safely. It describes some challenges with existing ETAs, such as unreliable detection of obstacles and confusing sensory feedback. The document then covers spatial sensing techniques, parameters for displaying spatial information through sound and touch, and limitations of conventional ETAs and mobile robot guides. Finally, it introduces the NavBelt system which provides either an audio spatial image or single guidance signal to direct travel.
This document describes a smart wheelchair that uses voice and Bluetooth commands. It also includes temperature and heartbeat sensors for continuous monitoring by the doctor.
The document describes a smart glove system for deaf and mute people that uses flex sensors and an Arduino board to translate sign language gestures into text or speech. The system captures gestures using flex sensors on a glove connected to an Arduino board. The Arduino board transforms the gestures into text or speech using a text-to-speech converter. An Android app is also proposed to receive the translated messages and output them as voice, allowing deaf people to communicate with others. The system aims to provide an easy and portable way for speech and hearing impaired individuals to communicate.
Maintaining Good User Experience as Touch Screen Size IncreasesHenry Wong
As touchscreen sizes increase in devices like smartphones, tablets, and PCs, maintaining a good user experience becomes more challenging. Larger screens have higher parasitic capacitance and resistance, which reduces transmit frequency and refresh rates. This can negatively impact touch sensitivity, tracking, and responsiveness. To overcome these issues, touch controllers must boost transmit voltages, add receive channels, offload processing to host CPUs, and implement dynamic power management with multiple states to optimize battery life for different usage scenarios. A system-wide approach is needed to balance touch performance and power consumption as screen sizes continue growing.
RedTacton is a Human Area Networking technology/Wireless Network, which is developed by Robin Gaur Jind, that uses the surface of the human body as a safe, high speed network transmission path.
GPS & GSM based Voice Alert System for Blind Personijsrd.com
This paper presents a theoretical model and a system concept to provide a smart electronic aid for blind people. This system is intended to provide overall measures –object detection and real time assistance via Global Positioning System (GPS).The system consist of ultrasonic sensor, GPS Module, GSM Module and vibratory circuit (speakers or head phones). This project aims at the development of an Electronic Travelling Aid (ETA) kit to help the blind people to find obstacle free path. This ETA is fixed to the stick of the blind people. When the object is detected near to the blinds stick it alerts them with the help of vibratory circuit (speakers or head phones). The location of the blind is found using Global System for Mobile communications (GSM) and Global Position System (GPS).
Red Tacton is a new technology developed by NTT that uses the human body as a communication network. It induces a weak electric field on the skin's surface to transmit data at speeds up to 10 Mbps. The transmitter induces fluctuations in the body's electric field while the receiver detects these changes using a photonic sensor. Red Tacton has applications for intuitive device connections, secure authentication, and personalized services simply through touch. It provides a more natural way to exchange data compared to traditional wireless technologies.
The document describes the APR9600 single-chip voice recording and playback integrated circuit from APLUS Integrated Circuits. Key features include 60 seconds of recording time, non-volatile flash memory, random and sequential access of multiple messages, and low power consumption. It provides detailed descriptions of the device's functionality in random access mode and tape mode, including recording and playback procedures in each mode. Block diagrams and pin descriptions are also included to explain the device's internal architecture and interface.
IRJET- IoT based Portable Hand Gesture Recognition SystemIRJET Journal
This document describes a portable hand gesture recognition system to help deaf and mute people communicate. The system uses flex sensors on a glove to detect hand gestures. An Arduino microcontroller matches the gestures to a database and sends the output to a smartphone via Bluetooth. The smartphone app then converts the text to speech so others can understand the user's message. The system is meant to overcome communication barriers for those unable to speak or hear by translating sign language gestures into audible speech in real-time.
Microcontroller Based Obstacle Detection Device Using Voice Signal for the V...IJMER
This paper aims in helping the visually impaired people through an electronic aid, which
senses any obstacle in the path and alarms the user of the obstacle. The device uses a simple principle of
transmitting an ultrasonic signal in the path generated by a wave generator. The signal gets reflected by
the obstacle (if any) in the path. The reflected signal is sensed by a sensor and produces a sound signal in
the form of voice. This voice signal directs the visually impaired person to identify the obstacles in front
of them
Electronic Hand Glove for Speed Impaired and Paralyzed PatientsIEEEP Karachi
This document describes an electronic hand glove designed to help people with speech impairments or paralysis communicate through gestures. It contains flex sensors that detect finger movements and a microcontroller that interprets the gestures using a lookup table to display letters on an LCD screen. The flex sensors are an economical and robust option that converts finger bends into electrical resistance. The glove allows people with signing abilities to communicate without others understanding sign language. It has applications for home devices, security, industries, biomedicine, and virtual reality. The gesture-based control provides an alternative to keyboards/mice and does not require the user or others to understand sign language.
Smart Remote for the Setup Box Using Gesture ControlIJERA Editor
The basic purpose of this project is to provide a means to control a set top box (capable of infrared
communication), in this case Hathway using hand gestures. Thus, this system will act like a remote control for
operating set top box, but this will be achieved through hand gestures instead of pushing buttons. To send and
receive remote control signals, this project uses an infrared LED as Transmitter. Using an infrared receiver, an
Arduino can detect the bits being sent by a remote control. And to playback a remote control signal, the
Arduino can flash an infrared LED at 38 kHz. With this project we can design a gesture controlled remote by
using a glove, it can be fixed to the hand, we can send any signal of any length, at any related frequency, and
thus we can design a universal remote
Blind Stick Using Ultrasonic Sensor with Voice announcement and GPS trackingvivatechijri
for blind individuals. Basically, the ultrasonic detector is enforced within the walking stick for detection the obstacles ahead of the blind/impaired persons. If there are any obstacles, it'll alert the blind man to avoid that obstacles and therefore the alert in Our project proposes a low-priced walking stick supported latest technology and a brand-new implementation are created for economical interface the shape of voice announcement and buzzer to form a lot of helpful the stick is additionally mounted with the water detector that detects and alerts the blind if any wetness content is present to avoid slippery methods. Daily in several aspects so as to produce versatile and safe movement for the individuals. During this technology driven world, wherever individuals try to measure severally, this project propose a low-priced stick for blind individuals to achieve personal independence, in order that they will move from one place to a different simply and safely. A conveyable stick is style and developed that detects the obstacles within the path of the blind using sensors. The buzzer and vibration motor are activated once any obstacle is detected. Additionally, the stick is provided with GPS and SMS message system. GPS system give the knowledge relating to the situation of the blind man using the stick with his relations. SMS system is employed by the blind to send SMS message to the saved numbers within the microcontroller just in case of emergency.
Advance Communication through RedTacton Human Area Networking Technology - Pawan Sharma
WELCOME TO OUR PRESENTATION: It is our great pleasure to present a paper at the National Conference on Human Computer Interaction in Engineering Education (NCHCIEE-2013), organized by Jawaharlal Institute of Technology, Borawa, Dist. Khargone, M.P. Presented by Mr. Pawan Sharma, Prof. Lokesh Mehta, and Mr. Lokendra Singh Rathore (from SPITM, Mandleshwar, Dist. Khargone, M.P.).
Advance Communication through RedTacton - Human Area Networking Technology. Dept. of Electronics & Communication.
Bluetooth, infrared etc. were the most commonly used techniques for data transmission.
But these short-range wireless communication systems suffer from problems such as packet collisions, which RedTacton can reduce.
The ultimate human area network solution to all these constraints of conventional technologies is “intra body” communication, in which the human body serves as the transmission medium.
RedTacton is a new human area networking technology that uses the human body as a transmission medium. It works by inducing weak electric fields on the skin's surface during contact between two devices. Data can be transmitted at speeds up to 10 Mbps without interference even in crowded areas. RedTacton communication is initiated with a touch and allows for interactive digital transfer of various media types through natural human movements and contact with surfaces like walls or desks. Potential applications include secure medical information transmission and safety alerts in hazardous environments.
This document summarizes a student project to implement 3D touch recognition using an infrared sensor matrix and Atmega 16 microcontroller. The project involves using multiple IR sensors in a grid to detect hand gestures in 3D space and representing the output on an 8x8 LED matrix. Key aspects discussed include the circuit diagram, flow chart, code snippets, applications of the technology, and challenges faced by the students in the project.
This document summarizes a student project to create a digital navigation system for blind people as an alternative to a walking cane. The system uses an ultrasonic sensor and microcontroller to detect obstacles and a vibrating motor to provide distance feedback to the user. Initial testing showed the prototype could accurately measure distances and differentiate between distance intervals. However, users had difficulty distinguishing the intensity of vibrations corresponding to different distances. Further development is needed to improve the feedback mechanism so blind users can safely navigate independently.
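The distance-to-vibration feedback could look like the following sketch; the interval edges and duty-cycle values are assumptions, not taken from the prototype.

```python
# Sketch of the distance-to-vibration mapping the prototype describes:
# nearer obstacles produce a stronger motor PWM duty cycle.
# Interval edges and duty values are illustrative assumptions.

def vibration_duty(distance_m):
    """Map a measured distance to a motor PWM duty cycle (0.0-1.0)."""
    if distance_m < 0.5:
        return 1.0    # very close: full intensity
    if distance_m < 1.0:
        return 0.6
    if distance_m < 2.0:
        return 0.3
    return 0.0        # out of range: motor off

print(vibration_duty(0.3))  # 1.0
print(vibration_duty(1.5))  # 0.3
```

The study's finding that users struggled to tell intensities apart suggests fewer, more widely spaced levels, or distinct pulse patterns, might work better than a finer intensity scale.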
This document describes a microcontroller-based gesture vocalizer system that facilitates communication for the deaf, dumb, and blind. It consists of:
1) A data glove with bend sensors to detect finger movements
2) An accelerometer-based tilt detector to sense hand tilting
3) A gesture detector module that analyzes the bend and tilt data and identifies gestures
4) A speech synthesizer and speaker controlled by the microcontroller to vocalize the recognized gestures
5) An LCD display to show the gesture text for the deaf. The system uses microcontrollers, sensors, and other circuits to recognize static finger gestures and hand tilts and convert them into synthesized speech or displayed text.
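A minimal sketch of the gesture-detector stage (step 3 above) might quantize each sensor reading and look the resulting pattern up in a table. The ADC threshold and the gesture table below are illustrative assumptions, not the system's actual vocabulary.

```python
# Sketch of the gesture-detector stage: quantize bend-sensor and tilt
# readings, then look the pattern up in a table. Sensor ranges and the
# gesture table are illustrative assumptions.

def quantize(value, threshold=512):
    """Treat a 10-bit ADC reading at or above the threshold as bent/tilted."""
    return 1 if value >= threshold else 0

# (thumb, index, middle, ring, little, tilt) -> gesture text
GESTURES = {
    (0, 1, 1, 0, 0, 0): "victory",
    (1, 1, 1, 1, 1, 0): "open hand",
    (0, 0, 0, 0, 0, 1): "fist tilted",
}

def recognize(bend_readings, tilt_reading):
    key = tuple(quantize(v) for v in bend_readings) + (quantize(tilt_reading),)
    return GESTURES.get(key, "unknown")

print(recognize([100, 800, 900, 60, 50], 40))  # victory
```

The recognized text would then be handed to the speech synthesizer and the LCD, as steps 4 and 5 describe.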
Human Computer Interface Glove for Sign Language Translation - Parnika Gupta
A human computer interface glove was developed with the aim of translating sign language to text & speech. The glove utilizes five flex sensors and an inertial measurement unit to accurately capture hand gestures. All components were placed on the backside of the glove providing the user with full range of motion, and not restricting the user from performing other tasks while wearing the glove.
Design and Implementation of Ultrasonic Navigator for Visually Impaired - Dr. Shanthi K.G
The document describes the design and implementation of an ultrasonic navigator device to help visually impaired people navigate their environment. The device uses ultrasonic sensors to detect obstacles in the environment and servo motors mounted on a glove to provide haptic feedback to the user about the distance and direction of obstacles. It also includes a GPS system to provide navigation instructions to help users reach their destinations. The device aims to provide more information about obstacles than a traditional walking cane by detecting objects in multiple directions and at distances over 1 meter.
RedTacton is a new technology in which data transfer is fast and safe, since it uses a human area network for file transfer. This presentation describes RedTacton's applications, which are immense and exciting to see: RedTacton performs well in military, entertainment, medical, and many other areas of human comfort. The future will belong in large part to RedTacton technology.
It is a safer technology that does not harm any living being through radiation of any sort, so it can be called a form of greener technology, and it deserves further study for human surveillance.
The document discusses the design of an electronic travel aid (ETA) to help blind individuals navigate safely. It describes some challenges with existing ETAs, such as unreliable detection of obstacles and confusing sensory feedback. The document then covers spatial sensing techniques, parameters for displaying spatial information through sound and touch, and limitations of conventional ETAs and mobile robot guides. Finally, it introduces the NavBelt system which provides either an audio spatial image or single guidance signal to direct travel.
This is a smart wheelchair that uses voice and Bluetooth commands. It also includes temperature and heartbeat sensors for continuous monitoring by the doctor.
The document describes a smart glove system for deaf and mute people that uses flex sensors and an Arduino board to translate sign language gestures into text or speech. The system captures gestures using flex sensors on a glove connected to an Arduino board. The Arduino board transforms the gestures into text or speech using a text-to-speech converter. An Android app is also proposed to receive the translated messages and output them as voice, allowing deaf people to communicate with others. The system aims to provide an easy and portable way for speech and hearing impaired individuals to communicate.
Ubiquitous services that are genuinely user-friendly to everyone will require technologies that enable communication between people and objects in close proximity.
Focusing on the naturalness, inevitability, and sense of security conveyed by touch in everyday life, this describes a human area network that enables communication by touching, which we call RedTacton.
Here, the human body acts as a transmission medium supporting IEEE 802.3 half-duplex communication at 10 Mbit/s. The key component of the transceiver is an electric-field sensor implemented with an electro-optic crystal and laser light.
Power constraints play a key role in designing Human Area Networks (HANs) for biomonitoring. To alleviate the power constraints, we advocate a design that uses an asynchronous time-encoding mechanism for representing biomonitoring information and the skin surface as the communication channel.
Time encoding does not require a clock while still allowing perfect signal recovery; the communication channel is operated below 1 MHz. The ultimate human area network solution to all these constraints of conventional technologies is “intrabody” communication, in which the human body serves as the transmission medium.
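A minimal sketch of the time-encoding idea, assuming a simple integrate-and-fire encoder: a spike time is emitted whenever the running integral of the signal crosses a threshold, so no clock is required. A full time-encoding machine with provably perfect recovery is more involved; all parameters here are illustrative.

```python
# Sketch of asynchronous time encoding with an integrate-and-fire encoder:
# the signal is represented purely by the times at which its running
# integral crosses a threshold. Parameters are illustrative assumptions.

def integrate_and_fire(samples, dt, threshold):
    """Return the times at which the integral crosses the threshold."""
    spike_times, acc, t = [], 0.0, 0.0
    for x in samples:
        acc += x * dt
        t += dt
        if acc >= threshold:
            spike_times.append(t)
            acc -= threshold  # reset by subtraction keeps residual charge
    return spike_times

# A constant input of 1.0 sampled every 1 ms with threshold 5.0 fires a
# spike every 5 ms: the spike rate encodes the signal amplitude.
times = integrate_and_fire([1.0] * 20, dt=1.0, threshold=5.0)
print(times)  # [5.0, 10.0, 15.0, 20.0]
```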
The concept of intrabody communication, which uses the minute electric field propagated by the human body to transmit information, was first proposed by IBM [1]. The communication mechanism has subsequently been evaluated and reported by several research groups around the world.
This document describes an ultrasonic spectacles and waist-belt system for visually impaired and blind people. The system uses ultrasonic sensors to detect obstacles in front, left, and right directions up to 500cm away. It calculates the distance to detected objects and provides navigation guidance via prerecorded speech messages to help the user avoid obstacles. The system is controlled by a microcontroller that processes real-time sensor data and triggers the appropriate audio messages.
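The message-selection step might look like the sketch below; only the 500 cm range comes from the description above, while the message wording and the closest-obstacle rule are assumptions.

```python
# Sketch of the speech-message selection: each sensor direction reports a
# distance (in range up to 500 cm), and the nearest obstruction chooses
# the prerecorded warning. Message wording is an assumption.

MAX_RANGE_CM = 500

def select_message(front_cm, left_cm, right_cm):
    """Pick the message for the closest in-range obstacle."""
    readings = {"front": front_cm, "left": left_cm, "right": right_cm}
    in_range = {d: cm for d, cm in readings.items() if cm <= MAX_RANGE_CM}
    if not in_range:
        return "path clear"
    direction = min(in_range, key=in_range.get)
    return f"obstacle {in_range[direction]} cm to the {direction}"

print(select_message(600, 120, 450))  # obstacle 120 cm to the left
```

In the actual system, a microcontroller would map this choice to triggering the corresponding prerecorded audio clip.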
RedTacton is a technology developed in 2005 that allows devices to communicate when touched to a human body. It uses electro-optic effects and electric fields conducted through the body. RedTacton transceivers cover electrodes with insulating film so current does not enter the body. This novel approach provides high-speed, intuitive data transmission without wires at distances of a few centimeters. While expensive initially, RedTacton could revolutionize ubiquitous computing by facilitating communication between many networked devices and environments.
The document describes a Tongue Driver System (TDS) which is an assistive technology that allows users to control devices through voluntary tongue motions. It works by detecting the movement of a small magnet attached to the tongue using magnetic sensors placed in the mouth. The signals from the sensors are sent wirelessly to a microcontroller and used to assign specific tongue movements to commands for operating wheelchairs, computers, phones and other devices. The TDS provides an alternative control method that does not require the use of hands or feet and could help people with disabilities.
This project aims to develop a smart glove interpreter to facilitate communication between deaf or impaired people and normal people using wireless data transmission. The glove is fitted with flex sensors that detect gestures which are processed by a microcontroller to provide voice outputs or messages based on the gesture. It allows impaired people to control devices or send alerts. The system works by mapping different finger flexing patterns detected by flex sensors to specific text messages or voice outputs. This provides an easier means of communication for impaired individuals.
The document describes a wireless hand gesture system called GestureNail that is installed on artificial fingernails. It uses an LED and coil powered by radio frequency to notify the user when their finger is within the gesture detection area. A study found the notification increased task success rate from 85% to 100% and reduced task time by 0.7 seconds. The system could enable contactless control of devices like home electronics, public interfaces, or adding click gestures to tablets.
Third Eye for Blind using Ultrasonic Sensor Vibrator Glove - ijtsrd
The primary goal of the project is to enable blind people to use an RF remote to find their gloves. This system gives visually impaired individuals exceptional walking security by incorporating a siren and attaching many sensors. Nowadays, individuals prioritize their safety above all else when they are driving, walking, or otherwise moving around. With the help of this system, we can track a blind person's whereabouts using a mobile device and receive emergency alert messages with their precise location. The technology also provides excellent security and shows them how to walk. The system has sensors for stair detection, soil detection, and obstacle recognition, so it can automatically identify impediments and deliver alerts. A soil moisture detector is used to issue alerts according to soil moisture levels. This approach can be very helpful in letting people find the proper path while walking on floors, stairs, and in many other locations. When an emergency arises, the system can be connected to a microcontroller and notify the appropriate people. A GPS receiver, a microcontroller, and a GSM modem are the components of this tracking system. This information is processed by the microcontroller and forwarded to the appropriate numbers. Dr. B. Rambabu | S. Navya | M. Sahithi Vyas | A. Dishendra Sekhar, "Third Eye for Blind using Ultrasonic Sensor Vibrator Glove", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7, Issue-1, February 2023. URL: https://www.ijtsrd.com/papers/ijtsrd53888.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/53888/third-eye-for-blind-using-ultrasonic-sensor-vibrator-glove/dr-b-rambabu
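The SMS-composition step of the GPS/GSM tracking described above could be sketched as follows. The message format, map URL, and contact numbers are hypothetical; in the real device a GSM modem would be driven with AT commands, which this sketch does not attempt to reproduce.

```python
# Sketch (assumed format): composing the emergency SMS the tracking system
# would send once a GPS fix is available. Only message text is built here;
# actually sending it would require driving the GSM modem.

def emergency_sms(lat, lon, contacts):
    """Return (recipient, text) pairs for every saved contact number."""
    text = f"EMERGENCY: user needs help at https://maps.google.com/?q={lat},{lon}"
    return [(number, text) for number in contacts]

# Hypothetical coordinates and saved contact numbers.
msgs = emergency_sms(17.3850, 78.4867, ["+910000000001", "+910000000002"])
print(msgs[0][0])  # +910000000001
```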
Human Area Networking, also called RedTacton, is a technology developed by NTT that uses the human body's natural electric field to transmit data through physical contact at speeds up to 10 Mbps. RedTacton overcomes earlier limitations of short operating ranges and slow speeds. It works by detecting minute changes in the body's electric field using a sensitive photonic electric field sensor. RedTacton transceivers induce a weak electric field for transmission and sense changes using an electro-optic crystal and laser. This allows for interactive high-speed communication through touch with various devices and surfaces.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET - Third Eye for Blind People using Ultrasonic Vibrating Gloves with Ima... - IRJET Journal
This document describes a proposed system called a "Third Eye for Blind People Using Ultrasonic Vibrating Gloves with Image Processing" that aims to help blind people identify objects and navigate safely. The system uses ultrasonic sensors on gloves to detect obstacles within 3 meters and vibrates motors or provides audio alerts. It also includes a camera on a hat that captures images for object recognition processing on a Raspberry Pi module. The proposed system is presented as a low-cost, portable solution to help blind people with navigation and object identification challenges in daily life.
The document describes a Tongue Drive System, an assistive technology that allows severely disabled individuals to control their environment through tongue movements. A small magnet is attached to the tongue, and its movements are detected by sensors in a headset or dental brace. The sensor signals are transmitted to a computer and converted into commands to control a cursor or powered wheelchair. Initial testing with novice users found response times under one second with 100% accuracy for six tongue commands, allowing basic computer and mobility functions. Further clinical trials are needed before commercial development. The system aims to give severely disabled people more independence through control of devices with their tongues.
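The command-classification step can be sketched as a nearest-template match over the magnetic-sensor readings: each trained tongue position has a stored reference vector, and the closest one wins. The templates and readings below are made-up illustrative vectors, not real sensor data or the system's actual algorithm.

```python
# Sketch of classifying a tongue command from magnetic-sensor readings by
# nearest-template matching. Templates and readings are illustrative.

def nearest_command(reading, templates):
    """templates: {command: reference_vector}; pick nearest by squared distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda cmd: sq_dist(reading, templates[cmd]))

# Hypothetical per-command sensor signatures (three magnetic sensors).
TEMPLATES = {
    "left":    [1.0, 0.2, 0.1],
    "right":   [0.1, 0.2, 1.0],
    "forward": [0.5, 1.0, 0.5],
}

print(nearest_command([0.9, 0.3, 0.2], TEMPLATES))  # left
```

The reported sub-second response times with six commands are consistent with this kind of lightweight per-sample classification.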
This document summarizes an input device and software interface that allows people with severe tetraplegia and anarthria to interact with computers. The input device uses either skin electrical signals processed with neural networks or a proximity sensor activated by minimal movement. The software interface can be adapted to any input device and gives access to standard PC software through a mono-command system navigated by the input signals. The system aims to provide affordable computer access to severely impaired users.
The document describes RedTacton, a new human area networking technology developed by NTT Lab in Japan. RedTacton uses the human body as a transmission path for data, transmitting at speeds of 10 Mbps using the minute electric fields generated by the body. It allows for touch-based communication and transmission of media between devices carried by or embedded on the user. Potential applications include military and medical device security, access control, and media sharing between devices in close proximity to the body.
Red Tacton is a new Human Area Networking (HAN) technology that uses the human body as a transmission path for data. It works by inducing a weak electric field on the body's surface from a transmitter chip, which is then detected by a receiver chip. This allows devices to communicate by touch at speeds up to 10 Mbps without needing connectors. Potential applications include instantly downloading device drivers or sharing data between devices with a touch. While it provides a low-cost way to network devices, there are limitations on conductor size and potential safety issues from power dissipation that need further research.
This document provides an overview of robotics and embedded systems topics, including definitions of key concepts. It discusses embedded systems, robotics, advanced robotics involving various sensors and modules. It also introduces the ATmega16 microcontroller and programming in Arduino. Finally, it covers interfacing technologies like Bluetooth, Zigbee, GPS and ultrasonic sensors with microcontrollers.
NTT discovered RedTacton technology in 2005, which uses weak electric fields on the human body as a transmission medium to enable data transfer speeds of up to 10Mbps. RedTacton relies on changes in the optical properties of an electro-optic crystal caused by variations in the electric field to detect transmissions. It allows for intuitive data transfer between devices through natural human movements like touching or gripping and has applications in advertising, security, and wireless transmission between devices in contact with the body. RedTacton is seen as a promising alternative to technologies like Wi-Fi and Bluetooth for short-range personal area networks.
A Dual-Mode Human Computer Interface
1. A DUAL-MODE HUMAN COMPUTER INTERFACE COMBINING SPEECH
AND TONGUE MOTION FOR PEOPLE WITH SEVERE DISABILITIES
2. 54M Americans (~20%) are living with disabilities.
55% of SCI victims are 16~30 years old.
They need lifelong special care, at a financial, emotional, and productivity cost to their families.
11,000 cases of severe SCI are added every year to a total population of 250,000.
5. Assistive technology is any tool that helps people with disabilities to do things more quickly,
easily or independently.
• Head motion
• Eye movement
• Muscle contractions
• Tongue motion
6. Tongue Drive System (TDS)
Enables individuals with severe physical disabilities to control their environments, access
computers, and drive powered wheelchairs through their volitional tongue movements.
TDS detects the tongue motion by measuring the magnetic field variation generated by a
magnetic tracer attached to the tongue using an array of magnetic sensors mounted on a
wireless headset.
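The slides do not specify how the magnetic sensor readings are turned into discrete commands; a common approach for this kind of system is nearest-centroid matching against per-user calibration data. The following sketch is purely illustrative, with assumed sensor dimensions and made-up calibration values:

```python
import math

# Hypothetical sketch: classifying tongue position from magnetic sensor data.
# Each centroid is a flattened vector of calibrated mean readings from the
# 3-axial sensor array for one user-defined tongue command (values invented).
CALIBRATED_CENTROIDS = {
    "left":  [0.8, 0.1, 0.0, 0.2],
    "right": [0.1, 0.8, 0.0, 0.2],
    "up":    [0.0, 0.2, 0.9, 0.1],
    "down":  [0.2, 0.0, 0.1, 0.9],
}

def classify(sample):
    """Return the command whose calibrated centroid is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CALIBRATED_CENTROIDS, key=lambda cmd: dist(sample, CALIBRATED_CENTROIDS[cmd]))

print(classify([0.75, 0.15, 0.05, 0.2]))  # closest to the "left" centroid
```

In practice the real system uses a calibration session to record each user's mouth positions, since tracer placement and anatomy differ between users.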
7. The TDS in its current form has mainly been designed to substitute for mouse cursor movements in the cardinal directions, plus clicking functions, by offering users six simultaneously accessible commands. Each command is associated with a particular user-defined position in the mouth and is activated when the tongue reaches that position.
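The six-command mouse-substitution scheme described above can be sketched as a simple command-to-action dispatch. The command names and the unit-step cursor model are assumptions for illustration; actual assignments are user-defined:

```python
# Hypothetical mapping of six simultaneously accessible TDS commands to
# mouse-substitute actions: four cursor directions plus two clicks.
COMMAND_ACTIONS = {
    "left":        ("move", (-1, 0)),
    "right":       ("move", (1, 0)),
    "up":          ("move", (0, -1)),
    "down":        ("move", (0, 1)),
    "left_click":  ("click", "left"),
    "right_click": ("click", "right"),
}

def dispatch(command, cursor):
    """Apply a tongue command to an (x, y) cursor; return (cursor, click_event)."""
    kind, arg = COMMAND_ACTIONS[command]
    if kind == "move":
        dx, dy = arg
        return (cursor[0] + dx, cursor[1] + dy), None
    return cursor, arg  # click event; cursor position unchanged

print(dispatch("right", (10, 10)))  # ((11, 10), None)
```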
8. Speech Recognition
Speech recognition is a technology that allows spoken input into systems.
You talk to your computer, phone or device and it uses what you said as input to
trigger some action.
Individuals with severe disabilities can benefit from this technology as long as their
vocal abilities are intact.
On its own, however, speech input can take too much time for task completion.
10. OVERVIEW
The dual-mode Tongue Drive System (dTDS) is a wireless and wearable human
computer interface.
It is designed to allow people with severe disabilities to use computers more
effectively with increased speed, flexibility, and independence through their tongue
motion and speech.
The dTDS detects the user’s tongue motion using a magnetic tracer and an array of
magnetic sensors embedded in a compact and ergonomic wireless headset.
It also captures the user’s voice wirelessly using a small microphone embedded in
the same headset.
11.
12. DUAL-MODE TONGUE DRIVE SYSTEM (dTDS)
The dTDS operates based on the information collected from two independent input
channels: free voluntary tongue motion and speech.
The two input channels are processed independently.
The primary dTDS modality involves tracking tongue motion in the 3D oral space
using a small magnetic tracer attached to the tongue via adhesives, piercing, or
implantation and an array of magnetic sensors, similar to the original TDS.
13. The secondary dTDS input modality is based on the user’s speech, captured using a
microphone.
It is conditioned, digitized, and wirelessly transmitted to the smartphone/PC along
with the magnetic sensor data.
Both TDS and SR modalities are simultaneously accessible to the dTDS users.
14. The tongue-based primary modality is always active and regarded as the default
input modality.
The tongue commands, however, can be used to enable/disable the speech-based
secondary modality via the dTDS graphical user interface (GUI) to reduce the system
power consumption and extend battery lifetime.
16. A small permanent magnetic tracer attached to the tongue using tissue adhesives or
embedded in a titanium tongue stud.
A custom-designed wireless headset that supports 3-axial magnetic sensors and a
microphone plus a control unit that combines and packetizes the acquired raw data before
wireless transmission.
A wireless transceiver that receives the data packets from the headset and delivers them to
the PC or smartphone.
A GUI running on the PC or smartphone.
17. Permanent Magnetic Tracer
A small (3 mm × 1.6 mm) disc-shaped rare earth magnet with high residual magnetic
strength (Br = 14,500 Gauss) was used as the tracer. A small tracer is desired to minimize
any risk of discomfort and potential impact on the user’s speech, which is important in
achieving high accuracy with commercial SR software. The high Br helps maintain a
sufficient signal-to-noise ratio (SNR) despite the small tracer size.
18. Wireless Headset
A customized wireless headset was designed to combine aesthetics with user comfort,
mechanical strength, and stable positioning of the sensors. The headset was also
designed to offer flexibility and adjustability to adapt to the user’s head anatomy, while
enabling proper positioning of the magnetic sensors and the microphone near the user’s
cheeks.
It also has a control unit that combines and packetizes the acquired raw data
before wireless transmission.
19. Block diagram of circuit in the headset
The MCU assembles an RF packet containing one audio and one magnetic data
frame and transmits it wirelessly.
After sending each RF packet, the MCU expects to receive a back telemetry packet.
Telemetry packet contains one data frame and an optional audio frame.
20. The data frame contains control commands from the PC/smartphone to switch on/off
the speech modality.
The audio frame in the back telemetry packet contains digitized sound signals from
the PC/smartphone.
The MCU extracts the audio signal from the telemetry packet and sends it to the
earphones.
Then it sends next RF packet.
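The RF packet framing described in slides 19–20 can be sketched as follows. The field sizes, sample counts, and the sequence byte are assumptions for illustration; the slides only state that each packet carries one audio frame and one magnetic data frame:

```python
import struct

# Hypothetical framing of one dTDS RF packet: a sequence byte, one audio
# frame, and one magnetic-sensor data frame. All sizes are assumed.
AUDIO_SAMPLES = 4   # 16-bit audio samples per frame (assumed)
SENSOR_AXES = 6     # 16-bit readings per magnetic data frame (assumed)
FMT = f"<B{AUDIO_SAMPLES}h{SENSOR_AXES}h"  # little-endian layout

def pack_rf_packet(seq, audio, magnetic):
    """Assemble one RF packet: sequence byte + audio frame + magnetic frame."""
    assert len(audio) == AUDIO_SAMPLES and len(magnetic) == SENSOR_AXES
    return struct.pack(FMT, seq, *audio, *magnetic)

def unpack_rf_packet(packet):
    """Split a received packet back into (seq, audio frame, magnetic frame)."""
    fields = struct.unpack(FMT, packet)
    return fields[0], list(fields[1:1 + AUDIO_SAMPLES]), list(fields[1 + AUDIO_SAMPLES:])

pkt = pack_rf_packet(7, [100, -100, 50, -50], [1, 2, 3, 4, 5, 6])
print(unpack_rf_packet(pkt))  # (7, [100, -100, 50, -50], [1, 2, 3, 4, 5, 6])
```

The back telemetry packet would be framed analogously, with a control data frame and an optional audio frame for the earphones.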
21. How is data transferred?
A simple but effective wireless handshaking has been implemented between the
headset and the wireless transceiver to establish a dedicated wireless connection
between the two devices without interference by other nearby dTDS headsets.
When the dTDS headset is turned on, it enters an initialization mode by default and
broadcasts a handshaking request packet that contains a unique ID.
This is done at 1s time intervals for one minute.
If the headset receives a handshaking response packet back from a nearby USB
transceiver within the initialization period, it will update its frequency channel.
22. Then it sends an acknowledgement packet back to the transceiver to complete the
handshaking.
The headset then switches to normal operating mode using the received parameters.
Otherwise, in the absence of a handshaking response, the headset will enter standby
mode, blinking a red LED to indicate that initialization has failed and the power
cycle should be repeated.
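The headset-side handshaking logic from slides 21–22 can be sketched as a small state machine. The slides give the timing (one request per second, for one minute); the message shapes and function names below are assumptions:

```python
import itertools

# Sketch of the headset-side handshaking described in slides 21-22:
# broadcast a request (with a unique ID) once per second for up to a
# minute; on a response, adopt the channel, acknowledge, and go normal.
BROADCAST_INTERVAL_S = 1
INIT_TIMEOUT_S = 60

def headset_handshake(headset_id, send_packet, wait_for_response):
    """Return ('normal', channel) on success, or ('standby', None) on timeout.

    send_packet(msg) broadcasts one packet; wait_for_response(timeout)
    returns the transceiver's response (a dict with a 'channel' key) or None.
    """
    for _ in range(INIT_TIMEOUT_S // BROADCAST_INTERVAL_S):
        send_packet({"request": True, "id": headset_id})
        response = wait_for_response(timeout=BROADCAST_INTERVAL_S)
        if response is not None:
            channel = response["channel"]               # update frequency channel
            send_packet({"ack": True, "id": headset_id})  # complete the handshake
            return ("normal", channel)
    return ("standby", None)  # initialization failed: blink red LED

# Simulated transceiver that answers on the third request:
attempts = itertools.count(1)
result = headset_handshake(
    headset_id=0xA1,
    send_packet=lambda msg: None,
    wait_for_response=lambda timeout: {"channel": 5} if next(attempts) >= 3 else None,
)
print(result)  # ('normal', 5)
```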
23. Wireless Transceiver
Transceiver has two operating modes: handshaking and normal.
In the handshaking mode, the transceiver first listens to any incoming handshaking
request packets from dTDS headsets within range (~10 m).
If the transceiver receives a handshaking request packet with an appropriate header and a
valid network ID, it will scan through all available frequency channels, and chooses the
least crowded one as the communication channel for that specific headset.
The transceiver then switches to transmit mode and sends a handshaking response packet
to the headset.
24. If an acknowledgement is received within 5 s, the transceiver will update its
frequency channel to match the dTDS headset's channel and enter the normal
operating mode.
Otherwise, the transceiver will notify the PC/smartphone that the handshaking has
failed.
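The transceiver-side behavior from slides 23–24 can be sketched in the same style. The channel count, energy-based "crowdedness" measure, and function names are illustrative assumptions; the slides only say the least crowded channel is chosen and an acknowledgement is awaited for 5 s:

```python
# Sketch of the transceiver-side handshaking from slides 23-24: after a
# valid request, scan all channels, pick the least crowded one, send a
# response, and wait up to 5 s for the headset's acknowledgement.
NUM_CHANNELS = 16   # assumed channel count
ACK_TIMEOUT_S = 5

def choose_channel(measure_energy):
    """Scan every channel and return the one with the lowest measured energy."""
    return min(range(NUM_CHANNELS), key=measure_energy)

def transceiver_handshake(measure_energy, send_response, wait_for_ack):
    channel = choose_channel(measure_energy)
    send_response({"channel": channel})
    if wait_for_ack(timeout=ACK_TIMEOUT_S):
        return ("normal", channel)   # switch to the headset's channel
    return ("failed", None)          # notify the PC/smartphone

# Simulated RF environment: channel 11 is the quietest.
energies = [50, 42, 60, 55, 48, 44, 70, 65, 52, 47, 49, 30, 58, 61, 53, 45]
result = transceiver_handshake(
    measure_energy=lambda ch: energies[ch],
    send_response=lambda msg: None,
    wait_for_ack=lambda timeout: True,
)
print(result)  # ('normal', 11)
```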
25. Graphical User Interface (GUI)
Generally, there is no need to present dTDS users with the GUI.
As long as the sensor signal processing (SSP) engine is running in the background, the
dTDS can directly substitute for the mouse and keyboard, providing the user with access
to all the applications or software on the PC.
The GUI is mainly used to activate/deactivate the speech mode.
26. Major Benefits of dTDS
Increasing the speed of access by using each modality for its optimal target tasks and functions.
Allowing users to select either modality depending on personal and environmental
conditions, such as weakness, fatigue, acoustic noise, and privacy.
Providing users with a higher level of independence by eliminating the need to switch from
one assistive technology (AT) to another, which often requires assistance from a caregiver.
28. Conclusions
The dual-mode Tongue Drive System (dTDS) allows people with severe disabilities to
use computers by navigating a mouse cursor and typing via two modalities: voluntary
tongue motion and speech, which are simultaneously available to them.
The dTDS users can choose their preferred modality based on the nature of the tasks,
operating environments, and their physical conditions.
The performance of the dTDS was significantly better than unimodal TDS and SR in a
task that involved both navigation and typing (e.g. web surfing).