This document describes a voice-operated wheelchair system that allows disabled users to control a wheelchair through voice commands. The system uses a microcontroller, wireless microphone, voice recognition processor and motor control interface to integrate voice command functionality. It is trained to recognize basic movement commands like forward, reverse, left and right. When a user speaks a command into the microphone, the voice recognition processor detects the word and sends the corresponding signal to the microcontroller to drive the motors and move the wheelchair. This system is designed to give wheelchair users independence by enabling control through their voice.
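The command-to-signal mapping described above can be sketched in a few lines. This is an illustrative simulation only; the motor state values and the `drive()` helper are assumptions, not details from the original design:

```python
# Hypothetical sketch: translating recognized voice commands into motor
# drive signals. 1 = forward, -1 = reverse, 0 = stop for each wheel motor.
MOTOR_STATES = {
    "forward": (1, 1),
    "reverse": (-1, -1),
    "left":    (0, 1),   # stop the left wheel, drive the right, to turn left
    "right":   (1, 0),
    "stop":    (0, 0),
}

def drive(command):
    """Return (left_motor, right_motor) signals for a recognized command."""
    command = command.strip().lower()
    # Unrecognized words halt the chair, a sensible fail-safe default.
    return MOTOR_STATES.get(command, (0, 0))

print(drive("forward"))  # (1, 1)
print(drive("mumble"))   # (0, 0)
```

In a real build the tuple would be written to the motor driver's input pins; here it is returned so the mapping itself is easy to inspect.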
This document describes a project to build a voice-operated wheelchair for physically disabled persons. The objective is to design hardware for voice recognition and corresponding wheelchair actions. Group members include Mandar Jadhav, Mayuresh Todkar and Dayanand Patil, guided by Dr. V. Jayashree. The system is aimed to help those paralyzed below the neck or with quadriplegia. It will allow independent wheelchair movement through voice commands without need for personal assistance. The design uses a microphone, voice recognition IC, microcontroller, motor drivers and batteries to power DC motors for forward, reverse, left and right wheelchair movement.
The system consists of a mobile app, an Arduino, a Bluetooth receiver module, and an L293D motor driver IC. The robot's movement is controlled by voice commands captured by the microphone in the mobile phone.
Smart wheel chair based on voice recognition for handicapped, by Sagar Bayas
This project aims to develop a voice controlled wheelchair system using a speech recognition module. The goal is to allow disabled or elderly people who have difficulty moving to control a wheelchair independently using their voice. The system uses a microcontroller and DC motors to move the wheelchair based on voice commands detected by the microphone. The voice commands will allow the user to move the wheelchair forward, backward, left, right, or stop. This aims to give users more independence and a better quality of life without relying on caregivers for mobility assistance.
This is a smart wheelchair that accepts voice and Bluetooth commands. It also includes temperature and heartbeat sensors so that a doctor can monitor the user continuously.
This document describes an IoT-operated wheelchair that can be controlled through hand gestures. The wheelchair was implemented using an Android smartphone equipped with an accelerometer sensor for gesture recognition, a microcontroller, DC motors, an H-bridge motor driver, and Bluetooth. The wheelchair can move forward, backward, left and right based on gestures detected by the smartphone accelerometer. It is intended to help elderly and physically disabled users navigate inside their homes independently without external assistance. Some potential applications mentioned include use in hospitals and for disabled patients.
This robotic wheelchair is operated by human speech commands. An Android device transmits the voice commands over Bluetooth to an 8051 microcontroller. The commands recognized by the voice module are sent through the Bluetooth transmitter and received by the wheelchair, which moves left, right, backward, or forward accordingly.
The document describes an eye-tracking wheelchair designed for paralyzed or motor disabled patients. The wheelchair uses an optical eye tracking system to track the user's eye movements and direct the wheelchair accordingly without physical contact. When the user looks left, right, or straight, the wheelchair will move in that direction. An Arduino microcontroller interfaces with the camera and motors to process images of the user's pupil position and control the wheelchair's movement. The system aims to provide independent mobility to those with physical disabilities.
This document describes the implementation of a smart helmet system using solar power. The system includes an alcohol sensor, temperature sensor, vibration sensor, GPS module, GSM module, and microcontroller to provide safety features like detecting accidents, monitoring alcohol levels, tracking location, and regulating temperature. The smart helmet aims to reduce accidents by preventing intoxicated riding and automatically alerting emergency contacts in the event of a crash. It can also be used to remotely immobilize a vehicle if the helmet is removed or stolen. The system design uses low power components like a solar cell to allow for portable operation.
This document describes an android-based automated smart wheelchair that can be controlled via smartphone. Key points:
1) It uses a smartphone's built-in accelerometer sensors and Bluetooth technology to transmit control signals to a microcontroller connected to DC motors that power the wheelchair's wheels.
2) The microcontroller receives the Bluetooth signals and controls the wheelchair motion.
3) It allows for easier mobility for disabled users by automating the wheelchair and controlling it remotely with a smartphone.
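The accelerometer-to-Bluetooth control path in points 1-3 can be sketched as a simple classifier. The tilt threshold and the single-byte command codes below are assumptions for illustration; the actual app may use different values:

```python
# Illustrative sketch: mapping smartphone accelerometer readings (m/s^2)
# to one-byte wheelchair commands sent over Bluetooth.
TILT_THRESHOLD = 3.0  # assumed tilt magnitude before a gesture registers

def gesture_to_command(ax, ay):
    """Return a one-byte command from X/Y tilt acceleration."""
    if ay > TILT_THRESHOLD:
        return b"F"   # tilt forward
    if ay < -TILT_THRESHOLD:
        return b"B"   # tilt backward
    if ax > TILT_THRESHOLD:
        return b"R"
    if ax < -TILT_THRESHOLD:
        return b"L"
    return b"S"       # phone held level: stop

print(gesture_to_command(0.0, 5.2))   # b'F'
print(gesture_to_command(0.0, 0.0))   # b'S'
```

On the receiving side, the microcontroller would read this byte from its Bluetooth serial link and set the motor driver pins accordingly.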
Soldier tracking and health monitoring system, by Joshpin Bala.B
This document describes a project to track soldiers and monitor their health status during war using sensors, GPS, and GSM. The system includes sensors to monitor a soldier's pulse rate and temperature, along with a GPS module to track location. If the soldier's pulse drops below 60 or their coordinates exceed a certain range, the system will automatically make an emergency call through the GSM module to alert others. The goal is to allow army personnel to plan strategies based on real-time soldier location and health data. The system uses an ARM7 microcontroller to process data from the sensors and GPS and communicate through GSM when needed.
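The alert rule described above (pulse below 60, or coordinates outside a permitted range) reduces to a small predicate. The geofence bounds here are placeholders, not values from the project:

```python
# Hedged sketch of the emergency-call trigger logic. The safe coordinate
# ranges are hypothetical; the real system would configure mission bounds.
SAFE_LAT = (17.0, 18.0)   # assumed permitted latitude range
SAFE_LON = (78.0, 79.0)   # assumed permitted longitude range

def needs_alert(pulse_bpm, lat, lon):
    """True when the GSM module should place an emergency call."""
    low_pulse = pulse_bpm < 60
    out_of_range = not (SAFE_LAT[0] <= lat <= SAFE_LAT[1]
                        and SAFE_LON[0] <= lon <= SAFE_LON[1])
    return low_pulse or out_of_range

print(needs_alert(55, 17.4, 78.5))   # True (low pulse)
print(needs_alert(72, 17.4, 78.5))   # False
```

The ARM7 firmware would evaluate this check each time fresh sensor and GPS readings arrive, and dial out over GSM only on a True result.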
This document describes a direction controlled wheelchair for physically disabled people using voice control and an RF module. The wheelchair can be controlled through voice commands to a voice recognition module or through an RF remote. It also monitors the user's temperature and detects obstacles using sensors. The system uses a PIC microcontroller, voice recognition module, RF transmitter/receiver, temperature sensor, IR obstacle sensor, motor driver, and LCD display. The goal is to allow disabled individuals to move independently through voice or remote control while also monitoring their health conditions.
As Digital Still Cameras (DSC) become smaller, cheaper and higher in resolution, photographs are increasingly prone to blurring from shaky hands. Optical image stabilization (OIS) is an effective solution that addresses the quality of images, and is an idea that has been around for at least 30 years. It has only recently made its way into the low-cost consumer camera market, and will soon be migrating to the higher end camera phones. This paper provides an overview of common design practices and considerations for optical image stabilization and how silicon-based MEMS dual-axis gyroscopes with their size, cost and performance advantages are enabling this vital function for image capturing devices.
IoT operated wheel chair / smart wheelchair, by YOGEESH M
This document describes a project to create an IoT-operated wheelchair that can be controlled through hand gestures using an accelerometer sensor. The wheelchair is intended to help elderly and physically disabled users navigate inside their home independently. It uses a Raspberry Pi microcontroller along with sensors like an IR sensor for obstacle detection and a camera for live video streaming. The wheelchair's motion in four directions is controlled by tilting the hand in those directions, which is detected by the accelerometer sensor. This provides independent mobility assistance without requiring another person's help.
This document describes an eye-controlled wheelchair that allows quadriplegic users to move independently. A webcam detects eye movements using MATLAB software and sends signals to a microcontroller. The microcontroller then drives motors to move the wheelchair left, right or straight. Key aspects include using eye detection algorithms to determine direction, an ATMega32A microcontroller to control motors based on eye signals, and safety features like speed control and blink detection to halt movement. The system aims to improve mobility for quadriplegic individuals but requires further refinement for commercial use, such as improving movement detection during casual eye movements.
The document describes an eye movement controlled powered wheelchair for people with physical disabilities. It uses an optical eye tracking system to detect eye movements and translate them to commands to control the wheelchair's movement and direction. Sensors are also included for obstacle detection. The system aims to provide an alternative mobility option for those unable to use traditional interfaces. It consists of a wireless camera, a computer for processing eye images, microcontrollers to transmit commands and control motors, and motors attached to the wheelchair. Eye movements are detected using computer vision algorithms and translated to forward, left, or right motions. Additional safety features like obstacle detection and a manual joystick mode are included. The wheelchair aims to improve mobility for quadriplegics and others through a non-invasive interface.
The aim of this project is to control a wheelchair and electrical devices using MEMS (Micro Electro-Mechanical Systems) accelerometer technology. The MEMS accelerometer is a highly sensitive sensor capable of detecting tilt. The system reads the tilt from the accelerometer and changes the wheelchair's direction accordingly: if the tilt is to the right, the wheelchair moves right; if the tilt is to the left, it moves left. Wheelchair movement can be controlled in the forward, reverse, left and right directions.
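One way to derive tilt from raw accelerometer axes is to compute roll and pitch angles from the gravity components. The 15-degree dead zone below is an assumption for illustration, not a figure from the project:

```python
import math

# Sketch of tilt classification from MEMS accelerometer axes (m/s^2).
# A dead zone keeps the chair stopped when the sensor is roughly level.
DEAD_ZONE_DEG = 15  # assumed threshold before a tilt counts as a command

def tilt_direction(ax, ay, az):
    """Classify tilt into a wheelchair direction from gravity components."""
    roll = math.degrees(math.atan2(ay, az))    # left/right tilt angle
    pitch = math.degrees(math.atan2(-ax, az))  # forward/back tilt angle
    if abs(roll) < DEAD_ZONE_DEG and abs(pitch) < DEAD_ZONE_DEG:
        return "stop"
    if abs(pitch) >= abs(roll):
        return "forward" if pitch > 0 else "reverse"
    return "right" if roll > 0 else "left"

print(tilt_direction(0.0, 0.0, 9.8))   # stop (sensor level)
print(tilt_direction(0.0, 5.0, 8.4))   # right (roll about +31 degrees)
```

Comparing the two angle magnitudes before choosing an axis avoids jitter between, say, "forward" and "left" when the sensor is tilted diagonally.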
This ppt describes how to detect vehicle movement on highways to switch ON only a block of street lights ahead of it (vehicle), and to switch OFF the trailing lights to save energy.
The document describes a smart note taker product that allows users to take notes by writing in the air. The notes are sensed and stored digitally. Key features include allowing blind users to write freely, and enabling instructors to write notes during presentations that are broadcast to students. It works using sensors to detect 3D writing motions, which are processed, stored, and can be viewed on a display or sent to other devices. An applet program and database are used to recognize words written in the air and print them. The smart note taker offers advantages over digital pens like ease of use and time savings.
ACCIDENT DETECTION AND VEHICLE TRACKING USING GPS, GSM AND MEMS, by Krishna Moparthi
This document describes a vehicle accident detection and tracking system using GPS, GSM, and MEMS sensors. The system detects accidents using a MEMS sensor and then uses GPS to determine the vehicle's location. The location is sent via GSM to emergency services and authorized contacts to provide rapid response. The system aims to quickly locate accident sites and notify help in remote areas with limited communication infrastructure.
Smart note taker is a pen that can write in air and store the information in an internal memory chip. It uses displacement sensors to sense the pen's movement and compare the handwriting to letters in its database to store what is written. Notes can then be uploaded and edited on a PC by docking the pen. The smart note taker allows paperless note taking anywhere and saves time over traditional notetaking. However, it has a very high cost which limits its accessibility. It finds applications in presentations, document editing and signatures.
Vehicle-to-vehicle communication can help reduce traffic accidents and increase road safety. The document discusses a vehicle-to-vehicle communication protocol for cooperative collision warning. It proposes that abnormal vehicles generate and transmit emergency warning messages to surrounding vehicles. The protocol aims to deliver these messages with low latency while supporting multiple abnormal vehicles. It uses a rate decreasing transmission algorithm and state transitions to prioritize warnings and eliminate redundant messages. The approach seeks to warn endangered vehicles in milliseconds while enabling communication from many abnormal vehicles.
PROJECT REPORT ON Home automation using Bluetooth, by Aakashkumar276
This document summarizes a student project on developing a home automation system using an Arduino board and Bluetooth. The system allows users to control electrical appliances like fans and lights in their home remotely using an Android phone app. The app communicates with an Arduino Uno microcontroller via HC-05 Bluetooth module. The Arduino is connected to a 4-channel relay board to switch appliances on and off. The project aims to provide a low-cost solution for remote home control without needing physical switches or remote controls.
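The relay-switching logic of such a system is essentially a lookup from a received character to a relay channel. The command characters below ('1'-'4' for on, 'a'-'d' for off) are assumptions; the actual app may use different codes:

```python
# Hedged sketch: switching a 4-channel relay board from single characters
# received over the HC-05 Bluetooth serial link.
relays = [False, False, False, False]  # four channels, all initially off

def handle_command(ch):
    """'1'..'4' switch a channel on, 'a'..'d' switch it off (assumed codes)."""
    if ch in "1234":
        relays[int(ch) - 1] = True
    elif ch in "abcd":
        relays[ord(ch) - ord("a")] = False
    return list(relays)

print(handle_command("1"))  # [True, False, False, False]
print(handle_command("3"))  # [True, False, True, False]
print(handle_command("a"))  # [False, False, True, False]
```

On the Arduino, each boolean would map to a `digitalWrite` on the pin wired to that relay channel; the phone app simply sends one byte per button press.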
Gesture control robot using accelerometer ppt, by Rajendra Prasad
This document describes a gesture control robot project that uses an accelerometer. The aim of the project is to control the movements and directions of vehicles like airplanes, trains and cars using MEMS technology. The transmitter module uses an accelerometer, comparator, encoder and RF transmitter. The receiver module uses an RF receiver, decoder, microcontroller and actuator motor driver. The accelerometer provides analog data about movement in the X, Y and Z directions. The comparator and encoder convert the analog data for transmission. The RF modules transmit and receive the signals. The microcontroller processes the received data and the actuator converts it to control vehicle movements based on hand gestures detected by the accelerometer.
This document describes a hand gesture controlled wireless land rover. The project uses an accelerometer to detect hand gestures which are transmitted via RF to control motors and move the land rover in four directions. The key components are a microcontroller, accelerometer, encoder, transmitter, receiver, motor driver and motors. Programming is done using AVR studio to flash the microcontroller. Advantages include compact size and wireless control using natural hand gestures. Future enhancements could include onboard controls, image processing for improved sensitivity and gyro sensors.
The document discusses autonomous vehicles and their potential benefits and challenges. It defines autonomous vehicles as vehicles that can travel from one point to another without human supervision. It notes that human error causes over 90% of automobile accidents and that autonomous vehicles could help reduce accidents by taking human error out of driving. The document outlines some of the key technologies used in autonomous vehicles, such as LIDAR, GPS, radar, ultrasonic sensors, video cameras, and a central computer. It discusses companies working on autonomous vehicle technologies like Google, Mercedes Benz, and Tesla. It also discusses some of the pros and cons of autonomous vehicles.
This document describes a density-based traffic light controller system that uses sensors to measure traffic load and detect emergency vehicles. The system uses a microcontroller to automatically adjust signal timing based on traffic density during normal operation. When an emergency vehicle is detected, the system overrides normal timing to provide a green light in the direction of the emergency vehicle while blocking other lanes. The system is intended to help reduce traffic jams by adapting to current traffic conditions. Key components include IR sensors to detect vehicles, a microcontroller to control signal timing, and an RF receiver to detect emergency vehicles like ambulances.
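The density-based timing rule with an emergency override can be expressed as a small function. The base time, per-vehicle increment, and cap below are illustrative assumptions, not figures from the document:

```python
# Illustrative sketch: green-light duration scaled by IR-sensor vehicle
# counts, with an RF-detected emergency vehicle forcing the maximum.
BASE_GREEN = 10      # assumed seconds of green for an empty lane
PER_VEHICLE = 2      # assumed extra seconds per counted vehicle
MAX_GREEN = 60       # assumed upper bound on any green phase

def green_time(vehicle_count, emergency=False):
    """Return the green duration in seconds for one lane."""
    if emergency:
        return MAX_GREEN  # override: clear the emergency vehicle's path
    return min(BASE_GREEN + PER_VEHICLE * vehicle_count, MAX_GREEN)

print(green_time(0))         # 10
print(green_time(12))        # 34
print(green_time(5, True))   # 60
```

Capping the phase length keeps one congested lane from starving the others, while the emergency branch mirrors the override behavior described above.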
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document describes the implementation of a smart helmet system using solar power. The system includes an alcohol sensor, temperature sensor, vibration sensor, GPS module, GSM module, and microcontroller to provide safety features like detecting accidents, monitoring alcohol levels, tracking location, and regulating temperature. The smart helmet aims to reduce accidents by preventing intoxicated riding and automatically alerting emergency contacts in the event of a crash. It can also be used to remotely immobilize a vehicle if the helmet is removed or stolen. The system design uses low power components like a solar cell to allow for portable operation.
This document describes an android-based automated smart wheelchair that can be controlled via smartphone. Key points:
1) It uses a smartphone's built-in accelerometer sensors and Bluetooth technology to transmit control signals to a microcontroller connected to DC motors that power the wheelchair's wheels.
2) The microcontroller receives the Bluetooth signals and controls the wheelchair motion.
3) It allows for easier mobility for disabled users by automating the wheelchair and controlling it remotely with a smartphone.
soldier tracking and health monitoring systemJoshpin Bala.B
This document describes a project to track soldiers and monitor their health status during war using sensors, GPS, and GSM. The system includes sensors to monitor a soldier's pulse rate and temperature, along with a GPS module to track location. If the soldier's pulse drops below 60 or their coordinates exceed a certain range, the system will automatically make an emergency call through the GSM module to alert others. The goal is to allow army personnel to plan strategies based on real-time soldier location and health data. The system uses an ARM7 microcontroller to process data from the sensors and GPS and communicate through GSM when needed.
This document describes a direction controlled wheelchair for physically disabled people using voice control and an RF module. The wheelchair can be controlled through voice commands to a voice recognition module or through an RF remote. It also monitors the user's temperature and detects obstacles using sensors. The system uses a PIC microcontroller, voice recognition module, RF transmitter/receiver, temperature sensor, IR obstacle sensor, motor driver, and LCD display. The goal is to allow disabled individuals to move independently through voice or remote control while also monitoring their health conditions.
As Digital Still Cameras (DSC) become smaller, cheaper and higher in resolution, photographs are increasingly prone to blurring from shaky hands. Optical image stabilization (OIS) is an effective solution that addresses the quality of images, and is an idea that has been around for at least 30 years. It has only recently made its way into the low-cost consumer camera market, and will soon be migrating to the higher end camera phones. This paper provides an overview of common design practices and considerations for optical image stabilization and how silicon-based MEMS dual-axis gyroscopes with their size, cost and performance advantages are enabling this vital function for image capturing devices
Iot operated wheel chair / smart wheelchair YOGEESH M
This document describes a project to create an IOT operated wheelchair that can be controlled through hand gestures using an accelerometer sensor. The wheelchair is intended to help elderly and physically disabled users navigate inside their home independently. It uses a Raspberry Pi microcontroller along with sensors like an IR sensor for obstacle detection and a camera for live video streaming. The wheelchair's motion in four directions is controlled by tilting the hand in those directions which is detected by the accelerometer sensor. This provides independent mobility assistance without requiring another person's help.
This document describes an eye-controlled wheelchair that allows quadriplegic users to move independently. A webcam detects eye movements using MATLAB software and sends signals to a microcontroller. The microcontroller then drives motors to move the wheelchair left, right or straight. Key aspects include using eye detection algorithms to determine direction, an ATMega32A microcontroller to control motors based on eye signals, and safety features like speed control and blink detection to halt movement. The system aims to improve mobility for quadriplegic individuals but requires further refinement for commercial use, such as improving movement detection during casual eye movements.
The document describes an eye movement controlled powered wheelchair for people with physical disabilities. It uses an optical eye tracking system to detect eye movements and translate them to commands to control the wheelchair's movement and direction. Sensors are also included for obstacle detection. The system aims to provide an alternative mobility option for those unable to use traditional interfaces. It consists of a wireless camera, computer for processing eye images, microcontrollers to transmit commands and control motors, and motors attached to the wheelchair. Eye movements are detected using computer vision algorithms and translated to forward, left, or right motions. Additional safety features like obstacle detection and a manual joystick mode are included. The wheelchair aims to improve mobility for quadriplegics and others through a non-invasive
The aim of this project is to controlling a wheel chair and electrical devices by using MEMS accelerometer sensor (Micro Electro-Mechanical Systems) technology. MEMS accelerometer sensor is a Micro Electro Mechanical Sensor which is a highly sensitive sensor and capable of detecting the tilt. This sensor finds the tilt and makes use of the accelerometer to change the direction of the wheel chair depending on tilt. For example if the tilt is to the right side then the wheel chair moves in right direction or if the tilt is to the left side then the wheel chair moves in left direction. Wheel chair movement can be controlled in Forward, Reverse, and Left and Right direction.
This ppt describes how to detect vehicle movement on highways to switch ON only a block of street lights ahead of it (vehicle), and to switch OFF the trailing lights to save energy.
Edgefxkits.com has a wide range of electronic projects ideas that are primarily helpful for ECE, EEE and EIE students and the ideas can be applied for real life purposes as well.
http://www.edgefxkits.com/
Visit our page to get more ideas on popular electronic projects developed by professionals.
Edgefx provides free verified electronic projects kits around the world with abstracts, circuit diagrams, and free electronic software. We provide guidance manual for Do It Yourself Kits (DIY) with the modules at best price along with free shipping.
The document describes a smart note taker product that allows users to take notes by writing in the air. The notes are sensed and stored digitally. Key features include allowing blind users to write freely, and enabling instructors to write notes during presentations that are broadcast to students. It works using sensors to detect 3D writing motions, which are processed, stored, and can be viewed on a display or sent to other devices. An applet program and database are used to recognize words written in the air and print them. The smart note taker offers advantages over digital pens like ease of use and time savings.
ACCIDENT DETECTION AND VEHICLE TRACKING USING GPS,GSM AND MEMSKrishna Moparthi
This document describes a vehicle accident detection and tracking system using GPS, GSM, and MEMS sensors. The system detects accidents using a MEMS sensor and then uses GPS to determine the vehicle's location. The location is sent via GSM to emergency services and authorized contacts to provide rapid response. The system aims to quickly locate accident sites and notify help in remote areas with limited communication infrastructure.
Smart note taker is a pen that can write in air and store the information in an internal memory chip. It uses displacement sensors to sense the pen's movement and compare the handwriting to letters in its database to store what is written. Notes can then be uploaded and edited on a PC by docking the pen. The smart note taker allows paperless note taking anywhere and saves time over traditional notetaking. However, it has a very high cost which limits its accessibility. It finds applications in presentations, document editing and signatures.
Vehicle-to-vehicle communication can help reduce traffic accidents and increase road safety. The document discusses a vehicle-to-vehicle communication protocol for cooperative collision warning. It proposes that abnormal vehicles generate and transmit emergency warning messages to surrounding vehicles. The protocol aims to deliver these messages with low latency while supporting multiple abnormal vehicles. It uses a rate decreasing transmission algorithm and state transitions to prioritize warnings and eliminate redundant messages. The approach seeks to warn endangered vehicles in milliseconds while enabling communication from many abnormal vehicles.
PROJECT REPORT ON Home automation using by BluetoothAakashkumar276
This document summarizes a student project on developing a home automation system using an Arduino board and Bluetooth. The system allows users to control electrical appliances like fans and lights in their home remotely using an Android phone app. The app communicates with an Arduino Uno microcontroller via HC-05 Bluetooth module. The Arduino is connected to a 4-channel relay board to switch appliances on and off. The project aims to provide a low-cost solution for remote home control without needing physical switches or remote controls.
Gesture control robot using accelerometer pptRajendra Prasad
This document describes a gesture control robot project that uses an accelerometer. The aim of the project is to control the movements and directions of vehicles like airplanes, trains and cars using MEMS technology. The transmitter module uses an accelerometer, comparator, encoder and RF transmitter. The receiver module uses an RF receiver, decoder, microcontroller and actuator motor driver. The accelerometer provides analog data about movement in the X, Y and Z directions. The comparator and encoder convert the analog data for transmission. The RF modules transmit and receive the signals. The microcontroller processes the received data and the actuator converts it to control vehicle movements based on hand gestures detected by the accelerometer.
This document describes a hand gesture controlled wireless land rover. The project uses an accelerometer to detect hand gestures which are transmitted via RF to control motors and move the land rover in four directions. The key components are a microcontroller, accelerometer, encoder, transmitter, receiver, motor driver and motors. Programming is done using AVR studio to flash the microcontroller. Advantages include compact size and wireless control using natural hand gestures. Future enhancements could include onboard controls, image processing for improved sensitivity and gyro sensors.
The document discusses autonomous vehicles and their potential benefits and challenges. It defines autonomous vehicles as vehicles that can travel from one point to another without human supervision. It notes that human error causes over 90% of automobile accidents and that autonomous vehicles could help reduce accidents by taking human error out of driving. The document outlines some of the key technologies used in autonomous vehicles, such as LIDAR, GPS, radar, ultrasonic sensors, video cameras, and a central computer. It discusses companies working on autonomous vehicle technologies like Google, Mercedes Benz, and Tesla. It also discusses some of the pros and cons of autonomous vehicles.
This document describes a density-based traffic light controller system that uses sensors to measure traffic load and detect emergency vehicles. The system uses a microcontroller to automatically adjust signal timing based on traffic density during normal operation. When an emergency vehicle is detected, the system overrides normal timing to provide a green light in the direction of the emergency vehicle while blocking other lanes. The system is intended to help reduce traffic jams by adapting to current traffic conditions. Key components include IR sensors to detect vehicles, a microcontroller to control signal timing, and an RF receiver to detect emergency vehicles like ambulances.
IJRET: International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Speech recognition converts spoken words to text. The term "speech recognition" is sometimes used to refer to recognition systems that must be trained to a particular speaker, as is the case for most desktop recognition software.
1) The document describes an advanced wheelchair system that uses a sensor glove and voice recognition to allow disabled users to control the wheelchair and communicate through gestures and synthesized speech.
2) The sensor glove uses flex sensors and an accelerometer to detect finger positions and gestures, which are wirelessly transmitted to control the wheelchair's movement and display text and speech.
3) The system is intended to help physically disabled and deaf/mute users move independently and communicate more easily.
This is a smart wheelchair that uses voice and Bluetooth commands. It also includes temperature and heartbeat sensors for continuous monitoring by a doctor.
The document describes a voice-controlled wheelchair system that allows users to control the wheelchair's movements through spoken commands. The system uses a microphone to receive voice commands, which are sent to a voice recognition processor linked to a PIC microcontroller. The microcontroller controls motors to move the wheelchair forward, backward, left, right, and stop based on the commands. An infrared sensor is also used for obstacle avoidance. The system was prototyped using a small vehicle and further additions are needed for completion.
This document discusses touch screen technology. It provides a brief history, describing the development of early touch sensors in the 1970s and the growing popularity and use of touch screens. It then describes the main touch screen technologies - resistive, capacitive, and interruptive - and explains the basic components of a touch screen system, including the touch sensor, controller, and software driver. Finally, it outlines some key advantages of touch screen technology, such as its usefulness for public displays, retail/restaurant systems, customer self-service, control systems, computer-based training, and assistive technology applications.
Touchscreen technology has evolved significantly over the past few decades and become widespread. There are several main touchscreen technologies including resistive, surface acoustic wave, infrared, and capacitive. Each technology has advantages and disadvantages related to durability, transparency, response time, and sensitivity to environmental factors. Touchscreens are now commonly used in applications such as public kiosks, point-of-sale systems, mobile devices, and more to provide an intuitive user interface.
Touch screens use pressure sensitivity to detect touch locations on a display. Neonode developed zForce touch technology using infrared light beams to detect touches without needing glass overlays. zForce can recognize touches from fingers, gloves, and styluses. The main touch screen technologies are resistive, capacitive, projected capacitance, infrared, and Neonode's zForce technology. zForce is a lower-cost alternative to capacitive screens that can also recognize multi-touch inputs.
The document discusses the working of touchscreen technology. It describes four main types of touchscreen technologies: resistive, capacitive, surface acoustic wave, and infrared. It provides details on resistive touchscreens, including four-wire, eight-wire, six-wire, and seven-wire variations. It also explains the basic components and working of a touchscreen, including the touch sensor, controller, and software driver.
Green technology aims to develop and apply technologies that are environmentally friendly and resource efficient. It covers areas like green chemistry, green nanotechnology, green building, green IT, and green energy. The goals are sustainability, reducing waste and pollution, innovation, and economic viability. Green chemistry uses principles like prevention of waste, safer solvents and materials. Green nanotechnology minimizes environmental risks of nanotechnology. Green buildings use renewable materials and energy generation. Green IT improves energy efficiency of computing. Green energy develops power from renewable sources like solar and wind. Green marketing considers environmental impacts in the 4Ps of product, price, place and promotion. The triple bottom line model evaluates financial, social and environmental impacts and is linked to corporate social responsibility.
Voice morphing is a technique that modifies a source speaker's speech to sound like a target speaker. It does this by changing the pitch from the source speaker, like a male voice, to the target speaker, like a female voice. This is done by interpolating the linear predictive coding coefficients of the source and target signals. The pitch of the morphed signal can be positioned between the source and target by varying a constant value between 0 and 1. Applications include changing voices for security or entertainment purposes, but limitations include difficulties with voice detection and requiring extensive sound libraries.
White LEDs will revolutionize lighting in the next few years. They work using electroluminescence, where current passing through a semiconductor diode causes it to emit light. White light can be generated using blue LEDs coated with phosphor, or by combining red, green, and blue LEDs. Challenges include thermal management and electrical compatibility with mains power. Compared to other lights, LEDs have longer lifetimes, lower energy usage, and no toxic materials like mercury. While prices are currently high, LEDs are being used for applications like street lights and retail displays.
This document presents a project on developing a touch screen controlled wheelchair. The main objective is to design a microcontroller-based wheelchair that can be controlled via a touch screen for speed and direction. The wheelchair would move using a geared DC motor. It describes the components used, including an ATMega32 microcontroller, L293D motor driver, 50 RPM geared DC motor, and touch sensor. It provides details on how these components interface and work together to allow touch screen control of the wheelchair's movement and display status on an LCD screen. Potential applications are also discussed.
A wheelchair is truly a mobility orthosis.
A properly prescribed wheelchair can be a useful device in reintegrating a person with a disability into the community.
Artificial Intelligence for Speech Recognition - RHIMRJ Journal
Speech recognition software uses artificial intelligence techniques to transform spoken words into text. It has various applications, such as legal and medical transcription. Automatic speech recognition involves mapping acoustic speech signals to text. However, speech recognition also faces technical challenges, such as differentiating words in continuous speech and accounting for variations in accents and pronunciations. The document discusses the history and various applications of speech recognition technology.
This document provides an overview of a seminar on AI for speech recognition. It includes an introduction to AI and speech recognition, different models for speech recognition including HMM and DTW, applications of speech recognition in various domains, and challenges. The content list covers topics like performance of speech recognition systems, applications, and failures of speech recognition. Statistical models are important for decoding speech accurately. AI is recognized as an efficient method for speech recognition.
A survey on Enhancements in Speech Recognition - IRJET Journal
This document discusses enhancements in speech recognition and provides an overview of the history and basic model of speech recognition. It summarizes key enhancements researchers have made to improve speech recognition, especially in noisy environments. The basic model of speech recognition involves speech input, preprocessing using techniques like MFCCs, classification models like RNNs and HMMs, and output of a transcript. Researchers are working to develop robust speech recognition that can understand speech in any environment.
A Translation Device for the Vision Based Sign Language - ijsrd.com
Sign language is very important for people with hearing and speaking deficiencies, generally called deaf and mute. It is their only mode of communication for conveying messages, so it becomes very important for others to understand their language. This paper proposes a method for an application that helps recognize the different signs of Indian Sign Language. The images are of the palm side of the right and left hand and are loaded at runtime. The method has been developed with respect to a single user. Real-time images are captured first and stored in a directory; feature extraction is then performed on the most recently captured image to identify which sign was articulated by the user, using the SIFT (scale-invariant feature transform) algorithm. The comparison is performed afterwards, and the result is produced according to the key points matched between the input image and the image already stored in the directory or database for a specific letter; the outputs can be seen in the sections below. There are 26 signs in Indian Sign Language, one for each alphabet, of which the proposed algorithm gave 95% accurate results for 9 alphabets, with their images captured at every possible angle and distance.
Artificial Intelligence - An Introduction - acemindia
Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
Artificial Intelligence exists when a machine has human-like skills such as learning, reasoning, and solving problems.
Our speech to text conversion project aims to help the nearly 20% of people worldwide with disabilities by allowing them to control their computer and share information using only their voice. The system uses acoustic and language models with a speech engine to recognize speech and convert it to text. It can perform operations like opening calculator and wordpad. Speech recognition has applications in areas like cars, healthcare, education and daily life. Accuracy depends on factors like vocabulary size, speaker dependence, and speech type (isolated, continuous). The system aims to improve accessibility while reducing costs.
Speech Recognition in Artificial Intelligence - Ilhaan Marwat
Speech recognition, also known as automatic speech recognition, allows a computer to understand human voice commands. It works by converting analog audio to digital signals, separating speech from background noise, and analyzing phonetic patterns to recognize words. There are two main types - speaker-dependent software requires training a user's voice, while speaker-independent software can recognize any voice without training but is generally less accurate. Speech recognition has applications in fields like military operations, navigation systems, radiology, and call centers. It offers advantages for people with disabilities but also faces challenges from variations in human speech and filtering noise. The technology continues to improve with advances in processing power and algorithms.
Voice Recognition Based Automation System for Medical Applications and for Ph... - IRJET Journal
This document describes a voice recognition-based automation system for medical applications and physically challenged patients. The system uses a voice recognition model, an Arduino microcontroller, relays, LEDs, buzzers, and a motor to control an adjustable bed. Voice commands are recognized using techniques like MFCC and HMM and are used to control devices via the Arduino. The system is intended to allow paralyzed patients to control devices such as lights, alarms, and their bed using only voice commands, for increased independence. Testing showed the system can accurately recognize commands and control devices with 99% accuracy under suitable conditions.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses developments in voice recognition technology. It begins by introducing voice recognition software and its goal of allowing users to efficiently control computers through speech. It then outlines the objectives of the paper, which are to explain the importance of voice recognition, detail research and development, discuss existing problems and solutions, and analyze the impact on engineering and society. The document proceeds to describe how voice recognition systems work, including acoustic and language models. It discusses applications and importance in learning, consumer, corporate, and government uses. Finally, it outlines current flaws in voice recognition software and discusses improvement solutions being developed.
This document discusses feature extraction techniques for isolated word speech recognition. It begins with an introduction to digital speech processing and speech recognition models. The main part of the document compares two common feature extraction techniques: Mel Frequency Cepstral Coefficients (MFCC) and Relative Spectral (RASTA) filtering. MFCC allows signals to extract feature vectors and provides high performance but lacks robustness. RASTA filtering reduces the impact of noise in signals and provides high robustness by band-passing feature coefficients in both log spectral and spectral domains. The document provides details on the process of MFCC feature extraction, which involves steps like framing, windowing, fast Fourier transform, mel filtering, discrete cosine transform, and calculating
Speech recognition, also known as automatic speech recognition or computer speech recognition, allows computers to understand human voice. It has various applications such as dictation, system control/navigation, and commercial/industrial uses. The process involves converting analog audio of speech into digital format, then using acoustic and language models to analyze the speech and output text. There are two main types: speaker-dependent which requires training a model for each user, and speaker-independent which can recognize any voice without training. Accuracy is improving over time as technology advances.
Developing a hands-free interface to operate a Computer using voice command - Mohammad Liton Hossain
The main focus of this study is to help a handicapped person operate a computer by voice command. It can be used to operate entire computer functions on the user's voice commands. It makes use of speech recognition technology that allows the computer system to identify and recognize words spoken by a human using a microphone. This software is able to recognize spoken words and enables the user to interact with the computer. This interaction includes the user giving commands to the computer, which then responds by performing tasks, actions or operations depending on the commands given, for example: opening/closing a file, YouTube automation by voice, Google search by voice, making a note by voice, or performing a calculation with the calculator by voice.
In this paper we present the implementation of a speaker identification system using an artificial neural network with digital signal processing. The system is designed to work with text-dependent speaker identification for Bangla speech. The utterances of speakers are recorded for specific Bangla words using an audio wave recorder. The speech features are acquired by digital signal processing techniques. The identification of the speaker using frequency domain data is performed using the backpropagation algorithm. Hamming and Blackman-Harris windows are used to investigate better speaker identification performance. Endpoint detection of speech is developed in order to achieve high accuracy of the system.
Utterance Based Speaker Identification Using ANN - IJCSEA Journal
This document summarizes a research paper on speaker identification using artificial neural networks. The paper presents a speaker identification system that uses digital signal processing and ANN techniques. Speech features are extracted from utterances using FFT and windowing. These features are used to train a multi-layer perceptron network to classify speakers. The system was tested on Bangla speech and achieved accurate identification of speakers from their utterances.
The document provides an overview of automatic speech recognition, including: the process of speech recognition, which involves feature extraction from voice and the use of acoustic and language models; common types such as speaker-dependent and speaker-independent systems; applications in areas like dictation, in-car systems, and voice security; and advantages, like reducing errors, as well as challenges, such as filtering noise and accommodating various speaking styles.
VOICE OPERATED WHEELCHAIR
ABSTRACT
Many disabled people depend on others in their daily life, especially in getting from one place to another. Wheelchair users continuously need someone to help keep the wheelchair moving. A wheelchair control system helps handicapped persons become independent. The system is a wireless wheelchair control system that employs a voice recognition system for triggering and controlling all its movements. The wheelchair responds to voice commands from its user to perform any movement function. It integrates a microcontroller, a wireless microphone, a voice recognition processor and a motor control interface board to move the wheelchair. Using the system, users are able to operate the wheelchair by simply speaking into the wheelchair microphone. The basic movement functions include forward and reverse direction, left and right turns, and stop. The system uses a Microchip PIC16F877A microcontroller to control its operations. The controller communicates with the voice recognition processor to detect the word spoken and then determines the corresponding output command to drive the left and right motors. To accomplish this task, an assembly language program is written and stored in the controller's memory. In order to recognize the spoken words, the HM2007 voice recognition processor must be trained with the words spoken by the user who is going to operate the wheelchair.
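This control flow, a recognized word code in, motor drive signals out, can be sketched as follows. Note that the actual firmware is PIC assembly and the HM2007's output codes depend on the order in which words were trained, so the codes and (left, right) motor values below are illustrative assumptions, not the project's real mapping:

```python
# Illustrative mapping from a recognized-word code to left/right motor
# directions. The codes 1-5 and the (left, right) values are assumptions
# for illustration; real HM2007 output codes follow the training order.
COMMANDS = {
    1: ("FORWARD", (1, 1)),    # both motors forward
    2: ("REVERSE", (-1, -1)),  # both motors reverse
    3: ("LEFT",    (-1, 1)),   # left motor back, right forward: turn left
    4: ("RIGHT",   (1, -1)),   # left motor forward, right back: turn right
    5: ("STOP",    (0, 0)),    # both motors off
}

def drive(word_code):
    """Return (command name, (left, right) motor directions) for a code.
    Any unrecognized code falls back to STOP for safety."""
    return COMMANDS.get(word_code, ("STOP", (0, 0)))

if __name__ == "__main__":
    for code in (1, 3, 9):
        print(code, drive(code))
```

Falling back to STOP on an unknown code mirrors the fail-safe behavior a wheelchair controller needs: an unrecognized utterance must never produce motion.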
1.1 GENERAL OVERVIEW:
A wheelchair is a wheeled mobility device in which the user sits. The device is propelled either manually, by pushing the wheels with the hands, or via various automated systems. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with a walking disability often need to use a wheelchair. The "World Report on Disability", jointly presented by the World Health Organization (WHO) and the World Bank, says that there are 70 million handicapped people in the world. Unfortunately, the number of handicapped people increases day by day due to road accidents as well as diseases like paralysis. A handicapped person is dependent on others for day-to-day needs like transport, food and orientation. So a voice operated wheelchair is developed which operates automatically on commands from the handicapped user for movement.
1.2 LITERATURE SURVEY:
Many scientists and researchers have developed computer software that can recognize human voice commands in many languages, such as English, Japanese and Thai, and many techniques are used to recognize voice commands. [1]
Researchers transform the sound wave into a digital signal with a computer, then use the digital signal to manage different electronic equipment, for example 1) controlling robot arm movement, and 2) helping the handicapped to move a wheelchair. [2]
According to IJRET, in the paper on "Voice Operated Intelligent Wheelchair", MATLAB software is used for input signal processing, and the processed signal is given to the ARM processor LPC2138. [3]
In a recent IJRET paper, the input is given to the HM2007 IC, which is used for voice recognition. The HM2007 generates the output signal depending on the input from the user. [4]
1.3 THEORETICAL BACKGROUND:
Voice enabled devices are based on the principle of speech recognition: the process of electronically converting a speech waveform (the realization of a linguistic expression) into words (a best-decoded sequence of linguistic units). Converting a speech waveform into a sequence of words involves several essential steps:
i. A microphone picks up the speech signal to be recognized and converts it into an electrical signal. A modern speech recognition system also requires that the electrical signal be represented digitally by means of an analog-to-digital (A/D) conversion process, so that it can be processed with a digital computer or microprocessor.
ii. The speech signal is then analyzed (in the analysis block) to produce a representation consisting of salient features of the speech. The most prevalent feature of speech is derived from its short-time spectrum, measured successively over short-time windows of 20-30 milliseconds overlapping at intervals of 10-20 ms. Each short-time spectrum is transformed into a feature vector, and the temporal sequence of such feature vectors forms a speech pattern.
iii. The speech pattern is then compared to a store of phoneme patterns or models through a dynamic programming process in order to generate a hypothesis (or a number of hypotheses) of the phonemic unit sequence. (A phoneme is a basic unit of speech, and a phoneme model is a succinct representation of the signal that corresponds to a phoneme, usually embedded in an utterance.) A speech signal inherently has substantial variations along many dimensions.
Before we look at the design of the project, let us first understand speech recognition types and styles. Speech recognition is classified into two categories: speaker dependent and speaker independent.
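Step ii above, short-time spectral analysis, can be sketched in a few lines. This is a minimal illustration that assumes a plain DFT magnitude spectrum as the feature vector and uses toy frame sizes; real systems operate on 20-30 ms windows of sampled audio and typically use mel-cepstral features instead:

```python
import cmath
import math

def frames(signal, frame_len, hop):
    """Split a signal into overlapping frames (step ii: short windows
    overlapping at a fixed hop; lengths here are in samples)."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def spectrum(frame):
    """Magnitude spectrum of one Hamming-windowed frame via a plain DFT."""
    n = len(frame)
    windowed = [x * (0.54 - 0.46 * math.cos(2 * math.pi * k / (n - 1)))
                for k, x in enumerate(frame)]
    return [abs(sum(windowed[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)))
            for f in range(n // 2)]

# At a 16 kHz sampling rate, frame_len would be 320-480 samples (20-30 ms)
# with hop 160-320 (10-20 ms); tiny numbers keep this demo fast.
sig = [math.sin(2 * math.pi * 0.125 * t) for t in range(64)]
feats = [spectrum(f) for f in frames(sig, 16, 8)]
print(len(feats), len(feats[0]))  # number of frames, features per frame
```

The sequence `feats` is exactly the "speech pattern" the text describes: one feature vector per short-time window, ready to be matched against stored models.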
Speaker dependent systems are trained by the individual who will be using the system. These systems are capable of achieving a high command count and better than 95% accuracy for word recognition. The drawback of this approach is that the system responds accurately only to the individual who trained it. This is the most common approach employed in software for personal computers.
A speaker independent system is trained to respond to a word regardless of who speaks it. The system must therefore respond to a large variety of speech patterns, inflections and enunciations of the target word. The command word count is usually lower than for speaker dependent systems, but high accuracy can still be maintained within processing limits. Industrial applications more often need speaker independent voice systems, such as the AT&T system used in telephone networks. A more general form of voice recognition is available through feature analysis, and this technique usually leads to "speaker-independent" voice recognition.
Recognition Style
Speech recognition systems have another constraint concerning the style of speech they can recognize. There are three styles of speech: isolated, connected and continuous. Isolated speech recognition systems can only handle words that are spoken separately; this is the most common type of speech recognition system available today. The user must pause between each word or command spoken. The speech recognition circuit is set up to identify isolated words of 0.96 second length.
Connected recognition is a halfway point between isolated word and continuous speech recognition; it allows users to speak multiple words. The HM2007 can be set up to identify words or phrases 1.92 seconds in length, which reduces the word recognition vocabulary to 20.
Approaches to Statistical Speech Recognition
a. Hidden Markov model (HMM)-based speech recognition
Modern general-purpose speech recognition systems are generally based on hidden Markov models (HMMs). An HMM is a statistical model which outputs a sequence of symbols or quantities. One reason HMMs are used in speech recognition is that a speech signal can be viewed as a piecewise stationary, or short-time stationary, signal: over a short span in the range of 10 milliseconds, speech can be approximated as a stationary process. Speech can thus be thought of as a Markov model over many stochastic processes (known as states). Another reason HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use.
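The core HMM computation a recognizer performs is scoring an observation sequence against each word model; the word whose model gives the highest probability wins. A minimal sketch of that scoring (the forward algorithm) follows; the two-state model and all probability values are illustrative assumptions, not parameters from any real acoustic model:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under an HMM -- the quantity a recognizer compares across word models."""
    # Initialize with the probability of starting in each state and
    # emitting the first observation there.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all ways of arriving in state s, then emit o.
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

# Toy two-state model (all numbers are invented for illustration).
states = ("S1", "S2")
start = {"S1": 0.6, "S2": 0.4}
trans = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit = {"S1": {"a": 0.5, "b": 0.5}, "S2": {"a": 0.1, "b": 0.9}}

p = forward(("a", "b"), states, start, trans, emit)
print(round(p, 4))
```

In a real recognizer the observations would be the feature vectors from short-time analysis (quantized or modeled with Gaussians), and one such model would exist per word or phoneme.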
b. Neural network-based speech recognition
Another approach to acoustic modeling is the use of neural networks. They are capable of solving much more complicated recognition tasks, but do not scale as well as HMMs when it comes to large vocabularies. Rather than being used in general-purpose speech recognition applications, they tend to be applied where low quality, noisy data and speaker independence must be handled. Such systems can achieve greater accuracy than HMM-based systems, as long as there is training data and the vocabulary is limited. A more general approach using neural networks is phoneme recognition.
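As a toy illustration of the neural approach, the following trains a single logistic unit to separate two clusters of "phoneme" feature vectors. Real acoustic models are far larger, and the data here is invented purely for illustration:

```python
import math

def train(samples, labels, epochs=200, lr=0.5):
    """Train a single logistic unit by gradient descent -- the smallest
    possible 'neural network' classifier."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            g = p - y                       # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify a feature vector: 1 if the unit activates, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented 2-D "feature vectors" for two phoneme-like classes.
X = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
y = [0, 0, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])
```

The point of the sketch is the training loop: weights are adjusted automatically from labeled examples, which is why neural models can be retrained for noisy data or new speakers without hand-designed rules.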
c. Dynamic time warping (DTW)-based speech recognition
Dynamic time warping is an algorithm for measuring similarity between two sequences which may vary in time or speed. For instance, similarities in walking patterns would be detected even if in one video the person was walking slowly and in another walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics; indeed, any data which can be turned into a linear representation can be analyzed with DTW.
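The DTW idea can be written directly as a small dynamic program: each cell holds the cheapest cost of aligning the two prefixes, taking the best of an insertion, a deletion, or a match at every step:

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two sequences that may
    differ in length or speed (e.g. the same word spoken slow vs fast)."""
    INF = float("inf")
    # D[i][j]: cheapest cost of aligning a[:i] with b[:j].
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion: stretch b
                D[i][j - 1],      # deletion: stretch a
                D[i - 1][j - 1])  # match: advance both
    return D[len(a)][len(b)]

slow = [0, 0, 1, 1, 2, 2, 3, 3]   # a pattern spoken slowly
fast = [0, 1, 2, 3]               # the same pattern, twice as fast
print(dtw(slow, fast))            # 0.0: identical up to timing
```

In an isolated-word recognizer, `a` and `b` would be sequences of feature vectors (with a vector distance in place of `abs`), and the stored template with the smallest DTW distance to the spoken word is selected.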
1.4 NATURE OF PROBLEM:
Speech recognition is the process of finding an interpretation of a spoken utterance; typically, this means finding the sequence of words that were spoken. This involves preprocessing the acoustic signal to parameterize it into a more usable and useful form. The input signal must be matched against a stored pattern, and a decision is then made to accept or reject the match.
The different types of problems we are going to face in our project are enumerated below:
DIFFERENCES IN THE VOICES OF DIFFERENT PEOPLE:-
The voice of a man differs from the voice of a woman, which again differs from the voice of a baby. Different speakers have different vocal tracts and source physiology. Electrically speaking, the difference is in frequency: women and babies tend to speak at higher frequencies than men.
DIFFERENCES IN THE LOUDNESS OF SPOKEN WORDS:-
No two persons speak with the same loudness. One person will consistently
speak loudly while another speaks softly. Even if the same person speaks the
same word at two different instants, there is no guarantee that he will speak
it with the same loudness both times. Loudness also depends on the distance
at which the microphone is held from the user's mouth. Electrically speaking,
this difference is reflected in the amplitude of the generated digital
signal.
DIFFERENCES IN TIME:-
Even if the same person speaks the same word at two different instants of
time, there is no guarantee that he will speak exactly alike on both
occasions. Electrically speaking, this is a problem of difference in time,
i.e. indirectly in frequency.
DIFFERENCES IN THE PROPERTIES OF MICROPHONES:-
There may be problems due to differences in the electrical properties of
different microphones and transmission channels.
DIFFERENCES IN THE PITCH:-
Pitch and other source features such as breathiness and amplitude can be varied
independently.
OTHER PROBLEMS:-
We must make sure that the wheelchair does not go out of reach of the user's
voice. The output of the microphone is very small, and the output of the
voice recognition chip is not directly compatible with the input required by
the motors.
1.5 PROJECT OBJECTIVES:
To equip the present motorized wheelchair control system with a voice
command system. With this feature, disabled people, especially those with
severe disabilities who are unable to move their hands or other parts of
the body, are able to move their wheelchair around independently.
To simplify the operation of the motorized wheelchair, making it easier
and simpler for the disabled person to operate. With this simplified
operation, many disabled people have a chance to use the system with little
training.
To build a wheelchair control module and interface it with the speech
recognition board as well as a wireless microphone unit.
To build a motor control circuit, and add a motor driving mechanism to an
ordinary wheelchair.
To integrate all the modules together to produce a wireless voice-controlled
motorized wheelchair.
2.1: BLOCK DIAGRAM OF V.O.W.
Fig 2.2
2.2 DESCRIPTION OF BLOCK DIAGRAM:
HARDWARE:
The block diagram of the voice operated wheelchair consists of the following
blocks:
1) PIC microcontroller
2) Voice recognition block
3) Driver IC block
4) DC motors block
5) Battery
6) Battery charger
The description of these blocks is as follows.
1) MICROCONTROLLER PIC16F877
This is a 40-pin programmable interrupt microcontroller with a
high-performance RISC CPU. It is used for controlling the movement and
direction of the wheelchair
by controlling the two DC motors. The details of the microcontroller are
given in the following section. The microcontroller unit is the core of the
intelligent wheelchair. It interfaces the voice recognition unit and the
motor driver circuit. The main function of this unit is to receive the data
from the HM2007 IC through (D0-D7) and determine the right command to be
given to the driver circuit. The PIC16F877A microcontroller, with 33 I/O
lines, covers all the requisites for this wheelchair.
2) VOICE RECOGNITION IC HM2007
The voice recognition unit consists of the HM2007 IC. It is a Large Scale
Integration (LSI) circuit with an analog front end, voice analyzer, voice
recognition processor and functional control system embedded in a single
Complementary Metal Oxide Semiconductor (CMOS) chip. The unit also includes
an HM6264B IC, a 64 Kbit external static RAM used by the HM2007 to store the
trained words used in the recognition phase, a 4×3 keypad, an external
microphone and some other components, assembled together to build a
40-isolated-word recognition system. The HM2007 is operated in
speaker-dependent recognition mode. In this mode, the unit responds only to
the user who trained it; if another person needs to use the same system, a
new training phase must be applied. This mode reaches a high accuracy of more
than 95% for voice command recognition.
3) MOTOR DRIVER CIRCUIT
The L293 and L293D are quadruple high-current half-H drivers. The L293 is
designed to provide bidirectional drive currents of up to 1 A at voltages from 4.5 V
to 36 V. The L293D is designed to provide bidirectional drive currents of up to
600 mA at voltages from 4.5 V to 36 V. Both devices are designed to drive
inductive loads such as relays, solenoids, dc and bipolar stepping motors, as well
as other high-current/high-voltage loads in positive-supply applications.
4) DC MOTORS:
Two 12 V DC motors are used in this experiment.
5) POWER SUPPLY SECTION
This section consists of a rechargeable battery and deals with the power
requirements of the wheelchair: the DC motors, the microcontroller and the
other sections. The battery supplies the L293D driver IC, which drives the
DC motors. The microcontroller and IR section operate on a 5 V supply,
provided by an LM7805 5 V regulator IC that converts 12 V into 5 V.
SOFTWARE REQUIRED:
The MPLAB compiler is used for programming the microcontroller.
Embedded C is the programming language used.
Proteus 7 is used for simulation of the circuit.
2.3 SPECIFICATIONS:
Components:
Parts list for speech-recognition circuit
1. IC1 HM2007 IC
2. IC3 74LS373
3. IC4 and IC5 7448
4. XTAL 3.57 MHz
5. Speech-recognition PCB
6. 12-contact keypad
7. 7-segment displays
8. Microphone
9. 12 V battery clip
Parts list for interface circuit
1. Microcontroller PIC16F877A
2. L293D
3. 40 MHz crystal
4. DC motors
5. 7-pin connectors
COMPONENT SPECIFICATIONS:
1) HM2007 IC:
It is a 48-pin DIP IC.
Speaker independent mode was used.
A maximum of 40 words can be recognized (each up to 0.96 sec long), or 20
words of up to 1.92 sec each.
The microphone can be connected directly to the analog input.
64K SRAM, two 7-segment displays and their drivers were connected.
2) L293D driver IC:
Output current capability per driver: 600 mA
Pulse current: 1.2 A per driver
Package: 16-pin DIP
3) PIC Microcontroller 16F877A:
Instruction set: 35 instructions
Operating speed: DC to 20 MHz
Flash program memory: up to 8K × 14 words
Data memory: up to 368 × 8 bytes
EEPROM data memory: up to 256 × 8 bytes
Timers/Counters: 3 (two 8-bit, one 16-bit)
Operating voltage: 2.0 V to 5.5 V
A/D converter: 10-bit, 8-channel
4) DC Motors:
Operating voltage: 12 V
Speed: 100 rpm
Current rating: up to 2 A
SOFTWARE:
a) Flow chart for voice training and recognition
Fig 2.3: Flow chart for voice training and recognition. The chart shows the
training loop: start; press any number on the keypad; the memory number is
displayed on the 7-segment display; press the train (#) key; the LED blinks
while the chip listens; speak the word; if the word is accepted, the LED is
turned off and the next word can be trained; end.
2.4 VOICE TRAINING AND RECOGNITION ALGORITHM:
Clear the memory by pressing 99 *.
Enter the location number to be trained.
After entering the number the LED will turn off.
Number will be displayed on the display.
Next press # to train.
The chip will now listen to the voice input and LED will turn ON.
Now, speak the word you want to train into the microphone.
The LED should blink momentarily.
This is the sign that the voice has been accepted.
Continue doing this for different words.
Repeat the trained word into the microphone.
If the word is correctly recognized, its location number is displayed.
The error codes are:
55- word too long.
66-word too short.
77-word no match.
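The result codes above lend themselves to a small decoding helper. The following C sketch maps the byte returned after a recognition attempt to its meaning; the function name `decode_result` is hypothetical, not part of the HM2007 interface itself.

```c
#include <string.h>

/* Interpret the byte the HM2007 presents after a recognition attempt.
 * Values 55, 66 and 77 are the error codes listed above; any other value
 * is taken to be the memory location of the matched word. */
const char *decode_result(unsigned char code)
{
    switch (code) {
    case 55: return "word too long";
    case 66: return "word too short";
    case 77: return "word no match";
    default: return "recognized";   /* code is the trained word's location */
    }
}
```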
Fig 2.4
DESCRIPTION OF FLOW CHART FOR V.O.W
Start the process.
Select the mode of operation.
For voice mode, give the voice input command.
If the voice input is ‘FORWARD’, execute the ‘FORWARD’ loop and the
wheelchair will move forward; otherwise go to the next check.
If the voice input is ‘BACKWARD’, execute the ‘BACKWARD’ loop and the
wheelchair will move backward; otherwise go to the next check.
If the voice input is ‘RIGHT’, execute the ‘RIGHT’ loop and the wheelchair
will turn right; otherwise go to the next check.
If the voice input is ‘LEFT’, execute the ‘LEFT’ loop and the wheelchair
will turn left; otherwise go to the next check.
Otherwise execute the STOP loop and the wheelchair will stop.
For manual mode, use the keypad: press 01 for FORWARD, 02 for BACKWARD,
03 for RIGHT, 04 for LEFT, 05 for STOP.
3.1 Pin diagram of HM2007:
Fig 3.1
Description of pin diagram:
The pin diagram of the speech processor HM2007 is shown in the figure. The
heart of this module is the HM2007 voice processor IC, manufactured by Hualon
Microelectronic Corporation, which controls the overall voice recognition
process. The data sheet is in Appendix A. This processor is a 48-pin
single-chip CMOS voice recognition LSI circuit with on-chip analog front end,
voice analysis, recognition processing and system control functions. It uses
a 3.57 MHz crystal as a clock to synchronize its operation. A
40-isolated-word voice recognition system can be composed of an external
microphone, a keypad, 64K SRAM external memory and some other components.
The chip has two selections of command-word length capability:
1) A 40-word vocabulary with a maximum length of 0.96 seconds for each word.
2) A 20-word vocabulary with a maximum length of 1.92 seconds for each word.
Other features include 'dependent' and 'independent' mode voice recognition
capabilities. The speaker-dependent system is trained by the individual
who will be using the system [5]. It is capable of achieving a high command count
and better than 95% accuracy for word recognition. The disadvantage of this
approach is that the system only responds accurately to the individual who trained
the system. A speaker-independent system is trained to respond to a word
regardless of who speaks. Therefore the system must respond to a large variety of
speech patterns, inflections, and enunciations of the target words. The command
word count is usually lower than that of the speaker dependent system, however,
high accuracy can still be maintained within processing limits. Combined with a
microprocessor, an intelligent recognition system can be built.
3.2 Voice Recognition Module:
Fig 3.2: Voice recognition module
Description:
Voice recognition, or speech recognition, is generally defined as the
process of converting a speech or voice signal into a sequence of words by
means of an algorithm implemented as a computer program. It is the ability of
a machine or program to recognize spoken words by comparing the spoken
commands with a sound sample. In this technology the analog signal (voice) is
converted into a digital signal using an analog-to-digital converter. This
digital signal is then compared to the digital database of the system, which
has
been stored with digital speech patterns. The voice recognition module used
in this project is the SR-06 from Images SI Inc., USA. It converts the analog
voice signal to a digital output. The circuit is made up of four main blocks:
1. Speech recognition processor IC HM2007
2. Input device, a keypad used for word training
3. Digital display board, used to display the word number
4. External memory SRAM IC
3.4 DESCRIPTION OF SYSTEM CIRCUIT DIAGRAM:
CONNECTIONS:
For voice recognition, IC HM2007 is used. The D-bus of the HM2007 is
connected to Port B of the PIC16F877 microcontroller, which is configured as
an input port. Port D of the PIC16F877 is configured as an output port. Pins
RD0/PSP0 (19), RD1/PSP1 (20), RD2/PSP2 (21) and RD3/PSP3 (22) are connected
to pins 3A (10), 1A (2), 4A (15) and 2A (7) of the L293D respectively. Pins
1Y (3), 2Y (6), 3Y (11) and 4Y (14) are the output pins of the L293D; the
two DC motors are connected to these pins.
WORKING:
There are two modes provided by the HM2007.
1) MANUAL MODE:
In this operating mode a keypad, SRAM and other components are connected to
the HM2007 to build a simple recognition circuit. The SRAM is 8 KB.
(a) Power on: When power is applied, the HM2007 starts its initialization
process. If the WAIT pin is low, the IC performs a memory check to verify
that the SRAM is working; if the pin is high, the memory check is skipped.
After initialization, the IC moves to recognition mode.
(b) Recognition mode:
With the WAIT pin high, RDY is set low and the HM2007 is ready to accept
voice input. When voice input is detected, RDY goes high and the IC begins
its recognition process. Afterwards, the result appears on the D-bus of the
HM2007 with the DEN pin active. The result is the binary form of the memory
location of the voice input.
This binary output is given to Port B of the PIC16F877 microcontroller,
which compares the output from the HM2007 with the values specified in the
program. If the two values match, the microcontroller executes the
corresponding subroutine. The four Port D pins are connected to the input
pins of driver IC L293D, and the motors rotate accordingly.
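The comparison step described above can be sketched as a pure C function. Location 01H is FORWARD, as stated in the results section; mapping locations 02H-05H to the remaining commands follows the manual-mode keypad numbering (02 BACKWARD, 03 RIGHT, 04 LEFT, 05 STOP) and is an assumption of this sketch.

```c
/* Sketch of the microcontroller's comparison step: the byte read from
 * Port B (the HM2007 result, i.e. the memory location of the matched
 * word) is mapped to a drive command. */
typedef enum { CMD_STOP, CMD_FORWARD, CMD_BACKWARD, CMD_RIGHT, CMD_LEFT } command_t;

command_t command_from_location(unsigned char loc)
{
    switch (loc) {
    case 0x01: return CMD_FORWARD;   /* location 01H trained as FORWARD */
    case 0x02: return CMD_BACKWARD;  /* assumed keypad-order mapping    */
    case 0x03: return CMD_RIGHT;
    case 0x04: return CMD_LEFT;
    default:   return CMD_STOP;      /* 05H and anything unrecognized   */
    }
}
```

Stopping on any unrecognized value is a safe default for a mobility device: an error code (55, 66, 77) from the chip then halts the chair instead of driving it.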
COMBINATIONS FOR MOTOR DRIVER IC L293D:
Pin 2 = logic 1 and pin 7 = logic 0: CLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 1: ANTICLOCKWISE DIRECTION
Pin 2 = logic 0 and pin 7 = logic 0: NO ROTATION
Pin 2 = logic 1 and pin 7 = logic 1: NO ROTATION
In a very similar way, the motor on the right-hand side can be operated
through pins 15 and 10.
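The truth table above can be captured as a small helper function. In this C sketch (type and function names are illustrative), bit 1 of the return value is the first input of the L293D pair (pin 2 for the left motor, pin 15 for the right) and bit 0 is the second (pin 7 or pin 10):

```c
/* Desired rotation for one motor. */
typedef enum { ROT_STOP, ROT_CW, ROT_CCW } rotation_t;

/* Return the two L293D input logic levels for one motor:
 * bit 1 = first input pin, bit 0 = second input pin. */
unsigned char l293d_inputs(rotation_t r)
{
    switch (r) {
    case ROT_CW:  return 0x2;  /* in1 = 1, in2 = 0: clockwise     */
    case ROT_CCW: return 0x1;  /* in1 = 0, in2 = 1: anticlockwise */
    default:      return 0x0;  /* in1 = in2 = 0: no rotation      */
    }
}
```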
3.6 MOTION OF WHEELCHAIR:
The main part of the design is controlling the motion of the wheelchair.
Four motion conditions are considered: moving forward, moving in reverse,
turning left and turning right. For speed, the user may use a slow or fast
speed command.
The system starts by applying the supply voltage to the speech recognition
circuit. In the fast condition the system supplies a higher current to the
motors.
If the user does not want the wheelchair to move at high speed, the slow
speed command reduces the current supplied to the motors. The possible
wheelchair directions and movements are as follows:
Forward: Both motors run in the forward direction.
Reverse: Both motors run in the reverse direction.
Left: The left motor is stopped and the right motor runs in the forward
direction.
Right: The right motor is stopped and the left motor runs in the forward
direction.
Stop: Both motors are stopped.
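The direction table above amounts to a simple mapping from wheelchair command to per-motor state, sketched here in C (type and function names are illustrative, not the project's firmware):

```c
/* State of one motor, and the pair of states for the whole chair. */
typedef enum { M_STOP, M_FWD, M_REV } motor_t;
typedef struct { motor_t left, right; } drive_t;

typedef enum { W_FORWARD, W_REVERSE, W_LEFT, W_RIGHT, W_STOP } wheel_cmd_t;

/* Map a wheelchair command to left/right motor states, exactly as in
 * the direction table above. */
drive_t drive_for(wheel_cmd_t c)
{
    switch (c) {
    case W_FORWARD: return (drive_t){ M_FWD,  M_FWD  };
    case W_REVERSE: return (drive_t){ M_REV,  M_REV  };
    case W_LEFT:    return (drive_t){ M_STOP, M_FWD  }; /* left motor stopped  */
    case W_RIGHT:   return (drive_t){ M_FWD,  M_STOP }; /* right motor stopped */
    default:        return (drive_t){ M_STOP, M_STOP };
    }
}
```

Turning by stopping one wheel while driving the other is the differential-drive steering the text describes; each returned pair would then be translated into L293D pin levels for the corresponding motor.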
EXPERIMENTAL OBSERVATION FOR TESTING:
Table 2
COMMAND GIVEN | OBSERVED MOTION (SPEAKER 1 / SPEAKER 2 / SPEAKER 3) | ACCURACY OF RESPONSE
FORWARD  | FORWARD / FORWARD / FORWARD    | 100%
BACKWARD | BACKWARD / BACKWARD / BACKWARD | 100%
RIGHT    | RIGHT / NO MOTION / RIGHT      | 66.66%
LEFT     | FORWARD / LEFT / LEFT          | 66.66%
STOP     | STOP / STOP / STOP             | 100%
DESCRIPTION OF RESULT:
1) Table no. 1 shows the output result present on Port B of the PIC16F877
microcontroller. When we give the voice input ‘FORWARD’ for memory location
01H, IC HM2007 assigns this voice input to that memory location and provides
its binary form, which appears on Port B of the microcontroller. Likewise,
for all the other voice inputs we get the corresponding output.
2) Table no. 2 shows the actual testing results of the voice operated
wheelchair. The output is speaker independent. For the ‘FORWARD’ command,
all three speakers got ‘FORWARD’ motion of the wheelchair. For the
‘BACKWARD’ command, all speakers got ‘BACKWARD’ motion. So the accuracy is
100% for FORWARD and BACKWARD. For the ‘RIGHT’ command, speaker 2 got
‘NO MOTION’; for the ‘LEFT’ command, speaker 1 got ‘FORWARD’ motion. So the
accuracy for ‘LEFT’ and ‘RIGHT’ is 66.66%. The ‘STOP’ command achieved 100%
accuracy.
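The accuracy figures in Table 2 are simply the number of correct responses divided by the number of speakers, times 100; e.g. 2 correct out of 3 gives 66.66%. As a trivial helper (illustrative only):

```c
/* Accuracy as used in Table 2: correct responses / total trials * 100. */
double accuracy_percent(int correct, int total)
{
    return 100.0 * (double)correct / (double)total;
}
```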
3.9 ADVANTAGES
1) A handicapped person without legs can use this and become independent.
2) Reduces manpower.
3) User friendly.
3.10 CONCLUSION:
With the HM2007, the efficiency of detecting voice commands and controlling
the wheelchair is significantly increased. This voice operated wheelchair
will assist handicapped persons in becoming self-dependent for movement, for
which they are otherwise dependent on others most of the time. A person with
disabled legs and arms can use this wheelchair efficiently if he is able to
speak.
3.11 FUTURE SCOPE:
The wheelchair speed control system is targeted to operate both indoors and
outdoors. This means it has to be noise-proofed and weather-proofed. It must
have the ability to recognize the command words even in the presence of
background noise.
4.1 REFERENCES:
[1] S. D. Suryawanshi, J. S. Chitode and S. S. Pethakar, “Voice Operated
Intelligent Wheelchair”, International Journal of Advanced Research in
Computer Science and Software Engineering.
[2] M. Prathyusha, K. S. Roy and Mahaboob Ali Shaik, “Voice Based Direction
and Speed Control of Wheel Chair for Physically Challenged”, International
Journal of Engineering Trends and Technology (IJETT).
[3] Gabriel Pires and Urbano Nunes, “A Wheelchair Steered through Voice
Commands”, Journal of Intelligent and Robotic Systems.
[4] Richard Simpson, “Smart Wheelchairs: A Literature Survey”, Journal of
Rehabilitation Research & Development.