The document describes an integrated speech-based perception system for an assistive robotic bathing system. The system combines speech recognition with front-end modules such as voice activity detection and beamforming to achieve robust communication with the user, and uses a depth camera for motion planning of the robotic manipulator. It can be adapted to a user's preferences and needs through spoken commands, and was experimentally tested with different users, successfully executing bathing scenarios.
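The summary names voice activity detection (VAD) as one of the speech front-end modules. As a rough illustration of the idea only, not the paper's implementation, here is a minimal energy-threshold VAD sketch in Python; the frame length and threshold are assumed values.

```python
# Minimal energy-based VAD sketch. Frame length and threshold are
# illustrative assumptions, not the paper's parameters.
import numpy as np

def frame_energy_vad(samples: np.ndarray, sample_rate: int = 16000,
                     frame_ms: int = 30, threshold_db: float = -35.0):
    """Return one boolean per frame: True where speech-like energy is detected."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    flags = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        rms = np.sqrt(np.mean(frame ** 2) + 1e-12)
        flags.append(20 * np.log10(rms) > threshold_db)
    return flags

# Example: one second of quiet noise with a louder "speech" burst in the middle.
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.005, 16000)
audio[6000:10000] += rng.normal(0, 0.2, 4000)
print(frame_energy_vad(audio))
```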
This document discusses a framework for improving access to virtual reality (VR) environments for citizens with disabilities. It proposes techniques ranging from simple additions to VRML files to scripts that can aid in creating more accessible VR worlds. These techniques aim to improve the usability and accessibility of VR technologies for people with sensory, physical, or cognitive impairments. The framework also provides initial authoring strategies to help make VRML content more accessible. The goal is to leverage VR to enhance the quality of life and independence of citizens with disabilities.
Research has shown that, for some age groups, fingerprint quality can impact the performance of biometric systems. A desirable feature of biometrics is that they are suitable for use across the population. This applied study examines the performance of a fingerprint recognition system in a healthcare environment. Anecdotal evidence suggested that front-line healthcare workers may produce lower-quality images because frequent hand washing removes oils from the skin; during training, individuals are told to add oil to their fingers by wiping oil from their foreheads to improve the quality of the resulting fingerprints. In the healthcare population the authors tested, compared to two general populations (collected on optical and capacitance sensors), there was a significant difference in skin oiliness but not in image quality. There was, however, a difference between the healthcare and non-healthcare groups in the performance of the fingerprint algorithm when compared against the capacitance dataset.
Acoustic event characterization for service robot using convolutional networks (IJECE, IAES)
This paper presents and discusses the creation of a sound event classification model using deep learning. In the design of service robots, it is necessary to include routines that improve the response of both the robot and the human being throughout the interaction. These tasks are critical when the robot is taking care of children, the elderly, or people in vulnerable situations. Certain dangerous situations are difficult for an autonomous system to identify and assess, and yet users' lives may depend on these robots. Acoustic signals correspond to events that can be detected at a great distance, are usually present in risky situations, and can be continuously sensed without incurring privacy risks. For the creation of the model, a customized database is structured with seven categories, allowing the system to categorize a problem and eventually enabling the robot to provide the necessary help. These audio signals are processed to produce graphical representations consistent with human acoustic identification. These images are then used to train three convolutional models identified as high-performing on this type of problem. The three models are evaluated with specific metrics to identify the best-performing one. Finally, the results of this evaluation are discussed and analyzed.
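The preprocessing step described here, turning an audio event into a graphical representation for a CNN, is commonly done with a log-mel spectrogram. The sketch below assumes this standard approach; the paper's exact parameters (mel bands, sample rate) are unknown, so the values here are assumptions.

```python
# Hedged sketch: audio file -> normalized log-mel spectrogram "image".
# n_mels and sr are assumed values, not the paper's configuration.
import numpy as np
import librosa

def audio_to_logmel(path: str, sr: int = 22050, n_mels: int = 64) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel, ref=np.max)
    # Normalize to [0, 1] so the 2-D array can be treated as a grayscale image.
    return (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-9)

# Arrays like this would then feed standard image CNNs for 7-way classification.
```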
Scheme for motion estimation based on adaptive fuzzy neural network (TELKOMNIKA Journal)
Many applications of robots in collaboration with humans require the robot to follow the person autonomously. Depending on the tasks and their context, this type of tracking can be a complex problem. The paper proposes and evaluates a control principle for autonomous service robots, with prediction and adaptation capabilities, for the problem of following people without the use of cameras (a high level of privacy) and with a low computational cost. A robot can easily carry a wide set of sensors for different variables; one of the classic sensors on a mobile robot is the distance sensor. Some of these sensors can collect enough information to precisely define the positions of objects (and therefore people) around the robot, providing objective and quantitative data that are useful for a wide range of tasks, in particular autonomous person following. This paper uses the estimated distance from a person to a service robot to predict the person's behavior, and thus improve performance in autonomous person-following tasks. For this, we use an adaptive fuzzy neural network (AFNN), which comprises a fuzzy neural network based on Takagi-Sugeno fuzzy inference and an adaptive learning algorithm that updates the membership functions and the rule base. The validity of the proposal is verified both in simulation and on a real prototype. The average RMSE of prediction over 50 laboratory tests, with different people acting as the target, was 7.33.
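To make the Takagi-Sugeno inference at the core of the AFNN concrete, here is a minimal zero-order inference step: each rule has a Gaussian membership over the measured distance and a constant consequent, and the prediction is the membership-weighted average. The centers, widths, and consequents below are toy values; the paper learns them adaptively.

```python
# Illustrative zero-order Takagi-Sugeno inference step (toy parameters).
import numpy as np

def ts_predict(x: float, centers, widths, consequents) -> float:
    """Weighted average of rule consequents, weighted by Gaussian memberships."""
    w = np.exp(-((x - np.asarray(centers)) ** 2) / (2 * np.asarray(widths) ** 2))
    return float(np.sum(w * consequents) / (np.sum(w) + 1e-12))

# Three rules over person-robot distance in meters: "near", "medium", "far".
centers = [0.5, 1.5, 3.0]
widths = [0.4, 0.6, 1.0]
consequents = [0.45, 1.4, 2.9]   # predicted next distance per rule
print(ts_predict(1.2, centers, widths, consequents))
```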
The document proposes using the Leap Motion controller and a 6-DOF Jaco robotic arm to allow for intuitive and adaptive robotic arm manipulation. An algorithm would allow optimum mapping between a user's hand movements tracked by the Leap Motion controller and movements of the Jaco arm. This would allow a more natural human-computer interaction and smooth manipulation of the robotic arm, adapting to hand tremors or shakes. The goal is to enhance quality of living for people with upper limb problems and support them in performing essential daily tasks.
Identifier of human emotions based on convolutional neural network for assist... (TELKOMNIKA Journal)
This paper proposes a solution for continuous, real-time prediction of a human user's emotional state from characteristics identified in facial expressions. In robots whose main task is the care of people (children, sick or elderly people), it is important to maintain a close human-machine relationship and a rapid response of the robot to the actions of the person under care. We propose to increase the robot's level of intimacy, and its response to specific user situations, by identifying in real time the emotion reflected on the person's face. This solution is integrated with the research group's person-tracking algorithms for use on an assistant robot. The strategy involves two processing stages: the first detects faces using HOG and a linear SVM, while the second identifies the emotion in the face using a CNN. The strategy was fully tested in the laboratory on our robotic platform, demonstrating high performance with low resource consumption. In various controlled laboratory tests with different people, who deliberately displayed a given emotion, the scheme identified emotions with a success rate of 92%.
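The two-stage pipeline named here (HOG + linear SVM detection, then CNN classification) maps directly onto stock tooling: dlib's frontal face detector is a HOG + linear SVM implementation. The sketch below assumes that pairing; `emotion_cnn` is a placeholder for whatever trained model the paper used, and the 48x48 input size is an assumption.

```python
# Sketch of the two-stage pipeline: HOG+SVM face detection, then CNN emotion
# classification. `emotion_cnn` and the input size are assumptions.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # HOG + linear SVM under the hood

def classify_emotions(frame_bgr, emotion_cnn):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for rect in detector(gray):
        face = gray[rect.top():rect.bottom(), rect.left():rect.right()]
        face = cv2.resize(face, (48, 48))   # assumed CNN input size
        results.append(emotion_cnn(face))   # e.g., softmax over emotion classes
    return results
```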
Technology and Disability 24 (2012) 303–311
DOI 10.3233/TAD-120361
IOS Press
Service robots in elderly care at home: Users’
needs and perceptions as a basis for concept
development
Lucia Piginia,∗, David Facalb, Lorenzo Blasic and Renzo Andricha
aFondazione Don Carlo Gnocchi Onlus, Milano, Italy
bFundación Instituto Gerontológico Matia – INGEMA, San Sebastian, Spain
cHewlett-Packard Italiana S.r.l., Milano, Italy
Abstract.
Background: Service robots may offer an innovative assistive solution to improve the quality of life of frail elderly people, by assisting them in specific situations identified as relevant to maintaining independence.
Objective: This paper describes the results of qualitative and quantitative research based on a user-centered methodology, carried out within the EU-funded project "Multi-Role Shadow Robotic System for Independent Living" (SRS), aiming to generate user requirements and realistic usage scenarios that maximize alignment with users' needs, perceptions, feelings and rights.
Methods: Qualitative and quantitative research, based on focus groups (59 participants) and questionnaires (129 respondents), was carried out in three countries: Italy, Spain and Germany. The survey involved prospective end-users (elderly people and the family members who care for them), caregivers, and geriatric experts.
Results: Although elderly people encounter difficulties in many activities of daily life, a semi-autonomous, remotely controlled and self-learning service robot was judged an interesting solution only in some circumstances. Monitoring and managing emergency situations, and helping with reaching, fetching and carrying objects that are too heavy or positioned in unreachable places, are tasks for which robotic support was widely accepted, while tasks involving direct physical contact between the person and the robot were not appreciated. Relatives of the elderly could act as remote operators; however, family psychological burden and time restrictions should also be considered.
Conclusions: A tele-operated robotic system may be of help to frail elderly people. In certain cases this solution may be effective only in conjunction with a 24-hour professional Service Centre able to manage tele-operation when relatives are not available. This survey adds further knowledge to previous literature on the subject, compares the potential users' and the professionals' views, and helps identify potentially successful applications of tele-operated robots in the care of elderly people living at home. The results of the present study generated specific requirements and first versions of concrete usage scenarios, enabling designers and technologists to start a first development phase of the SRS concept.
Keywords: Service robots, tele-operation, elderly people, caregivers, user requirements, user centered design
1. Introduction
Several robotic research proje ...
The document discusses using an interactive robot to intelligently monitor patients' health at home. It describes developing a system that lets the robot understand context in order to determine the right time each day to ask a patient to complete a symptoms questionnaire. The system considers sensor data and parameters such as the patient's presence and facial recognition. Implementing this system on a NAO robot and enhancing its features raised the system's average precision from 76.8% by 18.2%. Future research will focus on understanding context within the semantic web to further increase effectiveness.
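As a loose sketch of the kind of context check the summary implies (the predicate names and rule are illustrative assumptions, not the paper's API), the scheduling decision could be expressed like this:

```python
# Hedged sketch: ask the daily questionnaire only when the patient is present,
# identified, and has not answered yet today. Predicate names are hypothetical.
from datetime import date

def should_ask_questionnaire(patient_present: bool, face_recognized: bool,
                             last_answered: date | None) -> bool:
    already_done_today = last_answered == date.today()
    return patient_present and face_recognized and not already_done_today

print(should_ask_questionnaire(True, True, None))  # True: prompt the patient
```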
This document proposes incorporating a control system into an existing ubiquitous computing environment (SEDINU system) to allow for manipulation of physical objects. It describes modifying the SEDINU system's database (RBACSoft) to include a control system entity with attributes linking it to autonomous areas and physical objects. A communication protocol is proposed using the USB 2.0 standard for the host services and control system to interact. The control system would allow changing the states of physical objects from mobile devices based on requests from the host services. Future work could include handling multiple concurrent mobile access and migrating to USB 3.0.
DYNAMIC AND REAL-TIME MODELLING OF UBIQUITOUS INTERACTION (cscpconf)
This document discusses modeling real-time interaction between a user and a ubiquitous system using dynamic Petri net models. It proposes using Petri nets to model a user's activity as a set of elementary actions. Elementary actions are modeled as Petri net structures that are then composed together through techniques like sequence, parallelism, etc. to form an overall model of user-system interaction. The models can be dynamically adapted based on changes to the user's context. OWL-S ontology is used to describe the dynamic aspects of the Petri net models, especially real-time composition of models. Simulation results validate the approach of dynamically modeling user-system interaction through mutation of Petri net models.
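To make the composition idea concrete, here is a toy Petri net sketch: an elementary action is a transition between places, and sequential composition fuses the output place of one net with the input place of the next (here, by sharing a place name). This mirrors the described approach only in spirit; the paper's models carry OWL-S annotations and support richer composition operators.

```python
# Toy Petri net with sequential composition via shared place names.
class PetriNet:
    def __init__(self, places, transitions):
        self.places = set(places)             # place names
        self.transitions = dict(transitions)  # name -> (pre_place, post_place)

    def sequence(self, other: "PetriNet") -> "PetriNet":
        """Compose self ; other: a shared place name acts as the fusion point."""
        return PetriNet(self.places | other.places,
                        {**self.transitions, **other.transitions})

grasp = PetriNet({"idle", "holding"}, {"grasp": ("idle", "holding")})
lift = PetriNet({"holding", "raised"}, {"lift": ("holding", "raised")})
task = grasp.sequence(lift)   # elementary actions composed into one activity
print(task.transitions)
```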
The document summarizes a research project that developed Skinput, a technique for using the human body as an input surface. A wearable armband containing piezoelectric sensors is used to capture vibrations from finger taps on the arm. The sensors are tuned to specific resonant frequencies to detect relevant low-frequency signals transmitted through soft tissues and bones. Machine learning classifiers analyze the acoustic features from the sensors to determine the location and timestamps of finger taps, allowing the arm to serve as a touch input surface. Initial proof-of-concept applications are demonstrated.
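The classification stage described for Skinput (acoustic features from the armband's channels fed to a supervised classifier that predicts the tapped location) can be sketched as follows. The feature set and classifier here are illustrative stand-ins; the original work used an SVM over a richer feature set.

```python
# Hedged sketch: per-channel amplitude features -> SVM tap-location classifier.
# Toy data throughout; the real system uses armband sensor recordings.
import numpy as np
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel statistics for one tap window (channels x samples)."""
    return np.concatenate([window.max(axis=1),
                           window.std(axis=1),
                           np.abs(np.diff(window, axis=1)).mean(axis=1)])

rng = np.random.default_rng(1)
X = np.array([features(rng.normal(size=(5, 256))) for _ in range(40)])
y = rng.integers(0, 4, size=40)   # 4 tap locations (toy labels)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```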
Increasingly sophisticated biometric methods are being used for a variety of applications in which accurate authentication of people is necessary. Because all biometric methods require humans to interact with a device of some type, effective implementation requires consideration of human factors issues. One such issue is the training needed to use a particular device appropriately. In this paper, we review human factors issues associated with biometric devices in general and focus more specifically on the role of training.
Hand Gesture Recognition System for Human-Computer Interaction with Web-Cam (ijsrd.com)
This paper presents a comparative study of existing hand gesture recognition systems and proposes a new approach to gesture recognition that offers an easy, cheaper alternative to input devices such as the mouse, using static and dynamic hand gestures for interactive computer applications. Despite the increased attention to such systems, there are still limitations in the literature: most applications impose constraints such as specific lighting conditions, use of a particular camera, making the user wear a multi-coloured glove, or requiring large amounts of training data. Hand gestures provide an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). The proposed interface is simple enough to run with an ordinary webcam and requires little training.
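A minimal webcam hand-segmentation step in the spirit of this approach (ordinary webcam, no gloves, little training) is sketched below. The HSV skin-color range is a common heuristic and an assumption here, not the paper's calibration.

```python
# Hedged sketch: HSV skin-color mask -> largest contour as the hand candidate.
import cv2
import numpy as np

def largest_skin_contour(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```

Further steps (convex hull, convexity defects) would then turn the contour into a static or dynamic gesture label.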
Telemedicine is a new and promising application in the field of medical science. Medical consulting and remote medical procedures or examinations have become more successful with the development of interactive audiovisual systems and mobile robotic platforms. Tele-presence requires real-time video communication; a video device fixed to a mobile robotic platform limits the viewing angle, restricting the remote doctor's ability to obtain correct visual information about the patient. To address this problem, this paper presents a flexible robotic arm with a vision system. A thirteen-degree-of-freedom robotic arm with extending and retracting capability enables the remote doctor to steer the attached video device toward the target area on the patient. The rising need for robotic applications in dynamic, unstructured environments is driving demand for dexterous end-effectors that can handle the wide variety of tasks and objects encountered in such environments. The human hand is a very complex grasping tool that can handle objects of different sizes and shapes. The proposed gripper has a wide working space relative to its physical dimensions and can deal with objects under real working conditions; this capability is achieved by using a force/torque sensor and by properly controlling and coordinating the gripper and the carrying arm. Note that with this control structure it is also very simple to connect the arm/gripper system over the Internet to other computational resources or robotic devices, for example to emulate tele-operation tasks.
Robotic assistance for the visually impaired and elderly people (The IJES)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability.
Intelligent Robotics Navigation System: Problems, Methods, and Algorithm (IJECE, IAES)
This paper sets out to supplement new studies with a brief and comprehensible review of advanced developments in navigation systems, from single robots to multi-robot and swarm systems, from the particular perspective of insights taken from biological systems. Inspiration is drawn from nature, by observing humans and social animals, which is believed to be very beneficial for this purpose. Intelligent navigation systems are developed based on an individual's characteristics or a social animal's biological structure. The discussion focuses on how simple agent structures can produce flexible and effective behaviour when navigating in unstructured surroundings. The combination of navigation systems and biologically inspired approaches has attracted considerable attention, making it an important research area in intelligent robotic systems. Overall, the paper explores implementations resulting from simulations and from the embodiment of robots operating in real environments.
Wearable sensor network for lower limb angle estimation in robotics applications (TELKOMNIKA Journal)
In human-robot interaction, sensors are essential for guaranteeing stability and high performance in real-time applications. Nonetheless, accurate, portable sensors for robots usually have high costs and little flexibility for processing signals with free software. Therefore, we propose a wearable sensor network to measure lower limb angular position in human-robot interaction systems. The methodology consisted of implementing a wireless network using low-cost devices, verifying the design requirements, and validating the result via a proof of concept. The design requirements include low loss of information, real-time communication, and sensor fusion to estimate angular position from a gyroscope and an accelerometer. The sensor network has a client-server architecture based on ESP8266 microcontrollers and uses the 802.11 b/g/n standard to transmit angular velocity and acceleration measurements. Furthermore, we use the user datagram protocol (UDP) to operate in real time with a sample time of 10 ms. Finally, a proof of concept shows the system's effectiveness: a Kalman filter estimates the angular position of the foot, shin, thigh, and hip. Results indicate that the implemented sensor network is suitable for real-time robotic applications.
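A hedged sketch of the receiving side follows: UDP packets carrying a gyro rate and an accelerometer-derived angle are fused into a joint-angle estimate with a scalar Kalman-style filter. The packet format, port, and noise parameters are assumptions, not the paper's specification; only the 10 ms sample time comes from the abstract.

```python
# Hedged sketch: UDP receiver + 1-D Kalman-style gyro/accel fusion per joint.
# Packet layout (two little-endian floats), port, and q/r values are assumed.
import socket
import struct

class AngleKalman1D:
    def __init__(self, q=0.01, r=0.5):
        self.angle, self.p, self.q, self.r = 0.0, 1.0, q, r

    def update(self, gyro_rate, accel_angle, dt=0.010):  # 10 ms sample time
        self.angle += gyro_rate * dt           # predict: integrate gyro rate
        self.p += self.q
        k = self.p / (self.p + self.r)         # correct: accelerometer angle
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1 - k)
        return self.angle

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))                   # assumed port
kf = AngleKalman1D()
while True:                                    # runs until interrupted
    data, _ = sock.recvfrom(16)
    gyro_rate, accel_angle = struct.unpack("<ff", data[:8])
    print(f"estimated angle: {kf.update(gyro_rate, accel_angle):.2f} deg")
```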
Low Cost Self-assistive Voice Controlled Technology for Disabled People (IJMER)
The document describes a proposed voice-controlled wheelchair and home automation system for disabled individuals. The system uses a microcontroller connected to a voice recognition module to recognize spoken commands. The commands control the motion of an electric wheelchair and operation of home appliances like lights. The system was tested for accuracy of wheelchair motion and home appliance control in response to voice commands, achieving 80% accuracy in a silent environment. The goal is to allow disabled people to control a wheelchair and home devices independently using only their voice.
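The control flow described, a recognized word mapped to a wheelchair or appliance action, reduces to a small dispatch table. The command vocabulary and actions below are assumed examples, not the system's actual command set.

```python
# Illustrative voice-command dispatch. Vocabulary and actions are assumptions.
ACTIONS = {
    "forward":  lambda: print("motors: both forward"),
    "back":     lambda: print("motors: both reverse"),
    "left":     lambda: print("motors: right forward, left stop"),
    "right":    lambda: print("motors: left forward, right stop"),
    "stop":     lambda: print("motors: stop"),
    "light on": lambda: print("relay: lamp on"),
}

def handle_command(recognized: str) -> None:
    action = ACTIONS.get(recognized.strip().lower())
    action() if action else print(f"unrecognized command: {recognized!r}")

handle_command("Forward")   # -> motors: both forward
```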
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORK (cscpconf)
This paper reports the results of artificial neural networks applied to robot navigation tasks. Machine learning methods have proven useful in many complex problems concerning mobile robot control; in particular, we deal with the well-known strategy of navigating by "wall-following". In this study, a probabilistic neural network (PNN) structure was used for the navigation tasks. The PNN result was compared with the results of Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks, and with previously reported studies on robot navigation using the same dataset. The PNN gave the best classification accuracy, 99.635%, on this dataset.
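A probabilistic neural network is, in essence, a Parzen-window classifier: each training point contributes a Gaussian kernel, and the class with the highest summed kernel response wins. The minimal sketch below illustrates this general idea on toy data; the smoothing parameter sigma and the labels are assumptions, not the paper's setup.

```python
# Minimal PNN (Parzen-window) classifier sketch on toy data.
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    k = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian kernel responses
    classes = np.unique(y_train)
    scores = [k[y_train == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])   # e.g., "turn" vs "follow wall" (toy labels)
print(pnn_classify(np.array([0.85, 0.75]), X, y))  # -> 1
```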
IRJET - Human Hand Movement Training with Exoskeleton ARM (IRJET Journal)
This document describes a proposed system to develop a humanoid robotic hand that can mimic the motions of a human hand in real-time. The system uses a MEMS sensor to capture motions of the human hand. This data is sent to an Arduino microcontroller via wireless sensor networks. The microcontroller then sends commands to the actuators in the robotic hand to replicate the motions. The goal is to create an inexpensive, automatically controlled robotic hand that is wirelessly interfaced to mimic natural human finger and hand motions for applications such as assistance for stroke patients. Hardware components include the MEMS sensor, Arduino, wireless modules, motors, and a robotic hand model while software includes programs for the Arduino.
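The core mapping step, scaling an orientation reading from the MEMS sensor into an actuator command for the corresponding robotic joint, can be sketched as below. The angle ranges and the command string format are assumptions for illustration; the actual firmware runs on the Arduino.

```python
# Hedged sketch: clamp and linearly map a sensor angle to a servo command.
def angle_to_servo(sensor_deg: float, in_min=-90.0, in_max=90.0,
                   out_min=0, out_max=180) -> int:
    sensor_deg = max(in_min, min(in_max, sensor_deg))
    span = (sensor_deg - in_min) / (in_max - in_min)
    return int(out_min + span * (out_max - out_min))

# Each received wireless packet would become one servo command, e.g. a
# hypothetical "S3:135" string for joint 3 written to the Arduino over serial.
print(angle_to_servo(45.0))  # -> 135
```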
Internet of Things-based telemonitoring rehabilitation system for knee injuries (BEEI journal)
Rehabilitation engineering, one of the active research areas in biomedical science, needs further investigation and improvement. The process of rehabilitation, whether after a stroke or after ligament or accident-related injuries, is commonly based on clinical assessment tools, executed either through self-reported (home-based) treatment or through observer-rated therapy. However, people with reduced mobility (e.g., stroke, surgical, and ligament patients) can benefit from rehabilitation programs only if effective and appropriate assistive tools are provided. In this paper, a new Internet of Things (IoT)-based telemonitoring system, Knee-Rehab, is introduced for the rehabilitation of knee injuries. The proposed system is a real-time rehabilitation and monitoring framework designed as a portable, home-based, online instrument comprising bio-mechanical, bio-instrumentation and IoT-based elements. The system lets patients rest at home after surgery or physical treatment, do their rehab exercises, and receive suggestions from their advisors, who gain the ability to monitor the situation over the exercise period and to propose necessary medication or activities for the patients, based on their current status. Experimental measurements show the high accuracy achieved by the developed system in terms of the monitored knee joint angle, with a maximum error of 3.5° compared to manual goniometer measurements.
A Methodological Variation For Acceptance Evaluation Of Human-Robot Interacti... (Heather Strinden)
The document proposes using a breaching experiment combined with ethnographic observation to study social acceptance of robots in public spaces. This method involves introducing a robot into a pedestrian area to observe people's natural first reactions. The researchers conducted a field trial in a public place using this method, gathering feedback via questionnaires. They found that breaching experiments can be a useful way to investigate social acceptance of robots in real-world settings.
This document discusses using virtual reality technology to help blind and visually impaired people. It reviews mobility systems that use sensors and haptic feedback to navigate obstacles. It also examines accessibility of e-learning, wearable finger-braille interfaces, audio/tactile computer games, and web-based information systems. The goal is to enhance computer skills and independent living for disabled individuals through innovative virtual reality techniques.
Human muscle rigidity identification by human-robot approximation characteris... (Venu Madhav)
In the healthcare system and on Internet of Things (IoT) platforms, medical care robotics is becoming one of the fastest-expanding areas of robot technology. The integration of robotics and human knowledge makes it possible to identify human muscle rigidity from healthcare data obtained from wearable sensors. On an IoT platform, electromyography (EMG) is a method used for evaluating and tracking the electrical activity of muscles. Transferring human muscle rigidity to a robot enables it to adopt resistive management strategies in a useful and effective way while carrying out physical interaction activities in unstructured surroundings. The major challenge is to overcome the unpredictability of physical interaction, allowing a robot to grasp individual behaviour with the adaptability and versatility of muscles. Therefore, this article proposes a Human-Robot Approximation Characteristics Framework (HRACF) for developing physiological communication between humans and robots. HRACF permits robots to learn the differential resistive abilities of muscles from human demonstrations. Pulses collected via EMG are used to retrieve human arm muscle rigidity during activity demonstrations. The characteristics of motion and rigidity are modelled concurrently using an estimation and approximation model with logistic regression on data obtained from IoT devices. The analysed human arm muscle rigidity is then connected to the robot's impedance regulator. The HR model uses an optimized resistive approximator to estimate the robot's control variables and to keep tracking the quoted pathways during interaction. The relationship between motion data and rigidity data is systematically encoded in the HR model. HRACF makes it possible to detect uncertainties through space and time, enabling the robot to meet a rigidity specification of 98 Nm/rad with an error rate of 0.15% during physical interaction.
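The signal path outlined here, raw EMG to a muscle-activation envelope to a stiffness value that an impedance controller could track, can be sketched as below. The linear envelope-to-stiffness mapping and its gains are illustrative assumptions, not the HRACF model; only the 98 Nm/rad upper bound is taken from the abstract.

```python
# Hedged sketch: EMG -> rectified smoothed envelope -> stiffness in Nm/rad.
import numpy as np

def emg_envelope(emg: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Rectify and low-pass a raw EMG trace (simple exponential smoothing)."""
    env, out = 0.0, []
    for sample in np.abs(emg):
        env = alpha * sample + (1 - alpha) * env
        out.append(env)
    return np.array(out)

def stiffness_from_envelope(env: np.ndarray, k_min=5.0, k_max=98.0) -> np.ndarray:
    """Map normalized muscle activation to joint stiffness (assumed linear)."""
    norm = env / (env.max() + 1e-9)
    return k_min + norm * (k_max - k_min)

rng = np.random.default_rng(2)
emg = rng.normal(0, 1, 500) * np.linspace(0.2, 1.0, 500)  # rising effort
print(stiffness_from_envelope(emg_envelope(emg))[-1])      # stiffness target
```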
This article proposes a Human-Robot Approximation Characteristics Framework (HRACF) to enable communication between humans and robots to identify human muscle rigidity. The framework uses electromyography data from wearable sensors to detect electrical activity in muscles and extract rigidity data during physical activities. It then transfers this rigidity data to robots to help them better understand resistive capabilities of muscles. The HRACF models motion and rigidity characteristics simultaneously and codes their relationship to help robots meet rigidity specifications and minimize errors during physical human-robot interactions.
RoboTalk is a general robot control framework that allows different robots to be controlled using the same commands. It provides a network-enabled interface that is independent of the underlying robot controller libraries. The framework uses a client-server model with four communication modes - direct, delay, playback, and broadcast - to support controlling robots over networks with varying quality. The architecture includes specifications for robot configuration, commands, and communication that provide abstraction between the client and specific robot implementations.
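The four communication modes lend themselves to mode-tagged command messages. The sketch below is purely illustrative: the summary does not specify RoboTalk's actual wire format, so the JSON layout and field names here are assumptions.

```python
# Hypothetical mode-tagged command message in the spirit of RoboTalk's four
# communication modes. The message layout is an assumption, not the real format.
import json
from enum import Enum

class Mode(Enum):
    DIRECT = "direct"        # execute immediately
    DELAY = "delay"          # queue, tolerating network lag
    PLAYBACK = "playback"    # replay a recorded command sequence
    BROADCAST = "broadcast"  # send to all connected robots

def make_command(mode: Mode, robot: str, command: str, **params) -> bytes:
    return json.dumps({"mode": mode.value, "robot": robot,
                       "command": command, "params": params}).encode()

print(make_command(Mode.DIRECT, "arm01", "move_joint", joint=2, angle=30.0))
```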
The document discusses the history and development of humanoid robots. It describes how the term "robot" was coined in 1921 and provides a timeline of important milestones in humanoid robotics from the 15th century to present day. Examples of prominent humanoid robots are given for Sony, Murata, PAL Technology, and Honda. Potential applications of humanoid robots include prosthetics, entertainment, space exploration, and performing dangerous tasks. The challenges of developing humanoid robots with artificial intelligence and human-like cognition and movement are also addressed.
This document proposes incorporating a control system into an existing ubiquitous computing environment (SEDINU system) to allow for manipulation of physical objects. It describes modifying the SEDINU system's database (RBACSoft) to include a control system entity with attributes linking it to autonomous areas and physical objects. A communication protocol is proposed using the USB 2.0 standard for the host services and control system to interact. The control system would allow changing the states of physical objects from mobile devices based on requests from the host services. Future work could include handling multiple concurrent mobile access and migrating to USB 3.0.
DYNAMIC AND REALTIME MODELLING OF UBIQUITOUS INTERACTIONcscpconf
This document discusses modeling real-time interaction between a user and a ubiquitous system using dynamic Petri net models. It proposes using Petri nets to model a user's activity as a set of elementary actions. Elementary actions are modeled as Petri net structures that are then composed together through techniques like sequence, parallelism, etc. to form an overall model of user-system interaction. The models can be dynamically adapted based on changes to the user's context. OWL-S ontology is used to describe the dynamic aspects of the Petri net models, especially real-time composition of models. Simulation results validate the approach of dynamically modeling user-system interaction through mutation of Petri net models.
The document summarizes a research project that developed Skinput, a technique for using the human body as an input surface. A wearable armband containing piezoelectric sensors is used to capture vibrations from finger taps on the arm. The sensors are tuned to specific resonant frequencies to detect relevant low-frequency signals transmitted through soft tissues and bones. Machine learning classifiers analyze the acoustic features from the sensors to determine the location and timestamps of finger taps, allowing the arm to serve as a touch input surface. Initial proof-of-concept applications are demonstrated.
Increasingly sophisticated biometric methods are being used for a
variety of applications in which accurate authentication of people is necessary.
Because all biometric methods require humans to interact with a device of some
type, effective implementation requires consideration of human factors issues.
One such issue is the training needed to use a particular device appropriately.
In this paper, we review human factors issues in general that are associated with
biometric devices and focus more specifically on the role of training.
Hand Gesture Recognition System for Human-Computer Interaction with Web-Camijsrd.com
This paper represents a comparative study of exiting hand gesture recognition systems and gives the new approach for the gesture recognition which is easy cheaper and alternative of input devices like mouse with static and dynamic hand gestures, for interactive computer applications. Despite the increase in the attention of such systems there are still certain limitations in literature. Most applications require different constraints like having distinct lightning conditions, usage of a specific camera, making the user wear a multi-coloured glove or need lots of training data. The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). This interface is simple enough to be run using an ordinary webcam and requires little training.
Telemedicine is a new and hopeful application in the field of medical science. Medical consulting and remote medical procedures or examinations became more successful with the development of interactive audiovisual systems and distant movable robotic platforms. Tele-presence requires real time video communication. Fixed mounting of video device on mobile robotic platform limits the video projection at positive vision angle, thus restricting the remote doctor in obtaining the better and correct visual information of the patient. To avoid the problem, this paper presents a flexible robotic arm with vision system. A thirteen degree of freedom robotic arm with extending and retracting quality enables the remote doctor to clear the attached video device on robotic arm to object area on patient. The rising necessity for robotic applications in dynamic unstructured environments is inspiring the need for dexterous end-effectors which have the wide variety of tasks and objects encountered in these environments. The human hand is a very complex grasping tool that can handle objects of different sizes and shapes. This hand gripper is the wide working space compared with its physical dimensions and the capability to deal with objects in working environment conditions. This capability is achieved by using force/torque sensor and by properly controlling and coordinating the gripper and the carrying arm Note that with this control structure, it is also very simple to connect the arm/gripper system using Internet to other computational resources or robotic devices, for example to emulate tele-operation tasks.
Robotic assistance for the visually impaired and elderly peopletheijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Robotic assistance for the visually impaired and elderly peopletheijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Intelligent Robotics Navigation System: Problems, Methods, and Algorithm IJECEIAES
This paper set out to supplement new studies with a brief and comprehensible review of the advanced development in the area of the navigation system, starting from a single robot, multi-robot, and swarm robots from a particular perspective by taking insights from these biological systems. The inspiration is taken from nature by observing the human and the social animal that is believed to be very beneficial for this purpose. The intelligent navigation system is developed based on an individual characteristic or a social animal biological structure. The discussion of this paper will focus on how simple agent’s structure utilizes flexible and potential outcomes in order to navigate in a productive and unorganized surrounding. The combination of the navigation system and biologically inspired approach has attracted considerable attention, which makes it an important research area in the intelligent robotic system. Overall, this paper explores the implementation, which is resulted from the simulation performed by the embodiment of robots operating in real environments.
Wearable sensor network for lower limb angle estimation in robotics applicationsTELKOMNIKA JOURNAL
In human-robot interaction, sensors are relevant in guaranteeing stability and high performance in real-time applications. Nonetheless, accuracy and portable sensors for robots usually have high costs and little flexibility to process signals with free software. Therefore, we propose a wearable sensor network to measure lower limb angular position in human-robot interaction systems. The methodology employed to achieve the aim consisted in implementing a wireless network using low-cost devices, verifying design requirements, and making a validation via a proof of concept. The requirements to design the network include low loss of information, real-time communication, and sensor fusion to estimate the angular position using a gyroscope and accelerometer. Hence, the sensor network developed has a client-server architecture based on ESP8266 microcontrollers. In addition, this network uses the standard 802.11 b/g/n to transmit angular velocity and acceleration measures. Furthermore, we implement the user datagram protocol (UDP) protocol to operate in real-time with a sample time of 10 ms. Finally, we implement a proof of concept to show the system’s effectiveness. Thus, we use the Kalman filter to estimate the angular position of the foot, shin, thigh, and hip. Results indicate that the implemented sensor network is suitable for real-time robotic applications.
Low Cost Self-assistive Voice Controlled Technology for Disabled PeopleIJMER
The document describes a proposed voice-controlled wheelchair and home automation system for disabled individuals. The system uses a microcontroller connected to a voice recognition module to recognize spoken commands. The commands control the motion of an electric wheelchair and operation of home appliances like lights. The system was tested for accuracy of wheelchair motion and home appliance control in response to voice commands, achieving 80% accuracy in a silent environment. The goal is to allow disabled people to control a wheelchair and home devices independently using only their voice.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcscpconf
This paper reports results of artificial neural network for robot navigation tasks. Machine learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”. In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the PNN is the best classification accuracy with 99,635% accuracy using same dataset.
IRJET- Human Hand Movement Training with Exoskeleton ARMIRJET Journal
This document describes a proposed system to develop a humanoid robotic hand that can mimic the motions of a human hand in real-time. The system uses a MEMS sensor to capture motions of the human hand. This data is sent to an Arduino microcontroller via wireless sensor networks. The microcontroller then sends commands to the actuators in the robotic hand to replicate the motions. The goal is to create an inexpensive, automatically controlled robotic hand that is wirelessly interfaced to mimic natural human finger and hand motions for applications such as assistance for stroke patients. Hardware components include the MEMS sensor, Arduino, wireless modules, motors, and a robotic hand model while software includes programs for the Arduino.
Internet of Things-based telemonitoring rehabilitation system for knee injuriesjournalBEEI
Rehabilitation engineering, as one of the active research areas in biomedical science, needs further investigations and improvements. The process of rehabilitation, whether after a stroke, ligament, or accident-related injuries, is commonly based on clinical assessment tools, which can be executed, either by self-reported (home-based) treatment or through observer-rated therapy. However, people with reduced mobility (e.g., stroke, surgical, and ligament patients) can benefit from rehabilitation programs only if effective and appropriate assistive tools are provided. In this paper, a new Internet of Things (IoT)-based telemonitoring system is introduced for knee injuries’ rehabilitation (Knee-Rehab). The proposed system is a real-time rehabilitation and monitoring framework designed to be a portable, home-based, and online-based instrument comprised of bio-mechanical, bio-instrumentation and IoT-based elements. The system helps patients to rest at home after surgeries or physical treatment, do their rehab-exercises, and receive suggestions form their advisors, which gain the ability to monitor the situation over the exercising time and propose necessary medication/activities to be followed by the patients accordingly, based on their current status. The experimental measurements show the high accuracy achieved by the developed system in terms of monitored knee joint angle, where the maximum error is 3.5° compared to manual goniometer measurements.
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
LEARNING OF ROBOT NAVIGATION TASKS BY PROBABILISTIC NEURAL NETWORKcsandit
This paper reports results of artificial neural network for robot navigation tasks. Machine
learning methods have proven usability in many complex problems concerning mobile robots
control. In particular we deal with the well-known strategy of navigating by “wall-following”.
In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks.
The PNN result was compared with the results of the Logistic Perceptron, Multilayer
Perceptron, Mixture of Experts and Elman neural networks and the results of the previous
studies reported focusing on robot navigation tasks and using same dataset. It was observed the
PNN is the best classification accuracy with 99,635% accuracy using same dataset.
A Methodological Variation For Acceptance Evaluation Of Human-Robot Interacti...Heather Strinden
The document proposes using a breaching experiment combined with ethnographic observation to study social acceptance of robots in public spaces. This method involves introducing a robot into a pedestrian area to observe people's natural first reactions. The researchers conducted a field trial in a public place using this method, gathering feedback via questionnaires. They found that breaching experiments can be a useful way to investigate social acceptance of robots in real-world settings.
This document discusses using virtual reality technology to help blind and visually impaired people. It reviews mobility systems that use sensors and haptic feedback to navigate obstacles. It also examines accessibility of e-learning, wearable finger-braille interfaces, audio/tactile computer games, and web-based information systems. The goal is to enhance computer skills and independent living for disabled individuals through innovative virtual reality techniques.
Human muscle rigidity identification by human-robot approximation characteris...Venu Madhav
In the health care system and the Internet of Things (IoT) platform, medical care robotics is becoming one of the fastest expanding areas of robot technology. The integration of robotics and human knowledge identifies human muscle rigidity from healthcare data obtained from wearable sensors. In an IoT platform, electromyography is a method used for evaluating and tracking the electrical activity of muscles. Transferring human muscle rigidity to a robot enables the robot to obtain resistive management initiatives in a useful and effective way while carrying out physical interaction activities in unstructured surroundings. The major challenge is to overcome the unpredictability of physical interaction, allowing a robot to understand individual behaviour with the adaptability and versatility of muscles. Therefore, in this article, a Human-Robot Approximation Characteristics Framework (HRACF) has been proposed for developing physiological communication between humans and robots. HRACF permits robots to learn the differential resistive abilities of muscles from human demonstrations. The pulses collected from electromyography are used to retrieve human arm muscle rigidity during activity demonstration. The characteristics of motion and rigidity are concurrently modelled using an estimation and approximation model with logistic regression, obtained via IoT devices. The analysed human arm muscle rigidity is then connected to the robot impedance regulator. The HR model uses an optimized resistive approximator to estimate the control variables of the robot and drive it to track the reference pathways during interaction. The relationship between motion data and rigidity data is systematically encoded in the HR model. HRACF makes it possible to detect uncertainties through space and time, which enables the robot to meet a rigidity specification of 98 Nm/rad with an error rate of 0.15% during physical interaction.
This article proposes a Human-Robot Approximation Characteristics Framework (HRACF) to enable communication between humans and robots to identify human muscle rigidity. The framework uses electromyography data from wearable sensors to detect electrical activity in muscles and extract rigidity data during physical activities. It then transfers this rigidity data to robots to help them better understand resistive capabilities of muscles. The HRACF models motion and rigidity characteristics simultaneously and codes their relationship to help robots meet rigidity specifications and minimize errors during physical human-robot interactions.
RoboTalk is a general robot control framework that allows different robots to be controlled using the same commands. It provides a network-enabled interface that is independent of the underlying robot controller libraries. The framework uses a client-server model with four communication modes - direct, delay, playback, and broadcast - to support controlling robots over networks with varying quality. The architecture includes specifications for robot configuration, commands, and communication that provide abstraction between the client and specific robot implementations.
The document discusses the history and development of humanoid robots. It describes how the term "robot" was coined in 1921 and provides a timeline of important milestones in humanoid robotics from the 15th century to the present day. Examples of prominent humanoid robots from Sony, Murata, PAL Technology, and Honda are given. Potential applications of humanoid robots include prosthetics, entertainment, space exploration, and performing dangerous tasks. The challenges of developing humanoid robots with artificial intelligence and human-like cognition and movement are also addressed.
This document summarizes a technology report on software architectures for supervised autonomy in a centaur-like humanoid robot for exploration and mobile manipulation in rough terrain. The report describes how the robot Momaro can autonomously perceive and map unknown, uneven terrain using a 3D laser scanner. It plans navigation paths and executes them using its omnidirectional drive while adapting to slopes using its four legs. Momaro can also perceive objects with cameras, estimate their pose, and manipulate them autonomously with its two arms. A ground station allows operators to specify missions, monitor progress, reconfigure the robot, and teleoperate it. The system was demonstrated at the DLR SpaceBot Camp 2015 where the robot solved all tasks.
The document provides information on software quality assurance and testing topics. It includes definitions of software quality assurance, differences between types of testing (static vs dynamic, client/server vs web applications), quality assurance activities, why testing cannot ensure quality, and more. FAQs cover topics such as prioritizing defects, establishing a QA process, and differences between QA and testing. The document is a collection of technical FAQs for software QA engineers and testers.
This document discusses content networking and the issues surrounding delivering content over the internet and intranets. It notes that organizations increasingly rely on web-enabled applications to deliver resources to users. A key issue is that users will wait no more than 5-6 seconds for a page to load before giving up. Additionally, web content is continually changing and growing in volume, with an estimated 700 million online users by 2005. The document examines technologies for reliably and quickly delivering different types of content, such as text, images, sounds and videos.
The document provides an overview of the key differences between GSM and CDMA wireless networks. It discusses differences in their radio spectrum usage, network architectures, radio channel technologies, call processing, and evolution to 3G. The network architectures for both include mobile stations, base stations, base station controllers, and mobile switching centers. However, GSM uses TDMA while CDMA uses direct sequence spread spectrum. CDMA allows frequency reuse in all cells while GSM requires frequency assignments between adjacent cells. The document also compares their historical development and technical parameters.
Improving software testing efficiency using automation methods by thuravupala...Ravindranath Tagore
This document presents a project report on developing a test automation framework for software testing of network elements. The project aims to create test automation scripts for Tellabs' 8800 series router to evaluate advantages like cost savings, increased productivity and testability. The report describes developing test automation using TCL scripts, executing tests on the 8800 router, and analyzing results to validate that automation improves efficiency over manual testing. It also covers an economic justification analysis to market the automation framework.
Srinivasan Desikan presented on automation beyond testing tools. A survey found that test automation is underutilized, with only 27% of respondents satisfied. Difficulties included lack of skilled people and time/effort required. Automation focused mainly on regression testing easy cases rather than more complex cases. Tools used were often commercial with little open source use. The quality of automated test suites was low with manual retesting often still needed. Automation could be expanded to areas beyond test execution like test management and reporting to further improve testing.
This document provides policies and procedures for configuration, change, and release management. It contains four main sections: an introduction that defines key terms and processes, policies that establish the overall policy statement and process owners, processes that describe the steps for identifying, approving, implementing, testing, and tracking changes, and procedures that provide detailed instructions for each process. The overall goal is to formally manage all changes to information systems through documentation, approval, testing, and auditing in order to maintain system integrity and stability.
This document provides an overview of software testing. It discusses different types of testing like black-box testing and white-box testing. Black-box testing treats the software as a black box without any knowledge of internal implementation, while white-box testing has access to internal data structures and algorithms. The document also covers topics like functional vs non-functional testing, defects and failures, compatibility, and the roles of different teams involved in software testing.
This document compares three subjective video quality testing methodologies: single stimulus continuous quality evaluation (SSCQE), double stimulus continuous quality scale (DSCQS), and double stimulus comparison scale (DSCS). It summarizes six original subjective experiments that used these different methodologies. It then describes a secondary SSCQE experiment that combined video clips from the original experiments to compare the ratings obtained from the different methodologies.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method outperforms other deep learning classification algorithms at finding smart grid intrusions. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Introduction – e-waste – definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Integrated Speech-based Perception System for User Adaptive Robot
Motion Planning in Assistive Bath Scenarios
Athanasios C. Dometios, Antigoni Tsiami, Antonis Arvanitakis, Panagiotis Giannoulis,
Xanthi S. Papageorgiou, Costas S. Tzafestas, and Petros Maragos
School of Electrical and Computer Engineering, National Technical University of Athens, Greece
{athdom,paniotis,xpapag}@mail.ntua.gr, {antsiami,ktzaf,maragos}@cs.ntua.gr
Abstract— Elderly people have augmented needs in performing bathing activities, since these tasks require body flexibility. Our aim is to build an assistive robotic bath system, in order to increase the independence and safety of this procedure. Towards this end, the expertise of professional carers regarding bathing sequences and appropriate motions has to be adopted, in order to achieve natural physical human-robot interaction. The integration of communication and verbal interaction between the user and the robot during the bathing tasks is a key issue for such a challenging assistive robotic application. In this paper, we tackle this challenge by developing a novel integrated real-time speech-based perception system, which will provide the necessary assistance to frail senior citizens. This system is suitable for installation and use in a conventional home or hospital bathroom. We employ both a speech recognition system with sub-modules, to achieve smooth and robust human-system communication, and a low-cost depth camera for end-effector motion planning. With a variety of spoken commands, the system can be adapted to the user's needs and preferences. The washing commands instructed by the user are executed by a robotic manipulator, demonstrating the progress of each task. The smooth integration of all subsystems is accomplished by a modular and hierarchical decision architecture organized as a Behavior Tree. The system was experimentally tested by the successful execution of scenarios from different users with different preferences.
I. INTRODUCTION
Most advanced countries tend to be aging societies, with the percentage of people with special needs for nursing attention being already significant and due to grow. Health care experts are called to support these people in the performance of Personal Care Activities such as showering, dressing and eating [1], [2], inducing a great financial burden both on families and on insurance systems. In recent years, health care robotic technology has been developing towards more flexible and adaptable robotic systems designed for both in-house and clinical environments, aiming at supporting disabled and elderly people with special needs.
There have been very interesting developments in this field, with either static physical interaction [3]–[5] or mobile solutions [6]–[8]. Most of these focus exclusively on a body part, e.g. the head, and support people in performing personal care activities with rigid manipulators.
The first four authors contributed equally to this work.
This research work has received funding from the European Union's Horizon 2020 research and innovation programme under the project "I-SUPPORT" with grant agreement No 643666 (http://www.i-support-project.eu/).
Fig. 1: CAD design of the robotic bath system installed in a pilot study shower room.
Fig. 2: An overview of the proposed integrated intelligent assistive robotic system.
However, body care (showering or bathing) is among the first daily life activities to incommode an elderly person's life [1], since it is a demanding procedure in terms of effort and body flexibility. On the other hand, soft robotic arm technologies have already presented new ways of thinking about robot operation [9]–[11], and can be considered safe for direct physical interaction with frail elderly people. Also, applications involving physical contact with a human (such as showering) are very demanding, since we have to deal with curved and deformable human body parts, and unexpected body-part motion may occur during the robot's operation. Therefore, safety and comfort issues during the washing process should be taken into account. In addition, controlling a hyper-redundant soft arm [11]–[13] in an environment with human motion not only can give important advantages in terms of safety but is also a challenging task, since several control aspects should be considered, such as position, motion and path planning [14], stiffness [15], shape [16], and force/impedance control [17].
Another important aspect of these assistive robotic systems is the incorporation of intuitive human-robot interaction strategies, which involve both proper and simple communication with and perception of the user, and the modular interconnection of the several subsystems constituting the robotic system. Additionally, human-robot trust is a major issue for people, especially in contexts of physical interaction. As these people experience this new environment, they have difficulty adapting to its operational requirements. Trust is difficult to establish if the robot is not dependable and predictable and its actions are not perceived as transparent [18]. So the challenge, for the system to work safely and intuitively, is to enhance two basic properties of the system: robustness and predictability.
In particular, we aim to establish the communication of elderly people with the robotic bathing system via spoken commands. Since speech is the most natural way of human communication, it is also the most comfortable and natural way for human-system communication, and generally the most intuitive way to ask for help. However, the bathroom scenario, as well as the use of the system by elderly people, raises some challenges. Firstly, the necessary distance between the speaker and the microphones imposes distant speech recognition. Secondly, the nature of the environment, i.e. high reverberation (bathroom walls) and noise (coming mainly from water), brings up the need for specific components like voice activity detection, denoising, etc., that should be adapted to the environment. Third, speech recognition for elderly people is a challenging problem, let alone distant speech recognition, due to their difficulty in speaking, tremor, and degradation of speech characteristics. Last, since the system is intended for elderly people, it should perform robustly and also confirm the user's choices in order to avoid undesired situations.
In this paper, an integrated intelligent assistive robotic system designed for bathing tasks is presented. The proposed system consists of two real-time perception systems, one for audio instructions and communication and one for visual capturing of the user, as depicted in Fig. 2. For this reason, we employ a speech recognition system with sub-modules to achieve smooth and robust human-system communication. The system can be adapted to the user's specific room/space, needs and preferences, since it is independent of the microphone positions and the user can change the set of interaction commands according to their needs. The components of this recognition system are a Voice Activity Detection module, a beamforming component for enhancement and denoising, a distant speech recognition engine, and a speech synthesis module providing audio feedback to the user. The commanded washing tasks are executed by a robotic manipulator, demonstrating the progress of each task. The smooth integration of all subsystems is accomplished by a modular and hierarchical decision architecture organized as a Behavior Tree.
II. SYSTEM DESCRIPTION
The robotic shower system, which is currently under development (Fig. 1), will support elderly people and people with mobility disabilities during showering activities, i.e. pouring water, soaping, body part scrubbing, etc. The degree of automation will vary according to the user's preferences and disability level. In Fig. 1, a CAD design of the configuration of the system's basic parts is presented. The robotic system enhances the showering abilities of the elderly, whereas the Kinect sensors are used for user perception and HRI applications.
The robotic arms are constructed with soft materials (rubber, silicone, etc.) and are actuated with the aid of tendons and pneumatic chambers, providing the required motion to the three sections of each robotic arm. This configuration makes the arms safer and more friendly to the user, since they generate little resistance to compressive forces [19], [20]. Moreover, the combination of these actuation techniques increases the dexterity of the soft arms and allows for adjustable stiffness in each section of the robot. The end-effector section, which will interact physically with the user, will exhibit low stiffness, achieving smoother contact, while the base sections supporting the robotic structure will exhibit higher stiffness values.
Visual information about the user is obtained from Kinect depth cameras. These cameras will be mounted on the wall of the shower room in a proper configuration in order to provide a full-body view of the user (Fig. 1). It is important to mention that information exclusively from depth measurements will be used, to protect the user's personal information. Accurate interpretation of the visual information is a prerequisite for human perception algorithms (e.g. accurate body part recognition and segmentation [21]). Furthermore, depth information will be used as feedback for the robot control algorithms, closing the loop and defining the operational space of the robotic devices.
Finally, human-system interaction algorithms will include distant speech recognition [22], employing microphone arrays in order to achieve robustness. This augmented cognition system will understand the user's interactions, providing the necessary assistance to the frail senior citizens.
III. HRI DECISION CONTROL SYSTEM
A. Behavior Tree Formulation
The Decision Control System (DCS) consists of distinct modules organized as a Behavior Tree (BT). The basic goal of such a system is to properly integrate and orchestrate the sub-modules of the system, in order to achieve both optimal performance and smooth Human-Robot Interaction. The BT has two types of nodes, the Control-Flow nodes and the Leaf nodes. The control-flow nodes are: first, the “Selector”, which executes its children until one returns SUCCESS and can be interpreted as a sequential OR; second, the “Sequence” node, which executes its children until they all return SUCCESS and can be interpreted as an AND. The Leaf nodes are the ones that exchange information with external components (i.e. outside the scope of the DCS) and are capable of changing the configuration of the system. The formulation of a BT is described in detail in [23]. Additional nodes are the “Loop” node, which repeats part of the tree, and the “Parallel” node, which allows for concurrency as in [24].
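To make these control-flow semantics concrete, the following is a minimal Python sketch of Selector, Sequence and Leaf nodes; the class names, the tick() interface and the SUCCESS/FAILURE values are illustrative assumptions rather than the actual DCS implementation, which follows the BT formulation of [23], [24].

```python
# Minimal sketch of the Selector/Sequence semantics described above.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Selector:
    """Ticks children in order; returns SUCCESS at the first child that
    succeeds (a sequential OR)."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    """Ticks children in order; returns SUCCESS only if all children
    succeed (an AND)."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Leaf:
    """Leaf node wrapping an external action or condition check."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return SUCCESS if self.fn() else FAILURE
```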
B. Robot Platform Decision Support Characteristics
Decision modules based on BTs have been shown to be fault-tolerant [24], because they are modular and capable of describing complex behavior as a hierarchical collection of elementary building blocks [25], [26]. These blocks allow easier testing, separation of concerns per module, reusability, and the ability to compose almost arbitrary behaviors [25]. Thus a system is built that can be easily audited; safety auditing is required for medical devices.
The BT consists of the following submodules. First is the auditory module, which handles the input from the speech recognition system and enables the communication of the user's commands to the system. Second is the text-to-speech feedback module, which communicates with the user, acknowledges the user's commands and can request confirmation. Third is the robotic arm module which, according to the execution stage of the task and the capabilities of the robot, initiates actions and handles changes in the user's requests dynamically. The robotic arm can wash the user's back with various patterns. For example, during scrubbing in a cyclic pattern, the user can verbally request at any point that the system change pattern or stop completely. In addition to the described capabilities, the user can also request an immediate stop of the whole process for safety reasons by issuing a “Halt” command. The user can also increase and decrease the robot's washing speed, and will be able to perform additional actions as the washing system's capabilities are expanded.
C. Design of the Automation System
The system consists of modules that are easily testable by themselves, separately from the rest of the system, and functionality is replicated throughout the tree thanks to the design's modularity. The pattern used is that the robot requests a task from the user through the speech recognition system, gives feedback through the speaker, requests confirmation, and proceeds to execute the task based on the behavior tree. If we need to adapt the system to new hardware, the current functionality will be retained as long as the new hardware conforms to the behavior tree's protocol.
IV. DISTANT SPEECH PROCESSING SYSTEM
Our setup involves the use of multiple sensors for audio capture. This is essential for the use cases we have employed, because we expect far-field speech. For this reason, we employ microphone arrays, aiming also to address other factors that deteriorate performance, such as noise, reverberation and overlap of speech with other acoustic events.
The distant speech processing system with its sub-components is depicted in Fig. 3. After audio capture, a Voice Activity Detection (VAD) module detects the segments that contain human voice and forwards this information to a beamforming component that exploits the known speaker's location (on the chair) to enhance and denoise the target speech. Subsequently, a distant speech recognition engine transforms speech to text in the form of a command to the system. A speech synthesis module is also used to feed the recognized command back to the user in order to confirm it or not; if it is confirmed, the correct system prompt is spoken out to the user, employing the same Text-To-Speech (TTS) engine. In the next sub-sections each module is described in more detail.
A. Voice Activity Detection
The VAD module constitutes the first step of the distant speech processing system. Given a fast and robust VAD module, the speech processing system can benefit in several ways: (a) better silence modeling (fewer false alarms), (b) operation of the Automatic Speech Recognition (ASR) module on windows of variable length, (c) fewer computations needed by the ASR module (operation only inside VAD segments).
The proposed VAD system implements a multichannel approach to determining the temporal boundaries of speech activity in the room. The sequence of audio events (speech or non-speech) inside a given time interval is estimated via the Viterbi algorithm on combined likelihood scores coming from single-channel event models for the entire observation sequence. These combined scores are estimated as averages of scores based on channel-specific event models (“sum of log-likelihoods”).
For the modeling of each event at each channel, the well-known single-state Hidden Markov Models (HMMs) with Gaussian Mixture Models (GMMs) for the probability distributions are employed. The features used are baseline Mel-Frequency Cepstral Coefficients (MFCCs) with ∆'s and ∆∆'s. The Viterbi algorithm allows the identification of the optimal sequence of audio events for a given time interval, while the incorporation of the multichannel score essentially leads to a robust decision that is informed by all the microphones in the room.
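As a rough illustration of this decoding step, the Python sketch below averages channel-specific event log-likelihoods and runs two-state Viterbi decoding over them; the array shapes, the transition matrix and the GMM scoring that would produce these log-likelihoods are assumptions, not the paper's implementation.

```python
import numpy as np

def viterbi_two_state(combined_loglik, log_trans, log_prior):
    """combined_loglik: (T, 2) per-frame scores for [non-speech, speech].
    log_trans: (2, 2) log transition matrix; log_prior: (2,) log initial probs."""
    T = combined_loglik.shape[0]
    delta = np.zeros((T, 2))
    psi = np.zeros((T, 2), dtype=int)
    delta[0] = log_prior + combined_loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # (prev, next) pairs
        psi[t] = np.argmax(scores, axis=0)              # best predecessor
        delta[t] = scores[psi[t], np.arange(2)] + combined_loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):                      # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path                                         # 0 = non-speech, 1 = speech

def combine_channels(channel_logliks):
    """'Sum of log-likelihoods' as an average over the channel-specific
    event models: list of (T, 2) arrays -> (T, 2) combined scores."""
    return np.mean(np.stack(channel_logliks), axis=0)
```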
B. Beamforming
Signals recorded from a microphone array can be exploited for speech enhancement through beamforming algorithms. The beamformed signals can then improve speech recognition performance thanks to noise and reverberation reduction. In our specific case, noise mainly comes from water pouring, and reverberation is high, a usual situation in bathroom environments.
Fig. 3: Distant speech processing system with its sub-components.
The offline system offers two beamforming choices: the simple but fast Delay-and-Sum Beamformer (DSB) and the more powerful Minimum Variance Distortionless Response (MVDR) beamformer as implemented in [27]. The online version of the system uses only the DSB, because it is faster and thus more suitable for real-time systems than the MVDR.
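For reference, a delay-and-sum beamformer can be sketched in a few lines. The snippet below assumes free-field propagation, known microphone and speaker positions, and integer-sample delays for simplicity; a practical implementation would use fractional-delay filtering.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def dsb(signals, mic_positions, src_position, fs):
    """Delay-and-sum: align each channel to the known source position and
    average. signals: (M, N) array; positions in meters; fs in Hz."""
    dists = np.linalg.norm(mic_positions - src_position, axis=1)   # (M,)
    delays = np.round((dists - dists.min()) / SPEED_OF_SOUND * fs).astype(int)
    N = signals.shape[1]
    out = np.zeros(N)
    for sig, d in zip(signals, delays):
        aligned = np.zeros(N)
        aligned[:N - d] = sig[d:]          # advance channel by its extra delay
        out += aligned
    return out / len(signals)
```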
C. Distant speech recognition
Several factors mentioned before, such as noise, reverberation and the distance between the system/robot and the speaker [28], render speech recognition a challenging task. Thus, we employ Distant Speech Recognition (DSR) [29], [30] with microphone arrays.
The DSR module is always-listening, namely it is able to detect and recognize the utterances spoken by the user at any time, among other speech and non-speech events. It is grammar-based, enabling the speaker to communicate with the system via a set of utterances adopted for the context of each specific use case. For the particular use case that will be described later, we have employed the English language. In order to improve recognition in noisy and reverberant environments, we use DSB beamforming on the available microphone channels.
Regarding acoustic models, we initially trained 3-state, cross-word triphones, with 8 Gaussians per state, on standard MFCC-plus-derivatives features on the Wall Street Journal (WSJ) database [31], which is a close-talk database in English. However, mismatch between training and testing conditions/environments deteriorates DSR performance. Maximum Likelihood Linear Regression (MLLR) adaptation has therefore been employed to transform the means of the Gaussians in the states of the models, using data recorded in this particular setup.
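As background (this is the standard MLLR formulation, stated here for context rather than quoted from the paper), mean adaptation re-estimates each Gaussian mean $\mu_m$ through a shared affine transform whose parameters maximize the likelihood of the adaptation data:

$$\hat{\mu}_m = A\,\mu_m + b = W\,\xi_m, \qquad \xi_m = [1,\ \mu_m^{\top}]^{\top}, \quad W = [\,b \;\; A\,],$$

so only the regression matrix $W$ needs to be estimated from the recordings made in this setup, while variances and mixture weights are left unchanged.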
D. Speech synthesis (TTS)
A speech synthesis module is also part of the system and has two particular uses. First, after the DSR module produces a result, the TTS module gives feedback to the user, in the form of synthesized speech, about the recognition result, asking for confirmation. Second, if confirmation is given by the user (recognized by the DSR module), the recognition result is published to the decision support system, which in turn selects the appropriate system prompt and returns it to the speech synthesis node, in order to be played back to the user. This way we have a two-step verification process that avoids mis-recognized commands as much as possible. The employed speech synthesis module uses the widely available Festival Speech Synthesis System [32].
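Schematically, the confirmation flow can be expressed as the following Python pseudocode; the function names (recognize_command, synthesize, publish_to_dcs) are placeholders for the DSR, TTS and decision-support interfaces, not actual APIs of the system.

```python
def handle_user_command(recognize_command, synthesize, publish_to_dcs):
    command = recognize_command()                  # e.g. "Roberta wash my back"
    synthesize(f"Did you say: {command}?")         # step 1: feed back the result
    confirmation = recognize_command()             # step 2: wait for yes/no
    if confirmation == "Roberta yes":
        prompt = publish_to_dcs(command)           # DCS selects the system prompt
        synthesize(prompt)                         # play the prompt to the user
    else:
        synthesize("Command discarded. Please repeat.")
```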
V. USER ADAPTIVE ROBOT MOTION PLANNING
We consider the on-line end-effector path adaptation problem for a robotic manipulator performing washing actions over the user's body part (e.g. the back region) in a workspace that includes a depth camera. The basis of this task lies in the calculation of the appropriate reference end-effector pose, in order for the robotic manipulator to be able to execute predefined washing tasks (such as pouring water on the back region of the user) in a way compliant with the motion and the curvature of each body part.
In addition, this approach takes into consideration the adaptability of the system to different users. There is great variability in the size of each body part among different users, and each user has different needs during the bathing sequence. Therefore, in order to compensate for different operational conditions, the planning of the end-effector path begins on a normalized (in terms of spatial dimensions) 2D “Canonical” space.
Properly designed bathing paths can be followed and fitted on the surface of the user's body part with the use of our motion planning algorithm, satisfying both the time and the spatial constraints of the motion. The resulting point of the motion path at each time segment is transformed from the fixed 2D “Canonical” space to the actual 3D operational space of the robot (Task space). In particular, we use two bijective transformations for this fitting procedure, Fig. 4(b),(c). The first bijection is an affine transformation (anisotropic scaling and rotation) from the “Canonical” space to the Image space, whereas the second one is the camera projection transformation from the Image space to the 3D operational space, which is included in the camera's field of view (Task space). These bijections ensure that the path will be followed within the body-part limits and will be adaptable to body motions and deformations. Finally, this motion planning algorithm takes into account the emergency situations that might arise in an integrated system and plans the motion of the robot from the operational position to a safe position (away from the user), incorporating obstacle-avoidance techniques.
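The two bijections can be sketched as follows, under a standard pinhole-camera assumption; the sinusoidal path, the rectangle parameters and the intrinsics fx, fy, cx, cy are illustrative values, not the system's actual calibration.

```python
import numpy as np

def canonical_path(n=200):
    """Sinusoidal washing path in the normalized 2D 'Canonical' space [0,1]^2."""
    s = np.linspace(0.0, 1.0, n)
    return np.stack([s, 0.5 + 0.4 * np.sin(4 * np.pi * s)], axis=1)

def canonical_to_image(path, origin, size, angle):
    """First bijection: anisotropic scaling to the segmented body-part
    rectangle (width/height in pixels) plus rotation by `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return origin + (path * size) @ R.T

def image_to_task(pixels, depth, fx, fy, cx, cy):
    """Second bijection: pixels plus per-pixel depth back-projected to the
    3D Task space via the pinhole model."""
    u, v = pixels[:, 0], pixels[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```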
Fig. 4: Three different spaces described in the proposed methodology. (a) The 2D space normalized in the x,y dimensions, denoted as “Canonical” space. The sinusoidal path from the Canonical space is transformed to the rectangular area on the Image space using scaling and rotation. (b) The 2D Image space is the actual image (of size 512 x 424 pixels) obtained from the Kinect sensor. The yellow rectangular area marks the user's back region, as a result of a segmentation algorithm. The transformed path is fitted on the surface of a female subject's back region using the camera projection transformation. (c) The 3D Task space is the operational space of the robot. The yellow box represents a Cartesian filter, including the points on which the robot will operate.
Fig. 5: Point Cloud representation of the user's back region during an experimental execution. (a) Procedure initialization, when the user asks the system to wash their back with a cyclic motion pattern. (b) Cyclic motion execution until another command arrives. (c) The robot has changed its motion primitive in order to follow the user's wish to change the motion pattern to the sinusoidal one.
VI. EXPERIMENTS
A. Setup Description
In order to test and analyze the performance of the integrated system, an experimental setup is used that includes a Kinect-v2 camera providing depth data for the back region of a subject, as shown in Fig. 4(c), with accuracy analyzed in [35]. The segmentation of the subjects' back region is implemented, for the purposes of this experiment, by simply applying a Cartesian filter to the Point-Cloud data. The setup also includes a 5-DOF Katana arm by Neuronics for a basic demonstration of a bathing scenario. In this study, healthy subjects took part in the experiments for validation purposes. The coordination and inter-connection of the subsystems constituting the robotic bathing system is implemented in the ROS environment.
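The Cartesian filter mentioned above amounts to keeping the points of the cloud that fall inside an axis-aligned box; a minimal Python sketch, with illustrative box bounds, follows.

```python
import numpy as np

def cartesian_filter(points, lower, upper):
    """Keep the points of an (N, 3) cloud inside an axis-aligned box."""
    mask = np.all((points >= lower) & (points <= upper), axis=1)
    return points[mask]

# Example with assumed bounds in the camera frame (meters):
# cloud_back = cartesian_filter(cloud,
#                               lower=np.array([-0.3, -0.4, 0.8]),
#                               upper=np.array([0.3, 0.4, 1.5]))
```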
For this specific scenario, the speech processing system setup involves a MEMS microphone array [36], which is portable and has been found to perform equally well with other microphone arrays, placed opposite the user. Regarding the distant speech recognition task, a grammar with 11 different commands has been employed. All these commands begin with the word “Roberta”, the keyword that activates the system. This is justified by the system being always-listening: without a keyword, utterances addressed to someone else could be confused with commands to the system. The set of commands includes action prompts, i.e. “Roberta wash my back”, “Roberta scrub my back”; start/stop prompts, i.e. “Roberta initiate”, “Roberta stop”, “Roberta halt”; type-of-motion commands, i.e. “Roberta circular”, “Roberta sinus”; velocity commands, i.e. “Roberta faster”, “Roberta slower”; and confirmation commands, i.e. “Roberta yes”, “Roberta no”.
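Enumerated as a simple Python structure (the actual recognizer is grammar-based; this listing only spells out the utterances reported above):

```python
KEYWORD = "Roberta"

COMMANDS = {
    "action":       ["wash my back", "scrub my back"],
    "start_stop":   ["initiate", "stop", "halt"],
    "motion":       ["circular", "sinus"],
    "velocity":     ["faster", "slower"],
    "confirmation": ["yes", "no"],
}

# All commands begin with the activation keyword.
UTTERANCES = [f"{KEYWORD} {c}" for group in COMMANDS.values() for c in group]
assert len(UTTERANCES) == 11
```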
B. Testing Strategy
The experiments conducted include a scenario in which the user can decide:
• the washing action, from a repertoire of tasks, i.e. wash or scrub, and the region of action, i.e. back or legs;
• the confirmation or rejection of an action based on dialogue, at any step where this is necessary;
• the type of motion that the user may prefer, i.e. the cyclic or the sinusoidal motion primitive;
• to pause the action, to execute it faster or slower, to change the executed primitive, or to emergency-stop the whole system.
Based on these options, the subject is free to choose commands in a non-linear order of execution. Under the subject's instruction, the robot is able to follow a simple sinusoidal or a cyclic path (representing the washing and the scrubbing action, respectively), while simultaneously keeping a constant distance and a perpendicular relative orientation to the surface of the user's respective body region. These paths were chosen for clarity in the presentation of the experimental results.
C. Results and Discussion
Our goal is to experimentally validate that the integrated system can correctly handle the user's information, preferences and instructions, while at the same time being capable of communicating and interacting successfully with the robot that performs the bathing task.
In Fig. 5, the Point Cloud data of the user's back region during a successfully executed experiment is presented. After the initialization procedure, the user asks the system to wash their back with a cyclic motion pattern (i.e. a scrubbing or washing action). The robot took an initial position at a constant distance from the back, as depicted in Fig. 5(a), and the cyclic motion was then executed until another command came, Fig. 5(b). The next instruction was to change the motion pattern to the sinusoidal one, and the robot changed its motion primitive in order to follow the user's wish, as shown in Fig. 5(c). The overall performance of the system was satisfactory, since a full scenario was successfully executed by different users with different preferences.
VII. CONCLUSIONS AND FUTURE WORK
This paper presents an integrated intelligent assistive robotic system designed for bathing tasks. The integration of communication and verbal interaction between the user and the robot during the bathing tasks is a key issue for such a challenging assistive robotic application. We tackle this challenge by developing the described augmented cognition system, which interprets the user's preferences and intentions, providing the necessary assistance to frail senior citizens. The proposed system consists of two real-time perception systems, one for audio instructions and communication and one for user capturing. A decision control system integrates all the subsystems in a non-linear way with a modular and hierarchical decision architecture organized as a Behavior Tree. All the instructed washing tasks are executed by a robotic manipulator, demonstrating the progress of each task. The overall performance of the system was tested, and a full scenario was successfully executed by different users with different preferences.
For further research, we aim to ameliorate this methodology in order to meet the special requirements of soft robotic manipulation. Furthermore, the initial predefined path will be learned by demonstration from professional carers, with the aid of a Dynamic Movement Primitives (DMP) approach [33], [34], in order to make the bathing sequence more human-like. The speech processing system can be enhanced via training with data better matched to the use-case scenario. Additionally, more microphone arrays distributed in space can be used in order to better enhance speech, remove noise coming from water, and mitigate reverberation, which is very high in such environments. The system is designed to work locally, but it can be adapted to work as a teleoperation platform. Additional functionality can be incorporated, and adaptability to users with more specific impairments can be introduced. Back injuries, wounds or other skin abnormalities can be detected by the camera, the carer or the users themselves, working towards a decision system with the ability to produce personalized plans. Tests with elderly patients are planned, and the feedback can be incorporated to enhance the system's usability.
ACKNOWLEDGMENT
The authors would like to acknowledge Rafa Lopez, Robotnik Automation, Valencia, Spain, for the CAD representation of the robotic bath system installed in a shower room. The authors also wish to thank Petros Koutras, Isidoros Rodomagoulakis, Nancy Zlatintsi and Georgia Chalvatzaki, members of the NTUA IRAL, for helpful discussions and collaborations.
REFERENCES
[1] D. D. Dunlop, S. L. Hughes, and L. M. Manheim, “Disability in
activities of daily living: patterns of change and a hierarchy of
disability,” American Journal of Public Health, vol. 87, pp. 378–383,
1997.
[2] S. Katz, A. Ford, R. Moskowitz, B. Jackson, and M. Jaffe, “Studies of illness in the aged: The index of ADL: A standardized measure of biological and psychosocial function,” JAMA, vol. 185, no. 12, pp. 914–919, 1963.
[3] T. Hirose, S. Fujioka, O. Mizuno, and T. Nakamura, “Development
of hair-washing robot equipped with scrubbing fingers,” in Robotics
and Automation (ICRA), 2012 IEEE International Conference on, May
2012, pp. 1970–1975.
[4] Y. Tsumaki, T. Kon, A. Suginuma, K. Imada, A. Sekiguchi, D. Nenchev, H. Nakano, and K. Hanada, “Development of a skin-care robot,” in Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, May 2008, pp. 2963–2968.
[5] M. Topping, “An overview of the development of Handy 1, a rehabilitation robot to assist the severely disabled,” Artificial Life and Robotics, vol. 4, no. 4, pp. 188–192.
[6] C. Balaguer, A. Gimenez, A. Huete, A. Sabatini, M. Topping, and
G. Bolmsjo, “The mats robot: service climbing robot for personal
assistance,” Robotics Automation Magazine, IEEE, vol. 13, no. 1, pp.
51–58, March 2006.
[7] A. J. Huete, J. G. Victores, S. Martinez, A. Gimenez, and C. Balaguer,
“Personal autonomy rehabilitation in home environments by a portable
assistive robot,” IEEE Transactions on Systems, Man, and Cybernetics,
Part C (Applications and Reviews), vol. 42, no. 4, pp. 561–570, July
2012.
[8] Z. Bien, M.-J. Chung, P.-H. Chang, D.-S. Kwon, D.-J. Kim, J.-S. Han, J.-H. Kim, D.-H. Kim, H.-S. Park, S.-H. Kang, K. Lee, and S.-C. Lim, “Integration of a rehabilitation robotic system (KARES II) with human-friendly man-machine interaction units,” Autonomous Robots, vol. 16, no. 2, pp. 165–191, 2004. [Online]. Available: http://dx.doi.org/10.1023/B:AURO.0000016864.12513.77
[9] C. Laschi, M. Cianchetti, B. Mazzolai, L. Margheri, M. Follador, and
P. Dario, “Soft robot arm inspired by the octopus,” Advanced Robotics,
vol. 26, no. 7, pp. 709–727, 2012.
[10] C. Laschi, B. Mazzolai, V. Mattoli, M. Cianchetti, and P. Dario, “Design of a biomimetic robotic octopus arm,” Bioinspiration & Biomimetics, vol. 4, no. 1, p. 015006, 2009.
[11] I. D. Walker, “Continuous backbone continuum robot manipulators,”
ISRN Robotics, vol. 2013, 2013.
[12] D. B. Camarillo, C. R. Carlson, and J. K. Salisbury, “Task-space
control of continuum manipulators with coupled tendon drive,” in
Experimental Robotics. Springer, 2009, pp. 271–280.
[13] F. Renda and C. Laschi, “A general mechanical model for tendon-driven continuum manipulators,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 3813–3818.
[14] J. Xiao and R. Vatcha, “Real-time adaptive motion planning for a
continuum manipulator,” in Intelligent Robots and Systems (IROS),
2010 IEEE/RSJ International Conference on. IEEE, 2010, pp. 5919–
5926.
[15] M. Mahvash and P. E. Dupont, “Stiffness control of a continuum
manipulator in contact with a soft environment,” in Intelligent Robots
and Systems (IROS), 2010 IEEE/RSJ International Conference on.
IEEE, 2010, pp. 863–870.
[16] B. Bardou, P. Zanne, F. Nageotte, and M. De Mathelin, “Control of a
multiple sections flexible endoscopic system,” in Intelligent Robots
and Systems (IROS), 2010 IEEE/RSJ International Conference on.
IEEE, 2010, pp. 2345–2350.
[17] A. Kapadia and I. D. Walker, “Task-space control of extensible
continuum manipulators,” in Intelligent Robots and Systems (IROS),
2011 IEEE/RSJ International Conference on. IEEE, 2011, pp. 1087–
1092.
[18] P. A. Hancock, D. R. Billings, K. E. Schaefer, J. Y. C. Chen, E. J. de Visser, and R. Parasuraman, “A meta-analysis of factors affecting trust in human-robot interaction,” Human Factors, vol. 53, no. 5, pp. 517–527, 2011.
[19] T. G. Thuruthel, E. Falotico, M. Cianchetti, F. Renda, and C. Laschi,
“Learning global inverse statics solution for a redundant soft robot,”
in Proceedings of the 13th International Conference on Informatics in
Control, Automation and Robotics, 2016, pp. 303–310.
[20] M. Manti, A. Pratesi, E. Falotico, M. Cianchetti, and C. Laschi, “Soft assistive robot for personal care of elderly people,” in 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), June 2016, pp. 833–838.
[21] A. Vedaldi, S. Mahendran, S. Tsogkas, S. Maji, R. Girshick, J. Kannala, E. Rahtu, I. Kokkinos, M. Blaschko, D. Weiss, et al., “Understanding objects in detail with fine-grained attributes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3622–3629.
[22] I. Rodomagoulakis, N. Kardaris, V. Pitsikalis, E. Mavroudi, A. Katsamanis, A. Tsiami, and P. Maragos, “Multimodal human action recognition in assistive human-robot interaction,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2016, pp. 2702–2706.
[23] A. Marzinotto, M. Colledanchise, C. Smith, and P. Ögren, “Towards a unified behavior trees framework for robot control,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 5420–5427.
[24] M. Colledanchise, A. Marzinotto, D. V. Dimarogonas, and P. Ögren, “The advantages of using behavior trees in multi-robot systems,” in Proceedings of ISR 2016: 47th International Symposium on Robotics, 2016, pp. 1–8.
[25] A. Klöckner, “Interfacing behavior trees with the world using description logic,” American Institute of Aeronautics and Astronautics.
[26] M. Colledanchise and P. Ögren, “How behavior trees modularize hybrid control systems and generalize sequential behavior compositions, the subsumption architecture, and decision trees,” vol. PP, no. 99, pp. 1–18.
[27] S. Lefkimmiatis and P. Maragos, “A generalized estimation approach for linear and nonlinear microphone array post-filters,” Speech Communication, vol. 49, no. 7, pp. 657–666, 2007.
[28] C. Ishi, S. Matsuda, T. Kanda, T. Jitsuhiro, H. Ishiguro, S. Nakamura,
and N. Hagita, “A robust speech recognition system for communication
robots in noisy environments,” IEEE Trans. Robotics, vol. 24, no. 3,
pp. 759–763, 2008.
[29] M. Wölfel and J. McDonough, Distant Speech Recognition. John Wiley & Sons, 2009.
[30] A. Katsamanis, I. Rodomagoulakis, G. Potamianos, P. Maragos, and
A. Tsiami, “Robust far-field spoken command recognition for home
automation combining adaptation and multichannel processing,” in
Proc. ICASSP, 2014.
[31] D. B. Paul and J. M. Baker, “The design for the Wall Street Journal-based CSR corpus,” in Proceedings of the Workshop on Speech and Natural Language. Association for Computational Linguistics, 1992, pp. 357–362.
[32] A. Black, P. Taylor, R. Caley, R. Clark, K. Richmond, S. King, V. Strom, and H. Zen, “The Festival speech synthesis system version 1.4.2,” Software, Jun 2001. [Online]. Available: http://www.cstr.ed.ac.uk/projects/festival/
[33] Y. Zhou, M. Do, and T. Asfour, “Learning and force adaptation for interactive actions,” in Humanoid Robots (Humanoids), 2016 IEEE-RAS 16th International Conference on. IEEE, 2016, pp. 1129–1134.
[34] ——, “Coordinate change dynamic movement primitives: a leader-follower approach,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016, pp. 5481–5488.
[35] L. Yang, L. Zhang, H. Dong, A. Alelaiwi, and A. El Saddik, “Evaluating and improving the depth accuracy of Kinect for Windows v2,” IEEE Sensors Journal, vol. 15, no. 8, pp. 4275–4285, 2015.
[36] Z. Skordilis, A. Tsiami, P. Maragos, G. Potamianos, L. Spelgatti, and R. Sannino, “Multichannel speech enhancement using MEMS microphones,” in Proc. ICASSP, 2015.