Problem Statement
Recent advancement in computers and robotics
makes it possible for people with spinal cord
injuries (SCIs) and other upper limb mobility
impairments to perform daily living and other
tasks more independently through the assistance
of a robotic arm. However, operation of robotic
arms has always been challenging, particularly for
individuals with upper extremity mobility
impairments. Some kind of human-computer
interface (HCI) must be employed to initiate and
orchestrate the task. Multiple methods have been
suggested to manipulate a robotic arm with
sufficient dexterity to accomplish most basic
tasks, such as picking up items, drinking from a
glass, or self-feeding. Very few human-computer
interfaces are designed specifically to assist
individuals with upper extremity mobility
impairments (Collinger et al., 2009).
A multimodal vision-based assistive robotic
system dedicated to assisting individuals with
quadriplegia due to SCI is presented (Figure 2).
The system could not only assist people with
quadriplegia in activities of daily living, such as
eating, drinking, and dressing, but also provide
students and scientists with quadriplegia an
alternative way to perform "hands-on" laboratory
procedures more independently.
Multimodal Vision-Based Approach for Robotic Arm Control
for Individuals with Upper Level Spinal Cord Injuries
Hairong Jiang1, Juan P. Wachs1, Martin Pendergast2, Bradley S. Duerstock1,3
School of Industrial Engineering1, School of Electrical and Computer Engineering2, Weldon School of Biomedical Engineering3
Purdue University, West Lafayette, Indiana
Methodology
ACKNOWLEDGMENTS
This work was established through the Institute for
Accessible Science by the NIH Director's Pathfinder Award
to Promote Diversity in the Scientific Workforce, funded by
the American Recovery and Reinvestment Act and
administered by the National Institute of General Medical
Sciences (grant no. 1DP4GM096842-01).
Future Directions
Perform feasibility and preliminary testing
with able-bodied subjects and subjects
with quadriplegia.
Compare the multimodal system (keyboard and
3D joystick combined), unimodal systems
(default joystick, keyboard, or 3D joystick
alone), and the control (OEM) method.
Conduct simple task performance tests
including average completion time and
false manipulations during participant
study.
Conduct lab procedure performance tests
with participants with upper extremity
impairments in ABIL.
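The planned task-performance metrics could be tallied with a short script. A minimal sketch, assuming per-trial records of completion time and false-manipulation count for each modality (the trial values shown are illustrative placeholders, not study data):

```python
# Minimal sketch of summarizing per-modality trial results into the two
# planned metrics: average completion time and false-manipulation count.
# All trial values here are illustrative placeholders, not study data.

def summarize_trials(trials):
    """trials: list of (completion_time_s, false_manipulations) tuples."""
    n = len(trials)
    avg_time = sum(t for t, _ in trials) / n
    total_false = sum(f for _, f in trials)
    return avg_time, total_false

# Example: three hypothetical trials for one input modality.
avg_time, total_false = summarize_trials([(42.0, 1), (38.0, 0), (40.0, 2)])
# avg_time == 40.0, total_false == 3
```

The same summary would be computed per modality so the multimodal, unimodal, and OEM conditions can be compared directly.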
Contact Info
Brad Duerstock, IAS Director
Susan M. Mendrysa, IAS Assistant Director
E-mail: ias@purdue.edu
URL: iashub.org
2. A computer vision system using a
Kinect® camera was adopted to obtain
feedback from the user and supervise
the performance of the actuator (Jiang
et al., 2012). The Kinect sensor, which
captures both color and depth
information, was calibrated and
integrated into the system to provide
vision-based information for robotic
control. The Kinect sensor served two
main objectives: assistance and
supervision. The user's body was
tracked to assist individuals with
quadriplegia in activities of daily living;
for instance, the face was tracked to
enable automatic drinking and shaving
services. The end effector of the robotic
system was supervised by the Kinect
sensor to provide additional information
for object grasping.
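Once the depth camera is calibrated, a tracked feature (such as the face or the end effector) can be located in 3D with the standard pinhole back-projection. A minimal sketch, using nominal Kinect-like intrinsic parameters rather than the system's actual calibration:

```python
# Minimal sketch: back-projecting a calibrated depth pixel to a 3D point in
# the camera frame using the pinhole model. The intrinsic parameters below
# are nominal Kinect-like values, not the system's actual calibration.

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to a camera-frame (X, Y, Z) point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a tracked face at pixel (400, 300), 1.5 m from the sensor.
FX = FY = 525.0          # focal lengths in pixels (nominal)
CX, CY = 319.5, 239.5    # principal point for a 640x480 image (nominal)
x, y, z = pixel_to_camera_xyz(400, 300, 1.5, FX, FY, CX, CY)
```

A 3D position recovered this way is what allows the vision system to both guide the arm toward an object and verify the end effector's pose during grasping.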
1. Two modalities were adopted to
control the system: a Bluetooth
keyboard and a three-dimensional (3D)
joystick. This gave the user more
flexibility and, in turn, made the human-
computer interface more adaptable. The
JACO™ Robot Manipulator from Kinova
Technology was adopted as the actuator
for the multimodal vision-based assistive
robotic system. All robotic control
functions were mapped to a compact
Bluetooth keyboard to achieve wireless
control. A 3D joystick designed for haptic
video game play was remodeled and
programmed as a 3D controller for the
actuator. Two control modes were
employed (Figure 1).
Figure 1. 3D Joystick Control Diagram
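The keyboard modality described above can be pictured as a key-to-command table. A minimal sketch in which the key bindings, command names, and velocity values are all illustrative assumptions, not the actual mapping or the JACO API:

```python
# Minimal sketch of mapping compact-keyboard keys to end-effector commands.
# The bindings, command names, and velocity magnitudes are illustrative
# assumptions, not the system's actual mapping or the JACO API.

KEY_BINDINGS = {
    "w": ("translate", (0.0, +0.05, 0.0)),  # forward at 0.05 m/s
    "s": ("translate", (0.0, -0.05, 0.0)),  # backward
    "a": ("translate", (-0.05, 0.0, 0.0)),  # left
    "d": ("translate", (+0.05, 0.0, 0.0)),  # right
    "q": ("gripper", "open"),
    "e": ("gripper", "close"),
}

def command_for_key(key):
    """Return the (mode, argument) command bound to a key, or None."""
    return KEY_BINDINGS.get(key)
```

A table like this is what lets every robotic control function reach the arm over a single wireless keyboard, while the 3D joystick drives the same command set through its two control modes.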
Figure 2. System Architecture for Multimodal Vision-based
Assistive Robotic System
References
[1] Collinger JL, Wang W, Degenhart AD, Vinjamuri R, Sudre
GP, Weber DJ, Tyler-Kabara EC (2009) Towards a Direct Brain
Interface for Controlling Assistive Devices. In: The 1st
International Symposium on Quality of Life Technologies.
Pittsburgh, PA.
[2] Jiang H, Wachs JP, Duerstock BS (2012) Facilitated
Gesture Recognition Based Interfaces for People with Upper
Extremity Physical Impairments. In: Progress in Pattern
Recognition, Image Analysis, Computer Vision, and
Applications. Lecture Notes in Computer Science, vol 7441,
pp 228-235.