CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses

Wolfgang Fink*, Mark A. Tarbell

Visual and Autonomous Exploration Systems Research Laboratory, Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA

* Corresponding author at: Visual and Autonomous Exploration Systems Research Laboratory, California Institute of Technology, 1200 East California Blvd, Mail Code 103-33, Pasadena, CA 91125, USA. Tel.: +1 626 395 4587. E-mail: wfink@autonomy.caltech.edu (W. Fink). URL: http://autonomy.caltech.edu.

Computer Methods and Programs in Biomedicine 96 (2009) 226–233. Journal homepage: www.intl.elsevierhealth.com/journals/cmpb. doi:10.1016/j.cmpb.2009.06.009. © 2009 Elsevier Ireland Ltd. All rights reserved.

Article history: Received 22 February 2009; received in revised form 10 June 2009; accepted 26 June 2009.

Keywords: Artificial vision prostheses; Retinal implants; Image processing; Autonomous navigation; Robotics; Tele-commanding; Self-commanding; Cloud computing; Worldwide accessibility

Abstract

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remote-controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing to additional sensors. Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

1. Introduction

While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a realistic functional approximation of a blind subject. Commonly, the process of "experiencing" the visual perception of a blind person with a vision implant is emulated by having normal subjects with a healthy retina look at a low-resolution (pixelated) image on a computer monitor or head-mounted display. This is a rather inadequate emulation, as a healthy retina with 10^9 photoreceptors can glean more information from a pixelated
image (e.g., edges, edge transitions, grayscale information, and spatial frequencies) than an impaired retina in which the photoreceptor layer is dysfunctional due to diseases such as retinitis pigmentosa and age-related macular degeneration.

A more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. Such a mobile robotic platform, described in the following, not only represents a constantly available testbed for real-time image processing systems, but even more so provides a subject-independent means for testing and validating the efficiency and utility of real-time image processing and autonomous navigation algorithms for enhanced visual perception and independent mobility for the blind and visually impaired using artificial vision prostheses.

Current state-of-the-art and near-future artificial vision implants, such as epi-retinal and sub-retinal implants [1–9] (Fig. 1), provide only tens of stimulating electrodes, thereby allowing only for limited visual perception (pixelation). Usually these implants are driven by extraocular [8,9] or intraocular [10] high-resolution digital cameras that ultimately result in orders-of-magnitude smaller numbers of pixels being relayed to the respective implant in use. Hence, real-time image processing and enhancement will afford a critical opportunity to improve on the limited vision afforded by these implants for the benefit of blind subjects.

Since tens of pixels/electrodes allow only for a very crude approximation of the roughly 10,000 times higher optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, become very important, as opposed to picture details such as object texture. Image processing systems (Fig. 2), such as the Artificial Vision Simulator (AVS) [11,12], perform real-time (i.e., 30 fps) image processing and enhancement of camera image streams before they enter the visual prosthesis (Fig. 3). Moreover, such image processing systems must provide the flexibility of repeated application of image manipulation and processing modules in a user-defined order. Thus, current and future artificial vision implant carriers can customize the individual visual perception generated by their visual prostheses by actively manipulating parameters of individual image processing filters or altering the sequence of these filters.

2. Hardware description

For the purpose of creating a subject-independent mobile testbed for image processing and autonomous navigation algorithms for artificial vision prostheses, we have created CYCLOPS, an All-Wheel Drive (AWD) remote-controllable robotic platform testbed with wireless Internet connectivity and a fully articulated digital camera with wireless video link (Fig. 4) [13]. For the basic robotic hardware we utilized a WiFiBoT [14]. The WiFiBoT has a 4G Access Cube, which serves as the central onboard processor, controlling four electric motors. We custom-built CYCLOPS by equipping it with:

• Bi-level metal chassis and sensor trays.
• General-purpose, high-performance mini Unix workstation (i.e., Mac mini).
• Rechargeable battery for the Unix workstation.
• Two rechargeable batteries for the wheel motors.
• Gimbaled IP camera that is user-controllable.
• IEEE 1394 navigation camera with wide-angle field of view.
• Two forward-looking IR proximity sensors.
• Real-time voice synthesizer interface.
• Wireless Internet capability.

Fig. 1 – One instantiation of an artificial vision prosthesis: an intraocular retinal prosthesis using an external microelectronic system to capture and process image data and transmit the information to an implanted microelectronic system. The implanted system decodes the data and stimulates the retina via an electrode array with a pattern of electrical impulses to generate a visual perception.
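Since the implants provide only tens of electrodes, the pixelation step that collapses a full camera frame into a coarse "phosphene" grid is central to everything that follows. The short Python sketch below illustrates such a downsampling stage by block-averaging; the function name, grid sizes, and use of plain NumPy are our illustrative assumptions, not the authors' AVS code.

```python
# Minimal sketch of prosthesis-style pixelation: block-average a camera
# frame down to a tens-of-pixels grid (cf. the 16 x 16 case in Fig. 7).
import numpy as np

def pixelate(frame: np.ndarray, grid: int = 16) -> np.ndarray:
    """Reduce a grayscale frame (H x W) to a grid x grid array of 'phosphenes'."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    frame = frame[: bh * grid, : bw * grid]      # crop so blocks divide evenly
    blocks = frame.reshape(grid, bh, grid, bw)   # split into grid x grid blocks
    return blocks.mean(axis=(1, 3))              # average each block

camera_frame = np.random.randint(0, 256, (480, 640)).astype(float)
low_res = pixelate(camera_frame, grid=16)        # shape (16, 16)
```

Block-averaging of this kind preserves coarse luminance transitions (edges between large regions) while discarding fine texture, which is precisely the trade-off discussed above.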
Fig. 2 – Schematic diagram of a real-time image processing system for artificial vision prostheses (e.g., AVS [11,12]) that are driven by extraocular [8,9] or intraocular [10] camera systems.
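The pipeline of Fig. 2, fed from the module palette of Fig. 3, amounts to a sequence of frame-to-frame filters that the carrier can reorder and reapply at will. A minimal sketch of that plumbing follows; the two example modules and their names are our illustrative stand-ins, since the actual AVS modules [11,12] are not given in code form in the paper.

```python
# Minimal sketch of a user-defined image processing sequence: each module
# maps a frame to a frame, so modules can be repeated and reordered freely.
import numpy as np

def stretch_contrast(f):
    """Contrast/brightness enhancement by linear stretching to [0, 255]."""
    lo, hi = f.min(), f.max()
    return (f - lo) / (hi - lo + 1e-9) * 255.0

def edge_detect(f):
    """Crude gradient-magnitude edge detection."""
    gy, gx = np.gradient(f)
    return np.hypot(gx, gy)

def run_pipeline(frame, modules):
    """Apply the modules in the carrier-chosen order."""
    for module in modules:
        frame = module(frame)
    return frame

# e.g., enhance contrast first, then extract edges, or any other ordering:
out = run_pipeline(np.random.rand(64, 64) * 255, [stretch_contrast, edge_detect])
```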
Fig. 3 – Typical palette of image processing modules (e.g., employed by AVS [11,12]) that can be applied in real time to a video camera stream driving an artificial vision prosthesis.

3. Testbed implementation

CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and battery life (i.e., 4.5 h for the onboard mini Unix workstation and 2 h for the electric motors), CYCLOPS can perform complex and numerically intensive calculations, such as:

• Testing and validation of image processing systems, such as AVS [11,12], to further the experience of visual prosthesis users.
• Testing of navigation algorithms and strategies to improve the degree of unaided mobility.
• Testing of additional sensors (e.g., infrared) to further the utility of visual prostheses.

3.1. Cloud computing

To enable testing of real-time image processing modules, individually or in sequence, and to enable the transmission of the resulting remote-control navigation sequences, the mobile platform must establish a connection between itself and the computer hosting the control software. A standard, direct one-to-one connection could be established, but this is fragile, as either system may not be reachable or known at the moment the connection attempt is made. Instead, a "cloud computing" concept is utilized [15,16], wherein the mobile platform connects to one or more known "Com Servers" (Fig. 5). The Com Servers are known, established Internet entities to which both the mobile platform and the controlling computer system connect, acting as a go-between and buffer. In this way, neither end need know the actual IP address of the other, yet an Internet connection is still established between them, with auto-reconnect in case of connection dropouts.

The cloud computing approach affords a great deal of architectural flexibility; many operational modes are supported, from fully synchronous to fully autonomous. In the case of joystick operation of CYCLOPS, the mobile platform is in synchronous connection to the Com Server. However, this is not strictly required for other modes of operation. Minimally, constant connectivity is not required, as a temporary connection would be sufficient to upload from the mobile platform video and sensor data resulting from prior command sets, and to download to the mobile platform further operational commands (e.g., navigation commands), which would be executed autonomously. A fully "offline" autonomous operational capability, i.e., independent real-time onboard processing, is currently in development for CYCLOPS.
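The relay pattern of Section 3.1 can be made concrete with a short sketch: both the platform and the controller dial out to a well-known Com Server, which simply shuttles bytes between the two connections, so neither end needs the other's IP address. The port number, accept order, and byte-pump design are our assumptions for illustration; the paper does not publish the actual server protocol.

```python
# Minimal sketch of a Com Server relay: two outbound clients (robot and
# controller) connect to this well-known host; bytes are forwarded both ways.
import socket
import threading

def com_server(port: int = 9000) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(2)
    robot, _ = srv.accept()        # simplification: platform connects first
    controller, _ = srv.accept()   # then the front-end control computer

    def pump(src, dst):
        # Forward bytes one way until the source closes; two pumps running
        # at once give the full-duplex link the paper describes.
        while chunk := src.recv(4096):
            dst.sendall(chunk)

    threading.Thread(target=pump, args=(robot, controller), daemon=True).start()
    pump(controller, robot)
```

A real deployment would add the auto-reconnect behavior mentioned above; the sketch shows only why neither endpoint needs a publicly known IP address.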
3.2. Interconnectivity

An Internet TCP/IP connection is established between the CPU aboard the mobile platform testbed (via its wireless LAN) and the computer hosting the image processing and front-end control software via a Com Server. For the purpose of reliability, this connection is instantiated by the creation of a temporary AF_INET stream socket at the transport layer, utilizing a three-way handshake. Once this full-duplex transport layer is accomplished, the mobile platform is able to transmit video frames and other sensor data in a packetized and compressed format over the layer. The mobile platform also transmits its housekeeping data (battery level, hardware sensor data, etc.) and awaits sensor and drive commands from the front-end software that are triggered by the results of the real-time image processing of the video frames (e.g., via AVS).

3.3. Video and sensor data processing

The video and sensor data are treated similarly; however, the video data are first preprocessed into a suitable data format. This is accomplished by packetizing the video. Each non-interlaced stream frame of video data is compressed and inserted into the payload portion of a header packet, tagging the data as to type, length, timestamp, and sequence. This has the advantage over time-division multiplexing of allowing real-time synchronization to occur on the receiving end with minimal reconstruction processing. The network connection is thus used as a virtual N-receiver broadcast channel, each channel being a Q-ary data channel, providing the same mechanism for video, sensor, and hardware housekeeping data (e.g., battery charge levels).

Fig. 4 – CYCLOPS, an AWD remote-controllable robotic platform testbed with wireless Internet connectivity and a fully articulated (user-controllable) digital camera with wireless video link, as well as an IEEE 1394 navigation camera with wide-angle field of view. It is equipped with a general-purpose mini Unix workstation. CYCLOPS is powered by rechargeable batteries. Furthermore, CYCLOPS supports a sensor bay for additional sensors (e.g., infrared).

Fig. 5 – Principle of "cloud computing" [15,16]. A mobile platform connects to one or more known "Com Servers" on the Internet in lieu of a direct end-to-end connection with the control system. In this way, neither end need know the actual IP address of the other.
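The packet format of Section 3.3 (a compressed payload behind a header tagging type, length, timestamp, and sequence) can be sketched as follows. The exact field widths, byte order, and codec are our assumptions; the paper specifies which fields are tagged but not their layout.

```python
# Minimal sketch of frame/sensor packetization with a type-length-
# timestamp-sequence header; zlib stands in for the real frame codec.
import struct
import time
import zlib

HEADER = struct.Struct("!BIdI")  # type (1 B), length (4 B), timestamp (8 B), sequence (4 B)
T_VIDEO, T_SENSOR, T_HOUSEKEEPING = 1, 2, 3   # one mechanism for all three streams

def make_packet(ptype: int, payload: bytes, seq: int) -> bytes:
    body = zlib.compress(payload)
    return HEADER.pack(ptype, len(body), time.time(), seq) + body

def parse_packet(buf: bytes):
    ptype, length, ts, seq = HEADER.unpack_from(buf)
    payload = zlib.decompress(buf[HEADER.size:HEADER.size + length])
    return ptype, ts, seq, payload
```

Because every packet carries its own type, timestamp, and sequence number, the receiver can resynchronize video, sensor, and housekeeping streams directly, which is the advantage over time-division multiplexing noted above.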
3.4. Navigation commanding

Navigation and camera command transmittal to the mobile platform testbed is accomplished as follows. The platform utilizes two independent pairs of CPU-controlled electric motors, each using a 3-gear reduction module, providing a 4:1 reduction ratio for increased applied torque to the wheels and reduced power consumption [14]. Each pair of wheel motors is individually addressable via a command packet that specifies motor speed, direction, duration, and speed limiter. Transmittal of the command packet to the mobile platform is analogous to the preceding, as the existing transport layer is utilized in a full-duplex mode, albeit strictly sequenced for simplicity of processing and performing the commands in order. Commanding the onboard camera follows a similar procedure.

It should be pointed out that the current input command set (e.g., set of navigation commands) for the mobile platform differs in form and function from the video and sensor data, which are received. Input commands require comparatively few bytes for expression; thus a simpler, more expedient architecture is utilized. The input command set for the mobile platform forms a Strictly Ordered Command Pipeline (SOCP) set. Such sets form conditional pipeline branch maps, with sequencing precluding the need for individual command prioritization. For example, the mobile platform may be instructed to perform a certain overall movement, e.g., move to the other side of the room. This is translated into a SOCP set resembling a binary tree; it comprises individual robotic movements (turns, motor commands, etc.) to accomplish the overall goal of moving to the other side of the room. If any individual command in the SOCP set cannot be executed, that particular SOCP set is invalidated at that point, causing dependent command sets in the pipeline to be invalidated de facto, thus returning the mobile platform to a "known good" state.

3.5. User interactivity

For the purpose of commanding CYCLOPS interactively (later autonomously), the front-end software has an integrated video panel (Fig. 6) for displaying the transmitted video frames from the mobile platform's onboard camera (Fig. 4); it is also outfitted with a USB-based joystick device. The user, controlling the joystick, is in the loop for the purpose of developing automated software and algorithms to control CYCLOPS. Once developed, such automated software can be "plugged in" in lieu of the user for automatic control, with manual user override always available. The user's movements of the joystick are translated into camera orientation and wheel rotation commands, and are sent to the mobile platform. As the mobile platform begins to move, it also sends back video, sensor, and housekeeping data, which are displayed on the front-end. With this feedback information, a user (or automated control software for self-commanding) is able to control CYCLOPS interactively (or automatically) from anywhere in the world, in near real-time.

CYCLOPS uses only the pixelated camera images to move about an environment (e.g., a room/corridor with obstacles), thus more realistically emulating the visual perception of a blind subject. It processes and enhances the pixelated imagery to arrive at new motion and navigation commands, such as navigating a corridor while avoiding obstacles, and guideline following (Fig. 7).

Fig. 6 – CYCLOPS Commanding Interface, controlling the AWD remote robotic platform testbed in near real-time. The interface displays the current status of CYCLOPS, including battery charge levels, heading, velocity, obstacle proximity, and the high-resolution gimbaled camera view.
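The failure semantics of the Strictly Ordered Command Pipeline in Section 3.4 (strict sequencing, with the first failed command invalidating the remainder of the set and leaving the platform at its last known-good state) can be sketched briefly; the Command type and the example command names are our illustrative stand-ins, not the platform's actual command packets.

```python
# Minimal sketch of SOCP execution: run commands strictly in order and
# invalidate the rest of the set at the first failure.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Command:
    name: str                      # e.g., "turn_left_90", "drive_forward_2m"
    execute: Callable[[], bool]    # returns False if the move cannot be made

def run_socp(commands: List[Command]) -> List[str]:
    completed = []
    for cmd in commands:
        if not cmd.execute():
            # This command and all dependent commands are invalidated;
            # the platform remains at its last "known good" state.
            break
        completed.append(cmd.name)
    return completed
```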
Fig. 7 – Navigation camera view of CYCLOPS at different visual resolutions (i.e., degrees of pixelation), mimicking the view afforded by artificial vision prostheses. Each column from top to bottom: 64 × 64, 32 × 32, 16 × 16, 8 × 8. Left column shows navigating a corridor while avoiding an obstacle (i.e., a chair). Right column shows the following of a high-contrast guideline on the floor of a corridor.

4. Conclusion

CYCLOPS enables subject-independent testing, evaluation, and validation of image processing and autonomous navigation systems (Figs. 2 and 3) with respect to the utility and efficiency of supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

It is difficult to predict exactly what a blind subject with a camera-driven visual prosthesis may be able to perceive. Therefore, it is advantageous to offer a wide variety of image processing modules and the capability and flexibility for repeated application of these modules in a user-defined order. AVS [11,12], in particular, comprises numerous efficient image processing modules, such as pixelation, contrast/brightness enhancement, grayscale equalization for luminance control under severe contrast/brightness conditions, grayscale levels for reduction of the data volume transmitted to the visual prosthesis, blur algorithms, and edge detection (e.g., [17,18]).

With the development of CYCLOPS it is now possible to determine empirically, in the absence of a blind subject, which particular sequences of image processing modules may work best for the blind subject in real-world scenarios (e.g., Fig. 7). One of the goals is to get CYCLOPS to "behave" similarly to a blind subject (especially motion-wise) by developing, implementing, testing, and fine-tuning/customizing onboard algorithms for image processing and analysis as well as auto-navigation. Once a certain degree of similarity in a behavioral pattern is achieved, such as navigating safely through a corridor with obstacles or guideline following (e.g., Fig. 7), the underlying image processing and analysis algorithms, as well as the sequences of image processing modules that enabled this successful behavior, may be used to establish a practical initial configuration for blind subjects when implemented in their respective visual prosthesis systems. Furthermore, testing with CYCLOPS may contribute to improving the design of environments that provide suitable access for the blind (e.g., rooms, corridors, entrances) by choosing those in which CYCLOPS performed best.

Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the fields of artificial vision systems and machine vision. We have provided a commanding interface that allows the research community to easily interface their respective image processing and autonomous navigation software packages to CYCLOPS by merely using high-level commands, such as "turn right by 25 degrees" or "move forward one meter". Additionally, we have provided numerous interfaces for onboard cameras (Ethernet, IEEE 1394, USB). The direction and orientation of the gimbaled camera can be user-controlled, allowing for the emulation of head/eye motion of a blind subject wearing an artificial vision prosthesis. The onboard real-time voice synthesizer can be used as a means to communicate audio cues (e.g., "Door 2 meters ahead.") resulting from autonomous navigation and obstacle recognition/avoidance systems (e.g., [19]).

Researchers can interface their software packages either by remotely issuing high-level commands over the Internet, or by integrating and running their software packages locally on the onboard Unix workstation, thereby bypassing the Internet for command transmittal. Regardless, researchers will be able to monitor remotely the actions and camera views of CYCLOPS via its commanding interface (Fig. 6).
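As a concrete illustration of the high-level remote commanding described above, a research client might look like the following sketch. The host name, port, and newline-terminated text protocol are assumptions made for illustration; the published interface is the commanding front-end shown in Fig. 6, not this exact wire format.

```python
# Minimal sketch of issuing high-level commands to CYCLOPS over the Internet.
import socket

def send_command(text: str, host: str = "comserver.example.org", port: int = 9000) -> str:
    """Send one newline-terminated command and return the status reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(text.encode("utf-8") + b"\n")
        return sock.recv(1024).decode("utf-8")

# send_command("turn right by 25 degrees")
# send_command("move forward one meter")
```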
CYCLOPS is directly and immediately applicable to any (artificial) vision-providing system that is based on an imaging modality (e.g., video cameras, infrared sensors, sound, ultrasound, microwave, radar, etc.) as the first step in the generation of visual perception.

Conflict of interest statement

Fink and Tarbell may have proprietary interest in the technology presented here, as a provisional patent has been filed on behalf of the California Institute of Technology.

Acknowledgment

The work described in this publication was carried out at the California Institute of Technology under support of National Science Foundation Grant EEC-0310723.

References

[1] W. Liu, M.S. Humayun, Retinal prosthesis, in: IEEE International Solid-State Circuits Conference Digest of Technical Papers, 2004, pp. 218–219.
[2] E. Zrenner, K.-D. Miliczek, V.P. Gabel, H.G. Graf, E. Guenther, H. Haemmerle, B. Hoefflinger, K. Kohler, W. Nisch, M. Schubert, A. Stett, S. Weiss, The development of subretinal microphotodiodes for replacement of degenerated photoreceptors, Ophthalmic Res. 29 (1997) 269–328.
[3] J.F. Rizzo, J.L. Wyatt, Prospects for a visual prosthesis, Neuroscientist 3 (1997) 251–262.
[4] E. Zrenner, Will retinal implants restore vision? Science 295 (2002) 1022–1025.
[5] M.S. Humayun, J. Weiland, G. Fujii, R.J. Greenberg, R. Williamson, J. Little, B. Mech, V. Cimmarusti, G. van Boemel, G. Dagnelie, E. de Juan Jr., Visual perception in a blind subject with a chronic microelectronic retinal prosthesis, Vision Res. 43 (2003) 2573–2581.
[6] S.C. DeMarco, The architecture, design, and electromagnetic and thermal modeling of a retinal prosthesis to benefit the visually impaired, PhD Thesis, North Carolina State University, 2001.
[7] P.R. Singh, W. Liu, M. Sivaprakasam, M.S. Humayun, J.D. Weiland, A matched biphasic microstimulator for an implantable retinal prosthetic device, in: Proceedings of the IEEE International Symposium on Circuits and Systems, vol. 4, 2004.
[8] J.D. Weiland, W. Fink, M. Humayun, W. Liu, D.C. Rodger, Y.C. Tai, M. Tarbell, Progress towards a high-resolution retinal prosthesis, Conf. Proc. IEEE Eng. Med. Biol. Soc. 7 (2005) 7373–7375.
[9] J.D. Weiland, W. Fink, M.S. Humayun, W. Liu, W. Li, M. Sivaprakasam, Y.C. Tai, M.A. Tarbell, System design of a high-resolution retinal prosthesis, Conf. Proc. IEEE IEDM (2008), doi:10.1109/IEDM.2008.4796682.
[10] C.-Q. Zhou, X.-Y. Chai, K.-J. Wu, C. Tao, Q. Ren, In vivo evaluation of implantable micro-camera for visual prosthesis, Invest. Ophthalmol. Vis. Sci. 48 (2007) 668 (E-Abstract).
[11] W. Fink, M. Tarbell, Artificial Vision Simulator (AVS) for enhancing and optimizing visual perception of retinal implant carriers, Invest. Ophthalmol. Vis. Sci. 46 (2005) 1145 (E-Abstract).
[12] W. Liu, W. Fink, M. Tarbell, M. Sivaprakasam, Image processing and interface for retinal visual prostheses, in: ISCAS 2005 Conference Proceedings, vol. 3, 2005, pp. 2927–2930.
[13] M.A. Tarbell, W. Fink, CYCLOPS: a mobile robotic platform for testing and validating image processing algorithms in support of visual prostheses, Invest. Ophthalmol. Vis. Sci. 50 (2009) 4218 (E-Abstract).
[14] Robosoft, http://www.robosoft.fr/eng/.
[15] R. Chellappa, Cloud computing: emerging paradigm for computing, INFORMS, 1997.
[16] B. Hayes, Cloud computing, Commun. ACM 51 (2008).
[17] J.C. Russ, The Image Processing Handbook, CRC Press, 2002.
[18] H.R. Myler, A.R. Weeks, The Pocket Handbook of Image Processing Algorithms in C, Prentice Hall PTR, 1993.
[19] W. Fink, M. Tarbell, J. Weiland, M. Humayun, DORA: digital object recognition audio-assistant for the visually impaired, Invest. Ophthalmol. Vis. Sci. 45 (2004) 4201 (E-Abstract).
