Building the SMoRG Lab
A detailed description of my work building the SensoriMotor Research Group neurophysiology recording lab.


Transcript

  • 1. 2. THE SMORG NEUROPHYSIOLOGY LABORATORY. Introduction. The SensoriMotor Research Group (SMoRG) was founded at Arizona State University in 2006 to investigate sensorimotor learning and representations in the nervous system, as well as the neural mechanisms that enable fine motor skills. At its inception, total SMoRG assets included people, ideas and a profoundly empty laboratory space in which to combine them to produce meaningful science. This chapter will describe the process of developing the neural recording laboratory, where the experimental work of this manuscript was accomplished. A description of this work is fitting because it featured significant technical accomplishments, produced a novel experimental facility and required a sustained effort of more than two years to complete. The overall goal was clear, even if the path to achieve it was not: develop an experimental facility that included a robot arm, 3D motion capture, virtual reality simulation, a cortical neural recording system and custom software to integrate it all. Robot Arm. Installation. A six-axis industrial robot (model VS-6556G, Denso Robotics, Long Beach, CA, USA) was acquired for object presentation during behavioral experimental tasks (Figure 8(a)). The very first task required fabrication of a platform on which to mount the robot in a secure yet mobile way. A space frame cube was assembled from extruded aluminum segments (1530 T-slotted series, 80/20® Inc., Columbia City, IN, USA) with bolted corner gussets for maximum structural integrity. The top and bottom faces of the cube were covered with a single piece of 0.25 in. thick plate steel to which the base of the robot was attached with stainless steel bolts. The entire robot platform was supported by swivel joint mounting feet at the corners and rested on a 0.5 in. thick rubber pad to dampen the vibration and inertial loads resulting from robot movement.
  • 2. Figure 8. Robot and Associated Hardware. A. The 6-axis industrial robot was mounted on a sturdy platform and controlled using custom software. Dedicated signal and air channels routed through the robot enabled feedback from a 6-DOF F/T sensor, object touch sensors and control of a pneumatic tool changer. B. The robot end effector. The F/T sensor (b2) was mounted directly to the robot end effector (b1). The master plate of the tool changer (b3) was mounted to the F/T sensor using a custom interface plate. Air lines originating from ports on the robot controlled the locking mechanism of the master plate. C. Grasp object assembly. The object was mounted to a six-inch standoff post that attached to a tool plate. Touch sensors were mounted flush with the object surface and wires were routed to the object interior for protection. Power and signal lines were routed through a pass-through connector (not visible), through the robot interior to an external connector on the robot base. Small felt discs on each sensor were used for grasp training. The large flange extending from the bottom of the object was a temporary training aid to guide the subject's hand to the correct location.
  • 3. [Figure 8: image slide showing panels A (robot platform), B (end effector with F/T sensor and tool changer master plate) and C (grasp object assembly); see caption on the previous slide.]
  • 4. Programming. The robot included a tethered teach pendant interface device through which basic operation of the robot could be accomplished either through direct control of a specific axis, or by executing a script written in the PAL programming language. The behavioral task planned for our experiments required real-time programmatic control of robot actions, which necessitated the development of custom software routines using a software development kit (SDK) provided by the manufacturer (ORiN-II, Denso Robotics). The routines implemented basic movement commands to pre-defined poses in the working space. Pose coordinates (position, rotation, pose type) were determined by manually driving the robot to a desired pose using the teach pendant, then using motor encoder data to read back the actual coordinates. Programmatic control included the ability to select the desired pose, speed, acceleration and other secondary movement parameters. For selected operations involving a stereotyped sequence of basic movements (retrieving or replacing grasp objects), individual commands were grouped into compound movements to simplify user programming and operation. The custom software routines were developed in the C++ programming language and compiled into a library of functions accessed by code interface modules in the LabVIEW® graphical programming environment (National Instruments Corporation, Austin, TX, USA). An intuitive graphical user interface (GUI) was developed in LabVIEW®, allowing the user to easily operate the robot from a computer connected to the robot controller through the local network. Safety Measures. Errors in robot operation were capable of causing considerable damage to the experimental setup, including the robot itself. To mitigate this possibility, robot control programs developed in LabVIEW® actively monitored force and torque data acquired from a 6-axis force/torque (F/T) sensor (Mini85, ATI Industrial Automation, Inc., Apex, NC, USA) mounted on the robot end effector (Figure 8(b)). Maximum force and torque limits for each movement were specified, tailored to purely inertial loads during movement or to direct loading during object retrieval, replacement and behavioral manipulation. The robot was immediately halted if these limits were exceeded.
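The actual routines were written in C++ against the ORiN-II SDK and wrapped for LabVIEW®, and are not reproduced here. The following Python sketch only illustrates the general control pattern just described: named, pre-taught poses issued with per-movement speed, acceleration and force/torque limits. All names (Pose, MoveCommand, RobotController) and the numeric values are hypothetical, not the vendor API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the C++/ORiN-II routines described in the text.

@dataclass
class Pose:
    """A pre-taught pose, read back from the motor encoders via the teach pendant."""
    name: str
    position: tuple      # (x, y, z) in mm
    rotation: tuple      # (rx, ry, rz) in degrees
    pose_type: int       # arm configuration flag

@dataclass
class MoveCommand:
    """One basic movement: a target pose plus secondary movement parameters."""
    pose: Pose
    speed: float          # percent of maximum
    acceleration: float   # percent of maximum
    max_force_n: float    # halt threshold for the F/T sensor during this move
    max_torque_nm: float

class RobotController:
    """Minimal sketch of the pose-table-plus-limits control pattern."""

    def __init__(self, poses):
        self.poses = {p.name: p for p in poses}

    def move_to(self, cmd, read_ft_sensor):
        # In the real system the move is issued through the vendor SDK while the
        # LabVIEW program polls the F/T sensor concurrently; here we only show
        # the limit check that triggers an immediate halt.
        print(f"moving to {cmd.pose.name} at speed={cmd.speed}%")
        force, torque = read_ft_sensor()
        if force > cmd.max_force_n or torque > cmd.max_torque_nm:
            raise RuntimeError("F/T limit exceeded: robot halted")

# Example: a pose taught by driving the robot with the teach pendant.
home = Pose("home", (400.0, 0.0, 300.0), (180.0, 0.0, 180.0), pose_type=1)
robot = RobotController([home])
robot.move_to(MoveCommand(home, speed=20, acceleration=10,
                          max_force_n=30.0, max_torque_nm=5.0),
              read_ft_sensor=lambda: (2.1, 0.3))   # simulated quiet sensor
```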
  • 5. The F/T sensor also added the capability of monitoring the kinetics of object manipulation for scientific analysis. Tool Changer. A pneumatic tool changer (QC-11, ATI Industrial Automation Inc.) was the final element of the overall robot system (Figure 8(b)). This enabled the robot to retrieve presentation objects from the tool holder mounted to the side of the robot platform. A master plate was mounted directly to the force/torque sensor via a custom interface plate and connected to compressed air lines, which operated the locking mechanism. The air lines connected directly to dedicated air channels routed through the interior of the robot, which were supplied by a gas cylinder mounted nearby. Internal solenoid valves in the robot were controlled programmatically (via LabVIEW®) to operate the tool changer during object retrieval and replacement. A tool plate was attached to each grasp object to interface with the master plate. Each tool plate was fitted with four mounting posts that aligned the tool in the object holder for reliable and repeatable object retrieval. Grasp Objects. Object Design. Grasp objects were designed to elicit in the experimental subject a variety of hand postures in order to investigate the sensory feedback resulting from each. Initially, up to seven different objects were envisioned, including simple polygonal shapes (cylinder, rectangular polygon, convex polygon and concave polygon) as well as objects requiring specific manipulation (e.g., squeezing, pulling, etc.) for successful task completion. The behavioral task used for the research described in this manuscript required just two objects: small and large versions of a modified cylinder design. An early version of the small object used for training is shown in Figure 8(c).
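Returning to the tool changer described above: the "retrieve object" compound movement grouped basic moves with solenoid valve commands. The Python sketch below is a schematic reconstruction of how such a sequence might be composed; the pose names, the set_tool_changer_valve function and the exact step order are assumptions, not the documented LabVIEW/C++ sequence.

```python
import time

def move_to(pose_name, speed_pct):
    """Placeholder for a basic movement command to a pre-taught pose."""
    print(f"move to '{pose_name}' at {speed_pct}% speed")

def set_tool_changer_valve(locked):
    """Placeholder for the internal solenoid valve command that locks or unlocks
    the QC-11 master plate (driven programmatically via LabVIEW in the real setup)."""
    print("tool changer", "LOCKED" if locked else "UNLOCKED")

def retrieve_object(object_slot):
    """Hypothetical 'retrieve object' compound movement built from basic steps."""
    move_to(f"approach_slot_{object_slot}", speed_pct=20)
    set_tool_changer_valve(locked=False)                  # open the master plate
    move_to(f"engage_slot_{object_slot}", speed_pct=5)    # slow, precise engage
    set_tool_changer_valve(locked=True)                   # clamp the object's tool plate
    time.sleep(0.2)                                       # allow the pneumatics to seat
    move_to("presentation_pose", speed_pct=25)

retrieve_object(object_slot=1)
```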
  • 6. Initially, grasp objects were machined out of solid polymer materials such as polytetrafluoroethylene (Teflon®, DuPont) or polyacetal (Delrin®, DuPont). However, fabrication quickly shifted to stereolithography (rapid prototyping) techniques to speed production and reduce cost during numerous design iterations. The resulting prototype objects proved to be sufficiently robust to withstand the rigors of repeated use. The modified cylinder design was developed primarily in response to the need to register precise finger placement during grasping using surface-mounted resistive touch sensors (TouchMini v1.2, Infusion Systems, Montreal, Quebec, Canada). Simple cylindrical designs could not balance the size of the object (cylinder diameter, which drove hand aperture) with the need for a relatively planar surface on which to attach the touch sensors. Mounting the flexible sensors on a curved surface would have introduced an undesired bias into the output, which was modulated by deformation or bending of the sensor. The solution was to essentially unfold the surface of a cylinder into an extended surface whose center portion was curved to accept the palm of the hand, while the peripheral portions merged into a relatively flat surface. These complex shapes were perfectly suited to the stereolithography process, and had the added benefit of opening up additional space in the interior of the object that was used to route and protect delicate electrical connections from wear and tear. Touch Sensors. Thin (0.04 in., 1 mm), circular (∅0.75 in., 19 mm) resistive touch sensors were glued directly to the outer surface of the object in shallow indentations that perfectly matched the thickness and diameter of the sensor. This prevented the monkeys from picking at the edges since the surface appeared to be uniform except for a slight change in texture. At the center of each indentation was a deeper well that permitted further indentation of the flexible sensor, which increased the magnitude and reliability of the output in comparison to mounting on a flat surface. Wires were immediately routed inside of the object for protection. Sensors were placed at locations where the distal phalange of the thumb, index and middle fingers contacted the
  • 7. object surface when a prototype version was pressed into the hand of the first monkey to be trained in the behavioral task (monkey F). Electrical connections were routed to a 10-pin pass-through connector on the tool plate that mated with a corresponding connector when an object was retrieved by the tool changer. Signals were routed through dedicated lines inside the robot and emerged at a master connector on the robot base. From here, the signals were routed to the behavioral control software (LabVIEW®) and actively monitored to indicate successful object grasping. Grasp Training. The F/T sensor and touch sensors were excellent tools for training monkeys to grasp the objects in a specific and repeatable way. The desired interaction was a precision grip in which the distal phalange of the thumb, index and middle digits contacted the object at the location of the sensors and maintained simultaneous supra-threshold contact for at least 250 ms. Basic Interaction. The first training stage was to establish the connection between the object and reward. Touch sensor feedback was not used during this stage. Instead, feedback from the F/T sensor was used to register physical interaction with an object presented directly ahead of the monkey. Any contact with the object was immediately rewarded with several drops of juice. Initially, these interactions often involved slapping or scratching the object. This behavior was steadily eliminated by withholding reward (and playing an audible cue) when such actions resulted in excessive force or torque levels. The basic interaction training stage was complete when the monkey had learned to consistently place its hand on the object without exceeding force and torque thresholds. Fine Tuning. This stage involved training the monkey to place the thumb, index and middle digits directly on the touch sensors. F/T feedback was not used to register successful interaction but only to detect excessive force applied to the object. In this case, the audible
  • 8. cue was played, the object was withdrawn from the workspace and no reward was given. Small felt discs approximately 2 mm in height were attached to each touch sensor to attract the monkey's attention during haptic exploration of the object. Initially, brief (10 ms) contact with any of the three sensors was sufficient to earn a juice reward. Next, brief simultaneous contact with any two sensors was sufficient; then, finally, contact with all three sensors was required to earn the juice reward. The final step was to steadily increase the required grasp duration to 250 ms. Motion Capture. Our experiments required that the 3D position and orientation of the subject's hand be captured at all times. This information served two primary functions. First, it was used to animate the motion of hand and object models in a virtual reality simulation in which subjects would eventually be trained to carry out the behavioral task. Second, the data were used to reconstruct the kinematics of hand movement during the task, which could be correlated with simultaneously recorded neural activity. Kinematic analysis of movement is a technically challenging undertaking, especially for the hand, and even more so for the child-sized hand of the juvenile macaques used in this research. Detailed reconstructions require tracking the orientation of individual digit segments (implying two markers per segment) with millimeter precision. Markers attached to the segments are often occluded by the movement of adjacent digits or by intervening experimental apparatus. Active markers require power and signal connections, which quickly becomes a logistical challenge of routing wires and connections while minimizing the impact on the underlying behavioral task. Approach #1: Passive Marker Motion Capture. The first approach was to implement a camera-based motion capture system using passive detection markers (Vitrius, Tenetec Innovations AG, Zürich, Switzerland).
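The grasp criterion described above (simultaneous supra-threshold contact on all three touch sensors, held for at least 250 ms) is easy to express in code. The actual logic ran in LabVIEW®; the Python sketch below is only illustrative, and the threshold value and the sensor-polling function are assumptions.

```python
import time

CONTACT_THRESHOLD = 0.5   # assumed normalized sensor reading that counts as "contact"

def grasp_detected(read_sensors, required_sensors=3, hold_s=0.250):
    """Return True once `required_sensors` touch sensors are simultaneously above
    threshold and stay that way for `hold_s` seconds.

    During training the criterion was relaxed: first any single sensor, then any
    two, then all three, with the hold time increased gradually toward 250 ms.
    """
    hold_start = None
    while True:
        readings = read_sensors()             # e.g. (thumb, index, middle)
        n_contact = sum(r > CONTACT_THRESHOLD for r in readings)
        if n_contact >= required_sensors:
            if hold_start is None:
                hold_start = time.monotonic()
            elif time.monotonic() - hold_start >= hold_s:
                return True
        else:
            hold_start = None                 # contact broken; restart the timer
        time.sleep(0.001)                     # ~1 kHz polling

# Example with a simulated hand already in full three-finger contact:
print(grasp_detected(lambda: (0.8, 0.9, 0.7)))   # True after ~250 ms
```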
  • 9. In theory, this approach offered several advantages for mitigating the challenges of motion capture described above. First, passive markers required no power or signal lines, thus eliminating a significant degree of logistical complexity and increasing reliability. Second, the Vitrius system was predicated on a unique approach that promised to dramatically reduce the number of cameras and markers required for high-precision motion capture: 3D position determination with just one camera and one marker. All other known camera-based motion capture systems ultimately derive 3D position from an estimation of the parallax between two distributed observations of a point in space. By contrast, the Vitrius system calculated position by estimating the linear distance between the camera focal plane and a flat, square marker of known size. The relationship was simple: the smaller the marker's focal plane representation (pixel area), the further its distance along a ray extending from the center of the detected area. The trajectory of the ray itself was determined by the optical properties of the lens and the orientation of the camera. A unique pixelated pattern on each marker was used for identification. An example of a Vitrius marker is shown in Figure 9(d), where several individual markers have been attached to the faces of a cube-shaped base. Numerous shortcomings of the Vitrius system quickly became apparent. At best, position accuracy was 10-20 mm, an order of magnitude worse than the required value. Camera resolution was insufficient to adequately capture the small (3 mm²) markers required for the monkey hand. When markers were viewed by the cameras at any angle other than perpendicular to the focal plane, the apparent decrease in detected marker area resulted in an accompanying over-estimation of marker distance. The system had no integrated calibration procedure, requiring the user to manually measure the position and orientation of each camera. Finally, the Vitrius software was poorly designed and implemented, resulting in frequent system crashes and loss of data.
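The single-camera ranging principle, and the bias that appears when a flat marker is viewed off-axis, can be summarized with the pinhole-camera relation: the apparent side length of a square marker scales as (focal length × marker size) / distance, so distance can be recovered from the detected pixel area. The sketch below is a generic reconstruction of that idea, not the Vitrius implementation; the focal length value and function names are assumptions.

```python
import math

def distance_from_area(pixel_area, marker_size_mm, focal_length_px):
    """Estimate distance along the viewing ray from the detected pixel area of a
    square marker of known physical size (pinhole camera model)."""
    side_px = math.sqrt(pixel_area)                  # apparent side length in pixels
    return focal_length_px * marker_size_mm / side_px

# A marker seen head-on vs. tilted by 60 degrees: the tilted marker projects onto
# a smaller area, so its distance is over-estimated (the failure mode observed
# with the Vitrius system).
f_px, size_mm, true_dist_mm = 1500.0, 3.0, 400.0
area_head_on = (f_px * size_mm / true_dist_mm) ** 2
area_tilted = area_head_on * math.cos(math.radians(60))

print(distance_from_area(area_head_on, size_mm, f_px))   # ~400 mm
print(distance_from_area(area_tilted, size_mm, f_px))    # ~566 mm (biased long)
```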
  • 10. Approach #2: Data Gloves. Given the substantial shortcomings of the Vitrius system, effort quickly shifted to developing non-camera-based methods for capturing hand posture. One option was to adapt a data glove (Figure 9(a)) for use on a monkey hand. Developed for applications such as virtual reality simulations, video gaming and animation, data gloves are outfitted with an array of sensors to capture hand motion and return real-time joint angle data. Gloves normally include up to three resistive bend sensors per digit (spanning each joint), abduction/adduction sensors between digits and, in some cases, palm bend sensors and wrist angle sensors. Typically, these systems do not measure 3D position, requiring the addition of a motion tracking system to the dorsal surface of the hand or forearm. The advantages of this approach were promising, yet significant challenges remained. First, the lack of position and orientation sensing meant that motion tracking could not be abandoned completely. Second, the cost of a typical data glove was prohibitive, especially considering the potential wear and tear when used in non-human primate research. Nor would any manufacturer even consider the possibility of customizing the glove for the monkey hand. Finally, a glove covering the hand would interfere with the basic research goal of investigating cutaneous feedback during reaching and grasping. The first solution was to customize an inexpensive data glove in-house, combined with Vitrius motion capture for position and orientation sensing (Figure 9(b)). This approach combined the benefits of a data glove while minimizing the use of the Vitrius system. Bend sensors were removed from the original glove and reassembled into the new Monkey Glove, where they were restrained within an inner pocket on the dorsal aspect of each digit. Electronics (wires, circuit boards) were encased in epoxy for protection and sewn into a pocket on the hand dorsum. An array of three Vitrius markers was also mounted on the hand dorsum to track the orientation of the hand. Initially, the glove fingers were attached to the digits using only narrow
  • 11. loops of fabric at the intermediate and distal phalangeal joints in order to expose the skin that would come into contact with the grasp objects. Eventually, however, the entire finger was removed from the glove and bend sensors were held loosely in place using thin plastic fasteners at the aforementioned joints. Numerous refinements to this approach were devised, including the addition of a wireless transmitter and a 2-axis accelerometer for measuring pitch and roll (Figure 9(c)). Despite these improvements, numerous problems plagued this approach. The Vitrius system was still required for position tracking, and the estimation of finger posture from a single bend sensor was inaccurate and unreliable. A strategy was developed to utilize larger (5 mm²) Vitrius markers attached to the faces of finger-mounted cubes to improve camera visibility (Figure 9(d)). This approach used fewer markers but was able to capture only crude measures of hand posture.
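As noted above, estimating finger posture from a single resistive bend sensor was unreliable; a common approach is a simple per-sensor linear calibration between readings taken at full extension and full flexion. The sketch below shows that kind of two-point calibration. The reading values are invented for illustration, and this is not the glove's actual firmware or driver.

```python
def make_bend_calibration(raw_extended, raw_flexed, angle_flexed_deg=90.0):
    """Two-point linear calibration mapping a raw bend-sensor reading to an
    approximate joint flexion angle (0 deg = fully extended)."""
    span = raw_flexed - raw_extended
    def to_angle(raw):
        fraction = (raw - raw_extended) / span
        return max(0.0, min(1.0, fraction)) * angle_flexed_deg
    return to_angle

# Hypothetical 10-bit ADC readings recorded during two calibration postures.
index_pip = make_bend_calibration(raw_extended=180, raw_flexed=730)
print(index_pip(455))   # ~45 degrees of flexion
```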
  • 12. Figure 9. Approaches to Hand Posture Measurement. A. Commercially available data gloves feature numerous integrated bend sensors to capture the posture of the digits and palm, but were prohibitively expensive and difficult to customize to the monkey hand. B. An early prototype of the custom Monkey Glove. Bend sensors and electronics were removed from a gaming glove and reconfigured to fit the monkey hand. C. A wireless version of the Monkey Glove with rechargeable battery, accelerometer and transmitter encased in protective epoxy. D. An alternative strategy for passive motion capture. Cube markers with finger attachment clips were developed to utilize larger markers for improved camera visibility. A smaller set of these markers captured only crude measures of hand posture.
  • 13. [Figure 9: image slide showing panels A through D; see caption on the previous slide.]
  • 14. Approach #3: Active Marker Motion Capture. Ultimately, the solution to the motion capture dilemma was to implement an active marker motion capture system (Impulse System, PhaseSpace Inc., San Leandro, CA, USA). The Impulse system used active LED markers and eight cameras equipped with linear sensors, each with a digitally-enhanced effective resolution of 900 megapixels, to capture marker positions at frame rates up to 480 Hz. A robust calibration routine used a linear wand outfitted with several active markers to define the capture space by systematically sweeping the wand through the field of view of the cameras. A single marker was glued directly to the nail of each digit and an array of three markers was placed on the hand dorsum for tracking the position and orientation of the overall hand. Each finger marker and the dorsal array were encased in epoxy for protection, and a single wire was routed along the arm to a nearby device that transmitted data wirelessly to a server computer outside the testing room. Data were acquired in real time through a network interface using custom software developed in C++ using an SDK from the system manufacturer. Data were simultaneously saved to a file for later analysis and routed to the virtual reality simulation, running on a stand-alone computer, to animate the position and posture of a virtual hand model. The data required no filtering and no perceptible time lag was observed due to network transmission delays. Virtual Reality Simulation. Simulation Hardware. The virtual reality simulation provided all visual cues to the monkeys during the behavioral task. It was displayed on a flat-screen 3D monitor (SeeReal Technologies S.A., Luxembourg) mounted horizontally and directly above the seating area. Subjects were not required to wear anaglyphic glasses. The monitor generated a 3D screen image by vertically interlacing distinct left and right eye images, then projecting each through a beam splitter to the appropriate eye. This system did require the subject to maintain position in a sweet spot to produce the optimal 3D effect, which was easily accomplished since the subject's head
  • 15. was restrained throughout the course of an experiment. A mirror was located four inches in front of the monkey at a 45° angle to reflect the screen image from the monitor. The mirror extended down to approximately chin level, allowing the subject to move the arm freely in the workspace while at the same time hiding it from view. The simulation was generated using a dedicated computer to ensure that the computational load did not affect the operation of the Master Control Program (MCP), which was implemented in LabVIEW® on a separate computer. The MCP continuously read current motion capture data from the network, computed kinematic parameters, then transmitted them to the simulation computer (via User Datagram Protocol, UDP), which used the parameters to animate a virtual hand model. Virtual Modeling. The software implementation used a software toolkit (Vizard, WorldViz LLC, Santa Barbara, CA, USA) based on the Python programming language (Python Software Foundation, DE, USA). The virtual hand model was a fully articulated (all digit joints) human hand included with the software toolkit. Animated degrees of freedom included 3D position, 3-axis rotation and grasp aperture (all digits). To animate the grasping motion, the rotation angle of all digit joints was scaled according to the current aperture estimate to produce a realistic representation of grasping movement. Virtual models of the grasp objects were generated from the same CAD models used to design and fabricate the physical objects with stereolithography. CAD models were simply converted to the Virtual Reality Modeling Language (VRML) format and imported into the virtual environment, resulting in exact representations of the original objects. Virtual grasp objects were located in the simulation environment in correspondence with physical objects presented in the workspace. That is, the transformation from camera coordinates (millimeters, origin at task start position) to simulation coordinates (non-dimensional) was tuned so that when the subject made contact with a physical object in the workspace, the virtual hand intersected the virtual object in the simulation.
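A minimal sketch of the data path just described: marker data arrive from the motion capture server, kinematic parameters are converted from camera coordinates (millimeters, origin at the task start position) into the simulation's non-dimensional coordinates, and the result is forwarded to the simulation computer as a UDP message. The real MCP was a LabVIEW® application and the simulation side was Vizard/Python; the scale factor, offset, port, message format and helper names below are all assumptions.

```python
import json
import socket

SIM_ADDRESS = ("192.168.0.10", 5005)   # assumed address of the simulation computer
CAM_TO_SIM_SCALE = 0.004               # assumed mm -> simulation-unit scale factor
CAM_TO_SIM_OFFSET = (0.0, 0.8, -0.2)   # assumed origin shift, tuned so that contact
                                       # with the physical object coincides with the
                                       # virtual hand/object intersection

def camera_to_sim(xyz_mm):
    """Map a camera-space position (mm) into simulation coordinates."""
    return tuple(c * CAM_TO_SIM_SCALE + o for c, o in zip(xyz_mm, CAM_TO_SIM_OFFSET))

def send_hand_state(sock, wrist_mm, rotation_deg, aperture):
    """Send one frame of kinematic parameters to the simulation over UDP."""
    msg = {
        "position": camera_to_sim(wrist_mm),
        "rotation": rotation_deg,      # 3-axis rotation of the dorsal marker array
        "aperture": aperture,          # 0 (closed) .. 1 (open); scales digit joint angles
    }
    sock.sendto(json.dumps(msg).encode(), SIM_ADDRESS)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_hand_state(sock, wrist_mm=(120.0, 35.0, 210.0),
                rotation_deg=(10.0, -5.0, 90.0), aperture=0.6)
```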
  • 16. Individual trials of the behavioral task began when a subject placed its right hand on a 4-in. square hold pad located at mid-abdominal height. The virtual model of the hold pad was a simple flattened cube displayed in a position corresponding to the physical hold pad. Contact with the hold pad was monitored by a single touch sensor, identical to those used to sense finger placement on the physical objects. Virtual Task Training. Training in the virtual task required subjects to carry out the physical task, learned previously in full sight and with the aid of the F/T sensor and surface-mounted touch sensors, using cues only from the virtual environment. In actuality, this was a combined physical/virtual task with two variants. In the physical variant, an object was presented in the workspace, while in the virtual variant no object was presented. In both task variants, the appearance of the grasp objects was used as a training aid. At the start of a trial, only the virtual hand and hold pad were displayed, the latter in red. Hand contact with the physical hold pad caused the virtual hold pad to turn green. After the required hold period, the virtual hold pad was removed, followed by presentation of a physical object in the workspace. The corresponding virtual object was initially displayed in white but turned green whenever the subject physically interacted with the object in the workspace. Interaction was again determined by either the F/T sensor or the touch sensors. These visual cues facilitated the training process for the virtual task and remained in place throughout the course of experimentation. In the virtual task variant, collision of the virtual hand and object models (detected by the simulation software) triggered the change in color.
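The trial sequence described above maps naturally onto a small state machine: hold pad shown red, turned green on contact, removed after the hold period, object presented, virtual object shown white and turned green on physical contact (or on virtual collision in the virtual variant). The Python sketch below is a schematic reconstruction; the state names and event interface are assumptions, not the MCP's actual LabVIEW implementation.

```python
from enum import Enum, auto

class TrialState(Enum):
    WAIT_FOR_HOLD = auto()      # hold pad displayed in red
    HOLDING = auto()            # hold pad green, hold period running
    OBJECT_PRESENTED = auto()   # hold pad removed, object shown in white
    GRASPED = auto()            # object green, grasp criterion met

def step(state, events):
    """Advance the trial given the set of event flags for this control cycle."""
    if state is TrialState.WAIT_FOR_HOLD and "hold_pad_contact" in events:
        return TrialState.HOLDING            # virtual hold pad turns green
    if state is TrialState.HOLDING and "hold_period_elapsed" in events:
        return TrialState.OBJECT_PRESENTED   # robot presents object (physical variant)
    if state is TrialState.OBJECT_PRESENTED and (
            "object_contact" in events or "virtual_collision" in events):
        return TrialState.GRASPED            # virtual object turns green
    return state

s = TrialState.WAIT_FOR_HOLD
for ev in [{"hold_pad_contact"}, {"hold_period_elapsed"}, {"object_contact"}]:
    s = step(s, ev)
    print(s)
```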
  • 17. Neural Recording System. Neurophysiological experimentation was accomplished using a 16-channel acquisition system for amplification, filtering and recording of neural activity (MAP System, Plexon Inc., Dallas, TX, USA). The MAP box itself used digital signal processing for 40 kHz (25 µs) analog-to-digital conversion on each channel with 12-bit resolution. Included control software provided a suite of programs for real-time spike sorting, visualization and analysis of neural activity, all of which were run on a dedicated computer independent of the MCP. Digital events were generated by the MCP to mark the occurrence of significant events during an experiment. Each event type was encoded as a unique 8-bit digital word using digital outputs from a 68-pin terminal block (SCC-68, National Instruments Corporation) and input directly to the MAP box, which saved spike times and digital event data (word value, time) to a single file. System Integration. System Architecture. The preceding sections of this chapter described the sub-systems of the overall laboratory setup, including a 6-axis industrial robot, 6-DOF F/T sensor, tool changer, 3D motion capture system, virtual reality simulation and a neural data acquisition system. The LabVIEW® graphical programming environment was used to integrate these sub-systems into a coordinated whole. Initially, all subsystems were to be controlled by software running in a real-time operating environment (LabVIEW® Real-Time Module, National Instruments Corporation) uploaded to a dedicated target processor (PXI-6259, National Instruments Corporation) for deterministic performance. However, several subsystems required the Windows® operating system (Microsoft Corporation, Redmond, WA, USA) for programmatic control. This precluded a purely real-time application, which would have been based on the VxWorks operating system (Wind River Corporation, Alameda, CA, USA). Instead, a hybrid system was developed that coordinated the overall programmatic control of a single LabVIEW® application between the real-time processor (the target) and a standard personal computer (the host). System timing deferred to the target processor (1 µs resolution) and communication between the target and host was mediated through the local network (network shared variables).
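To illustrate the event marking described above (each task event strobed onto the MAP box's digital inputs as a unique 8-bit word), here is a small Python sketch of an event code table and its bit decomposition. The specific codes and the write_digital_lines function are assumptions; the real outputs were driven from LabVIEW® through the SCC-68 terminal block.

```python
# Hypothetical event code table; the real assignment of codes to task events
# was defined in the MCP and is not documented here.
EVENT_CODES = {
    "trial_start":      1,
    "hold_pad_contact": 2,
    "object_presented": 3,
    "object_grasp":     4,
    "reward":           5,
    "trial_abort":      6,
}

def write_digital_lines(bits):
    """Placeholder for the digital output call (NI hardware driven from LabVIEW
    in the real system). `bits` is a tuple of eight 0/1 values, LSB first."""
    print("digital lines:", bits)

def send_event(name):
    """Encode an event as an 8-bit word and present it on the digital lines."""
    code = EVENT_CODES[name]
    bits = tuple((code >> i) & 1 for i in range(8))   # LSB-first bit decomposition
    write_digital_lines(bits)
    write_digital_lines((0,) * 8)                     # return the lines to idle

send_event("object_grasp")
```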
  • 18. Program development took place on the host computer. At run time, the MCP code was uploaded to the target computer, where it was compiled and executed. System Operation. The MCP, which included compatible inputs (touch sensors, incoming UDP messages) and outputs (digital events, outgoing UDP messages), was executed on the real-time target. The robot control program ran on the host, awaiting movement commands from the MCP according to the progression of behavioral task stages. A separate program for monitoring F/T sensor output also ran on the host, providing continuous feedback related to object contact and monitoring the robot arm for excessive loading conditions. The MCP generated digital events in response to task events, which were routed directly to the MAP box of the neural recording system. The MCP also featured a continuous loop that read data from the motion capture server (TCP/IP protocol) and saved it to a local data file. Digital event markers were also written to the camera data file so that kinematic data and neural data, which were saved to different files, could later be temporally aligned by matching corresponding event markers. Kinematic parameters derived from the camera data were sent (via UDP) to the virtual reality simulation computer to animate the virtual hand model. UDP messages were also sent from the simulation to the MCP to report virtual object collisions.
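Because kinematic and neural data were written to separate files, offline alignment relied on the event markers present in both. The Python sketch below shows a minimal version of that alignment, assuming each file yields (event_code, timestamp) pairs; the file formats, field names and example values are assumptions.

```python
def align_offset(neural_events, kinematic_events):
    """Estimate the clock offset between the neural and kinematic files by
    matching corresponding event markers (same code, same order) and averaging
    the timestamp differences.

    Each argument is a list of (event_code, timestamp_seconds) pairs.
    """
    diffs = [tn - tk
             for (cn, tn), (ck, tk) in zip(neural_events, kinematic_events)
             if cn == ck]
    if not diffs:
        raise ValueError("no matching event markers found")
    return sum(diffs) / len(diffs)

# Example: shift kinematic timestamps onto the neural clock.
neural = [(1, 10.002), (3, 12.251), (4, 13.498)]
kinematic = [(1, 0.000), (3, 2.249), (4, 3.497)]
offset = align_offset(neural, kinematic)
kinematic_aligned = [(code, t + offset) for code, t in kinematic]
print(round(offset, 3))   # ~10.002 s between the two clocks
```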