3.0 Introduction

VR is being applied to a wide range of medical areas, including remote and local surgery, surgery planning, medical education and training, treatment of phobias and other causes of psychological distress, skill training, and pain reduction. It is also used for the visualisation of large-scale medical records and in the architectural planning of medical facilities, although these last two applications are not covered by this survey.

The survey focuses on three main application areas: surgery in general, neurosurgery, and mental and physical health and rehabilitation. See also Section 4: Sources and Resources in Medical VR.

3.1 VR for Surgery

Surgery is mostly visual and manual. VR for surgery applies interactive immersive computer technologies to help perform, plan and simulate surgical procedures. In performance, the VR guides the surgeon, sometimes with a robot to execute the procedure under the surgeon's control (to remove hand tremor and scale down manipulations for keyhole surgery, for example). In other words, VR is used to give the surgeon 3D interactive views of areas within the patient. Planning is carried out preoperatively, to find the surgical approach that causes minimum damage. Simulation is mostly used in training, using patient data often registered with anatomical information from an atlas. It may be used for routine training, or to focus on particularly difficult cases and new surgical techniques.

VR is being applied in all three major areas of surgery: open surgery, endoscopic surgery and radiosurgery. The surgery may be remote (through the use of robotics) or local.

In open surgery, the surgeon opens the body and uses hands and instruments to operate. This is the most invasive form of surgery, with long recovery times. There is a strong movement away from open surgery and towards improved techniques of minimally invasive surgery.
Open surgery

Endoscopy is minimally invasive surgery through natural body openings or small artificial incisions (keyhole surgery): laparoscopy, thoracoscopy, arthroscopy, and so on. A small endoscopic camera is used in combination with several long, thin, rigid instruments. The trend is to carry out as much surgery as is feasible by this means, to minimise the risk to patients.

Endoscopic surgery: the current situation without VR

Advantages for the patient include less pain, less physiological strain, and faster recovery. Injuries are relatively small, and there is an economic gain from shorter illness time. For the surgeon, however, there are several disadvantages, including restricted vision and mobility, difficult handling of the instruments, difficult hand-eye coordination, and no tactile perception except force feedback.

Endoscopic surgery is becoming increasingly popular because of its significant advantages. It is also the most popular surgical application of VR, partly because it expands on what is already an
"unnatural" view of the locus of operation. Another reason is that endoscopic surgery is relatively easy to simulate, because of the limited access, restricted feedback (especially tactile) and limited freedom of movement of instruments. Endoscopic simulators are being produced by all the main medical VR companies, usually with a focus on training.

Another recent trend is towards so-called Virtual Endoscopy. This is a technique whereby data from non-intrusive sources, such as scans, are combined into a virtual data model that the surgeon can explore as if an endoscope were inserted in the patient. VR is increasingly being used to provide surgeons with a meaningful and interactive 3D view of areas and structures they would otherwise be unable or unwilling to deal with directly.

In radiosurgery, X-ray beams from a linear accelerator are finely collimated and accurately aimed at a lesion. Popular products include the Radionics X-knife and Elekta's Gammaknife. Planning radiosurgery is suitable for VR, since it involves detailed understanding of 3D structure.

Elekta's Gammaknife (left) and the X-knife from Radionics (right)

VR in surgery differs from most other VR in its focus on contact with objects, which must often be deformable and interdependent. The focus is on looking into objects rather than looking into space: there is less room available. The data is essentially volumetric, and finger and hand interaction must be extremely precise.

These characteristics bring with them certain technical requirements, such as real-time response to the user's actions, which implies fast graphics and low-latency input devices. The images must be of high resolution and faithful to the actual patient data, since life-critical decisions are based on the presentation of patient data. For simulators, the physical procedures must match those used in the actual operation.
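Deformable, interdependent objects under real-time constraints are typically handled with simplified physics such as mass-spring models. As a purely illustrative sketch (not taken from any system in this survey, with all parameter values hypothetical), a strip of tissue pulled by a virtual tool might be simulated like this:

```python
# Minimal mass-spring sketch of deformable tissue (illustrative only).
# A 1D chain of point masses joined by springs; one node follows a virtual
# "tool" and the rest relax towards equilibrium each frame.
# All parameters (stiffness, damping, time step) are hypothetical.

N = 10          # number of nodes
REST = 1.0      # rest length of each spring
K = 50.0        # spring stiffness (hypothetical)
DAMP = 0.9      # velocity damping factor per step
DT = 0.01       # time step in seconds

pos = [float(i) for i in range(N)]   # node positions along one axis
vel = [0.0] * N

def step(tool_pos, grabbed=N // 2):
    """Advance the simulation one frame; the grabbed node follows the tool."""
    forces = [0.0] * N
    for i in range(N - 1):                 # accumulate spring forces
        stretch = (pos[i + 1] - pos[i]) - REST
        f = K * stretch                    # Hooke's law
        forces[i] += f
        forces[i + 1] -= f
    for i in range(1, N - 1):              # endpoints stay anchored
        vel[i] = (vel[i] + forces[i] * DT) * DAMP   # unit mass assumed
        pos[i] += vel[i] * DT
    pos[grabbed] = tool_pos                # tool overrides the grabbed node

for _ in range(200):                       # pull the middle node outwards
    step(tool_pos=6.0)
print(round(pos[4], 2), round(pos[6], 2))  # neighbours dragged towards the tool
```

A real surgical simulator would run such an update in three dimensions at haptic rates (around 1 kHz), with stiffness and damping tuned against measured tissue properties rather than invented constants.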
Other requirements of VR for surgery include registration of patient data with atlases and the ability to coregister multimodal data. For use over extended periods, which is often needed in surgery, the style of user interaction should be natural, comfortable, and easy to use.

Areas where VR is being applied:

- Image-guided surgery: guiding surgeons to targets during actual operations.
- Training simulators: practicing difficult procedures.
- Preoperative planning: studying patient data before surgery.
- Telesurgery: operating remotely, or assisting other surgeons remotely.

3.2 Image-guided surgery

VR can in principle be applied to enhance reality for image-guided surgery. When applied in this way, the images obviously need to be available intra-operatively, and accurate registration of the real patient with the data becomes a crucial issue.

Currently, VR is used much more for preoperative planning (see 3.4 below) than to guide actual surgery, due to the understandable conservatism of medical practitioners. When VR is used intra-operatively, it tends to be implemented as some form of Augmented Reality (see the University of North Carolina system, below, and 2.4 above). Image-guided surgery is also a prerequisite of remote telemedicine and collaboration (see 3.5 below).
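Registering the physical patient with image data usually reduces to finding the rigid transform that best aligns corresponding fiducial points. Below is a minimal sketch of least-squares point registration in 2D (the closed-form Procrustes solution); the fiducial coordinates are invented for illustration, and clinical systems solve the same problem in 3D:

```python
from math import atan2, cos, sin, radians

def register_rigid_2d(src, dst):
    """Least-squares rigid registration in 2D: find the angle and translation
    such that dst ~= rotate(src) + t.  src, dst: lists of (x, y) fiducials."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay, bx, by = ax - sx, ay - sy, bx - dx, by - dy  # centre clouds
        num += ax * by - ay * bx        # cross terms -> sin component
        den += ax * bx + ay * by        # dot terms   -> cos component
    theta = atan2(num, den)             # best-fit rotation angle
    tx = dx - (sx * cos(theta) - sy * sin(theta))
    ty = dy - (sx * sin(theta) + sy * cos(theta))
    return theta, (tx, ty)

# Invented example: the same three markers in scan space and patient space.
scan = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0)]
a = radians(20)
patient = [(x * cos(a) - y * sin(a) + 3.0, x * sin(a) + y * cos(a) - 1.0)
           for x, y in scan]
theta, (tx, ty) = register_rigid_2d(scan, patient)
```

With noiseless points the known 20-degree rotation and (3, -1) offset are recovered exactly; with real, noisy fiducials the same formulation gives the least-squares best fit.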
Image-guided surgery, implemented as Augmented Reality, at the University of North Carolina

Brain tumour surgery guidance images
ARTMA (University of Vienna) system for Image-guided Ear, Nose & Throat Surgery

The ARTMA team at the University of Vienna were pioneers in this field. They refer to their approach as Interventional Video Tomography (see abstract below). It is also applied to telemedicine (see 3.5 below).

"Interventional Video Tomography"
SPIE Proceedings of Lasers in Surgery, 4-6 February 1995, San Jose, CA
Paper #: 2395-34, pp. 150-152
Author(s): Michael J. Truppe, Ferenc Pongracz, Artma Medizintechnik GmbH, Wien, Austria; Oliver Ploder, Arne Wagner, Rolf Ewers, Universität Wien, Vienna, Austria.

Abstract

Interventional Video Tomography (IVT) is a new imaging modality for Image Directed Surgery, used to visualise intraoperatively, in real time, the spatial position of surgical instruments relative to the patient's anatomy. The video imaging detector is based on a special camera equipped with an optical viewing and lighting system and electronic 3D sensors. When combined with an endoscope it is used for examining the inside of cavities or hollow organs of the body from many different angles.

The surface topography of objects is reconstructed from a sequence of monocular video or endoscopic images. To increase the accuracy and speed of the reconstruction, the relative movement between objects and endoscope is continuously tracked by electronic sensors. The IVT image sequence
represents a 4D data set in stereotactic space and contains image, surface topography and motion data. In ENT surgery, an IVT image sequence of the planned and so far accessible surgical path is acquired prior to surgery. To simulate the surgical procedure, the cross-sectional imaging data is superimposed on the digitally stored IVT image sequence. During surgery, the video sequence component of the IVT simulation is substituted by the live video source.

The IVT technology makes obsolete the use of 3D digitising probes for the patient-image coordinate transformation. The image fusion of medical imaging data with live video sources is the first practical use of augmented reality in medicine. During surgery, a head-up display is used to overlay real-time reformatted cross-sectional imaging data on the live video image.

A main tool for traditional image-guided surgery is the microscope. Microscopes are now being integrated with robotic transport systems. Microscopes can also, in principle, serve as vehicles for VR, as they will increasingly allow 3D views and are already "in place" in the operating theatre. Surgeons readily accept microscope-based views from which they can easily look away, whereas they are less comfortable with overlays placed on their primary direct view of the patient - the traditional augmented reality approach.

The Zeiss MKM microscope transport: a 6-degree-of-freedom robot with a surgical stereomicroscope attached.

3.3 Education and Training
VR provides a unique resource for education about anatomical structure. One of the main problems for medical education in general is to provide a realistic sense of the inter-relation of anatomical structures in 3D space. With VR, the learner can repeatedly explore the structures of interest, take them apart, put them together, and view them from almost any perspective. This is obviously impossible with a live patient, and is economically infeasible with cadavers (which, in any case, have already lost many of the important characteristics of live tissue).

Another advantage of VR for medical education is that demonstrations and exercises or explorations can easily be combined. For example, a "canned" tour of a particular structure, perhaps with voice annotations from an expert, can be used to provide an overview. The learner may then explore the structure freely and, perhaps later, be assigned the task of locating particular aspects of this structure. It is also possible to preserve particularly instructive cases, which would be impossible by other means.

There is something of a crisis in current surgical training. As the techniques become more complicated, and more surgeons require longer training, fewer opportunities for such training exist. Training in the operating theatre itself brings increased risk to the patient and longer operations. New surgical procedures require training by other doctors, who are usually busy with their own clinical work. It is difficult to train physicians in rural areas in new procedures. Training opportunities for surgeons arise only on a case-by-case basis. Animal experiments are expensive, and of course the anatomy is different.

The solution to these problems is seen to be the development of VR training simulators. These allow the surgeon to practice difficult procedures under computer control. The usual analogy is with flight simulators, where trainee pilots gain many hours of experience before moving on to practice in a real cockpit.
Boston Dynamics open surgery anastomosis trainer

The advantages of training simulators are obvious. Training can be done anytime and anywhere the equipment is available. Simulators make it possible to reduce the operative risks associated with the use of new techniques, reducing surgical morbidity and mortality.

However, the big challenge is to simulate with sufficient fidelity for skills to transfer from performing with the simulation to performing surgery on patients. Faithfulness is hard to achieve, and much more evaluation of different approaches to training simulation is needed. Many experienced surgeons predict that, in time, experience with training simulators will constitute a component of medical certification. But this will require new regulations and legislation.

Hot topics in the area include the use of force feedback (see 2.3.4 above), increased accuracy of modelling of soft tissue, and the role of auditory feedback.

For simple operations like suturing and biopsy needle placement, VR is effective, but perhaps overkill for training skills that can easily and cheaply be acquired in other ways.

The most useful and tractable areas for the development of training simulators are the various techniques of endoscopic surgery in widespread use today. It is relatively easy to reproduce in VR the restricted field of view and limited tactile feedback of endoscopic surgery. It is much more problematic to reproduce open surgery techniques realistically. For complex anatomical structures, this is definitely not yet possible.

Karlsruhe Endoscopic Surgery Trainer

The pictures above illustrate both the value of simulators for training procedures and their current weaknesses in terms of realism. To realistically simulate an operation, the method of
interaction should be the same as in the real case (as with flight simulators). When this is not the case, the VR can serve as an anatomy educational system rather than a training simulator.

One way of increasing the realism of interaction is to combine VR with physical models, as illustrated by the Gatech simulators for endoscopy and eye surgery, and the Penn State University bronchoscopy simulator (see below). These systems focus on training the surgeon in the use of particular medical devices, rather than on training a better awareness of general or specific patient anatomy.

Gatech: Endoscopic Surgical Simulator

Gatech: Eye Surgery Simulator

Bronchoscope Simulator from Penn State University Hospital at Hershey

An example of an anatomy educational system is the EVL eye (shown below) from the University of Illinois. Since the VR is immersive and based around the CAVE, it cannot be said
to duplicate the interaction methods of real eye surgery (since surgeons cannot get physically inside eyes), and so it is not a training simulator, unlike the Georgia Tech system above.

The EVL eye, from the Electronic Visualisation Lab, University of Illinois at Chicago

The EVL eye used by a group in the CAVE

More realistic in terms of interaction, the Responsive Workbench is another candidate for anatomy teaching (see below). As with CAVE-based applications, a shared VR enhances the potential for collaborative learning.
The Responsive Workbench from GMD in Germany

The most technologically challenging area of simulator training is for highly specialised aspects of life-critical operations such as brain surgery. The Johns Hopkins/KRDL skull-base surgery simulator for training aneurysm clipping (see below) is one example. The interaction is entirely with the VR itself.

JHU/KRDL Skull-base Surgery Simulator

Researchers at the University of California San Diego Applied Technology Lab have developed an interesting Anatomy Lesson Prototype [http://cybermed.ucsd.edu/AT/AT-anat.html]. They point out that the main challenges they identified from talking to medical faculty and students included visualising potential spaces; studying relatively inaccessible areas; tracing layers and linings; establishing external landmarks for deep structures; and cogently presenting embryological
origins. Correlating gross anatomy with various diagnostic imaging modalities, and portraying complex physiological processes using virtual representation, were also considered highly valuable goals.

Relevant Web Sites:
VR and Education
VR in Surgical Education

3.4 Preoperative planning

Simulators such as the JHU/KRDL Skull-base Surgery Simulator blur into systems for pre-operative planning. Planning systems also sometimes blend with augmented reality, since the planning is done on an actual, particular patient, so that physical reality (the patient) and the VR naturally come together in planning. The aim of such planning is to study patient data before surgery and so plan the best way to carry out that surgery.

Preoperative planning must:

- use actual patient data - the data of the patient to be operated upon;
- not use an idealised model, atlas, or Visible Human dataset;
- be fast;
- be accurate;
- be multimodal (different data sources), to show blood vessels, soft tissue, bone, etc.;
- convey as much information as possible.

Radionics Stereoplan - a pure planning system
The aim of Stereoplan is to allow surgeons to examine patient data as fully as possible, and to evaluate possible routes for intervention. The system then provides the coordinates for the stereotactic frames that are standardly used to guide the route for brain surgery. Similar to the Radionics Stereoplan, the KRDL Brainbench, built around the Virtual Workbench, aims at helping the planning of stereotactic frame-based functional neurosurgery (see below).

KRDL Brainbench for stereotactic frame-based neurosurgery planning
Combined neurosurgery planning and augmented reality from Harvard Medical School

In pre-operative planning the interaction method need not be realistic, and generally is not. The main focus is on exploring the patient data as fully as possible and evaluating possible intervention procedures against that data, not on reproducing the actual operation. The University of Virginia "Props" interface illustrates this (below). A doll's head is used in the interaction with the dataset, without any suggestion that the surgeon will ever interact with a patient's head in quite this way.

University of Virginia "Props" interface used in pre-operative planning

KRDL VIVIAN: the Virtual Workbench used for stereotactic tumour neurosurgery planning

Of course, the simulation must be accurate. Given this, techniques developed for planning can sometimes be applied to the prediction of outcomes of interventions, as in bone replacements or reconstructive plastic surgery. Such simulations can also help in training, and in communications between doctors and patients (and their families).

An important aspect of such systems for use by medical staff is the design of the tools and how this affects usability. See "Interaction Techniques for a Virtual Workspace".

3.5 Telemedicine and Collaboration
Telemedicine is surprisingly little used today in actual medical practice. According to a recent article (The Economist, February 28th 1998), fewer than 1 in 1000 radiographs are viewed by a distant, rather than a local, specialist. This is despite the proven ability of telemedicine to save doctors' time and, hence, money (demonstrated, for example, by a recent study in Tromsø on teleradiology). Similarly, home visits can be successfully replaced with remote consultations, saving money and increasing aged patients' satisfaction (because they can get more frequent consultations without troublesome travel), but currently only 1 in 2000 home visits are conducted remotely through information technology. Telemedicine is successfully used in military settings, where normal legal and economic considerations do not apply.

One promising area where VR could make a contribution is remote diagnostics, where two surgeons can confer on a particular case, each experiencing the same 3D visualisation, although located in different places.

The other main applications, often discussed, are remote operations, either through robotic surgery or through assistance to another remote surgeon. The big problem here is network delay, since almost immediate interactivity is required. Even the small delay introduced by the use of satellite communication is unacceptable in remote surgery. Talk of remote operations carried out on space crew in deep space, or even merely on Mars, is pure science fiction: it would require faster-than-light communication.

Robots are used more routinely non-remotely, for precision in carrying out certain procedures, such as hip replacement. The types of operation to which robots are applied in this way are usually high-volume, repeated procedures. As well as improved accuracy, major cost savings can be produced.

A relatively new development is to use surgeon-controlled robots to carry out, by keyhole methods, operations which previously required open surgery.
VR becomes important here in providing a detailed 3D view to guide the surgeon in carrying out the operation via extremely small robotic instruments. Major operations, such as coronary bypass, can be carried out in this way with significantly reduced trauma and recovery time for the patient.

The technical possibility already exists for unsupervised robots to carry out surgery, but much ethical and legal debate and legislation will be needed before this could be put into practice. This survey does not focus primarily on telerobotics, which is itself a large field.
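The latency argument against remote surgery is easy to make concrete: even at the speed of light, signal delays quickly exceed the tens of milliseconds a tight hand-eye loop tolerates. A back-of-the-envelope calculation (distances approximate):

```python
# Back-of-the-envelope signal delays for remote surgery (approximate figures).
C = 299_792.458  # speed of light, km/s

def round_trip_ms(distance_km):
    """Round-trip signal delay in milliseconds over a one-way distance."""
    return 2 * distance_km / C * 1000

# Geostationary satellite relay: ~35,786 km altitude, so each one-way trip
# is two legs (ground -> satellite -> ground).
geo_one_way_km = 2 * 35_786
print(f"GEO satellite link: {round_trip_ms(geo_one_way_km):.0f} ms round trip")

# Mars at closest approach: roughly 54.6 million km away.
mars_km = 54.6e6
print(f"Mars (closest): {round_trip_ms(mars_km) / 1000 / 60:.1f} min round trip")
```

A geostationary relay alone costs roughly half a second per round trip for the radio path, and Mars at closest approach over six minutes, before any network or processing overhead is added.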
Remote surgeon workstation at the University of Virginia

The two upper images are live video provided via an ATM link. They show a view through the surgical microscope and a room view. The remote surgeon may pan/tilt/zoom the room camera and may move the microscope view with 6 degrees of freedom. The bottom windows respectively show presurgical imaging with functional overlays, a volume rendering of the surgical plan, and a snapshot archive taken during surgery.

The Artma Virtual Patient System is an established technology for telesurgery.

The SRI Telepresence project is representative of current work in this area (see below).
SRI Telepresence: telerobotics and stereo video interface, surgeon interaction

SRI Telepresence: telerobotics and stereo video interface, patient interaction

3.6 Snapshot of the State of the Art: Conference Report on Scientific and Clinical Tools for Minimally Invasive Therapies, Medicine Meets VR 1998
Much of the research in this area, especially in the USA, is funded by the government for military applications such as remote surgery on the battlefield. The main agencies are DARPA and Yale-NASA (funding mostly the same projects that previously got funding from DARPA). Progress reports on these projects, many of which have been running for several years, were presented at the conference and included:

- a "smart" T-shirt that senses the path of the bullet that hit its wearer, monitors his condition and location, etc., so that rescue teams can decide if he is worth rescuing and be prepared, and combat units can knock out the location from which he was attacked;
- various personal monitoring devices, including a wearable system for astronauts;
- a Limb Trauma simulator using the PHANToM (by MusculoGraphics, in Boston);
- a stretcher with monitoring systems; and
- an enhanced-dexterity robot called ParaDex, among others.

HT Medical also gets funding from this committee, as do SRI International, Boston Dynamics, the HIT Lab, and others.

During the conference as a whole, a broad selection of current work, both academic and commercial, was presented. There were many endoscopic simulators: for the knee, shoulder, colon, abdomen... All had some force feedback that was not convincing as real tissue (from what doctors said) but apparently helped in training (from what the engineers said).

Tactile tissue simulation was one of the key phrases. Everybody is trying to figure out how to do it, but I didn't see (or feel) any convincing implementation. Force feedback is the latest craze, but the sensitivity to model subtle gradations just isn't there yet. An interesting alternative is to use sound as feedback.

Also, many atlases of the whole human body (and one of a frog) were presented.
Most used the Visible Human, but others (the Japanese) had their own data sets.

One interesting point raised by the team at SRI is that the key problem in training surgeons is not how to convey the locomotive skills needed to manipulate an endoscope or cut with a scalpel, but how to understand patient anatomy. Training the hands to use an endoscope takes a week or so, but learning how to interpret a patient's anatomy takes years. I agree with this assessment, and I think that is where rich interaction capabilities combined with real-time volumetric rendering of multimodal data are crucial.

SRI, of Stanford, have tested their telepresence system with live animals using a 200-metre link. Their results are published in the Journal of Vascular Surgery. Dr. John Hill of SRI presented their first attempts to move towards computer-generated graphics training simulators using their telepresence system. They use a set-up similar to the ISS Virtual Workbench, but with their own interaction devices. They are working on simulating suturing of tissue and vessels using an Onyx and 2D texture maps.

Dr. Ramin Shahidi, Stanford University Medical Center, is working on SGI-based volume-rendering neurosurgery and craniofacial applications. Their graphics did not include more than
one volume at a time. His presentation was an overview of the use of volume rendering vs. surface rendering.

NASA-Ames and Stanford University have created the National Biocomputation Center: Dr. Muriel Ross announced this centre as a resource for collaboration with academics and industry, to promote medical VR. NASA-Ames have an Immersion Workbench (aka Responsive Workbench, aka Immersadesk) and their own visualisation software, and are working on craniofacial "virtual" surgery. It appears that they use polygon meshes for their visualisation.

Dr. Henry Fuchs presented work in progress at UNC that uses depth range finders to reconstruct a surface map of the intestines, to then guide an endoscope for colonoscopy. All this was added to their well-known augmented reality system, and comprises an interestingly novel approach.

HT Medical presented their VR Simulation of Abdominal Trauma Surgery. They use the PHANToM and some "wet" graphics to remove a kidney. They simulate the "steps" taken by the surgeon. First the surgeon cuts the skin, which then opens, revealing the intestines. A wet graphics effect is used, but this looks more like "cling film" wrapped over everything. The intestines moved quite unconvincingly, in an animation that was only slightly under the control of the user (it did not appear that inverse kinematics were attaching the end-point of the intestines to the user's tool). The kidney was removed by simply "reaching into it" and moving it out. The practical value of such a demonstration was not clear, however.

An impressive paper by Wegner and Karron of Computer Aided Surgery Inc. described the use of auditory feedback to guide blind biopsy needle placement. Their audio feedback system generates an error signal in 3D space with respect to a planned needle trajectory. This error signal and the preoperative plan are used to drive a position sonification algorithm, which generates appropriate sounds to guide the operator in needle placement.
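The paper's actual mapping is not given here, but the general idea of coding positional error as consonance versus dissonance can be sketched as follows; the octave-detuning scheme and all parameter values are invented for illustration, not Wegner and Karron's:

```python
from math import dist  # Euclidean distance between points (Python 3.8+)

BASE_HZ = 440.0        # steady reference tone (value hypothetical)

def feedback_tone(tip, planned, max_err_mm=20.0):
    """Map needle-tip error to the frequency of a second tone.

    On target, the tone sits a pure octave (2:1) above the reference and
    sounds consonant; as the tip strays from the planned trajectory point,
    the octave is progressively detuned and beats against the reference,
    sounding increasingly dissonant.  Mapping and parameters are invented.
    """
    err = min(dist(tip, planned), max_err_mm) / max_err_mm   # normalise 0..1
    return BASE_HZ * 2.0 * (1.0 + 0.06 * err)                # detune up to ~6%

# Invented coordinates (mm): tip exactly on plan, then 5 mm off.
on_target = feedback_tone((10.0, 5.0, 30.0), (10.0, 5.0, 30.0))
off_target = feedback_tone((14.0, 8.0, 30.0), (10.0, 5.0, 30.0))
print(on_target, off_target)
```

A real system would synthesise these tones continuously and, as the text notes, could spread several such mappings across 6-8 dimensions of the error signal.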
To put it simply, harmonics versus dissonances are used to convey position information accurately along 6-8 dimensions. This is a nice example of a synaesthetic medium: using one modality (sound) where one would normally expect another (touch and/or vision). Their approach has wide applicability.

Myron Krueger is President of Artificial Reality Corporation and a claimant to the title of inventor of VR. The system he described was a training system for dealing with emergencies, where smells of, for example, petrol or the contents of the lower intestine can provide valuable information in a hazardous situation. It has also been claimed that smell can significantly enhance the effectiveness of medical training systems.

Many manufacturers presented their latest demonstrations and products at the conference exhibition. HT demonstrated CathSim, a simulator that trains nursing students to perform vascular catheterisations. They built a special force feedback device and some simple graphics to provide visual feedback. It was quite good for guiding the needle, but had little or no feedback once inside the skin. This seemed like "technological overkill", since the procedure is easily learned without VR and is not exactly hazardous.

They also demonstrated a Flexible Bronchoscopy simulator developed with a partnership of pulmonologists and pharmacology experts at Merck & Co. (based on the Visible Human
Project). They have a way to track the flexible tip of the endoscope ("a secret", I was told when I asked), and they generate nice 2D texture-mapped graphics of the interior of the throat using an SGI Impact.

Fraunhofer had two demonstrations from their Providence office:

- TeleInVivo, demonstrating a PC software volume renderer (a few seconds per rendering for small window areas) attached to an ultrasound probe.
- Interventional Ultrasound: a guiding system for biopsy needle insertion using an ultrasound tracking system (not much of an implementation at the moment) - the old idea of using ultrasound to guide a biopsy needle. They overlay the ultrasound view with the biopsy needle path, something that UNC demonstrated at SIGGRAPH, but without the expensive head gear.

Matthias Wapler, of the IPA branch in Stuttgart, also described a robot for precise endoscopy and neurosurgical navigation. They have not yet developed planning software for their system.

Loral were at the Immersion booth, presenting a training system using the Immersion Corp. force feedback device. The application lets the surgeon guide an endoscope through the nose of a patient. The simulation was "helpful" to surgeons, although it is rather crude and doesn't feel like the real thing.

Prosolvia (the main Swedish VR company) demonstrated a system for Virtual Arthroscopy of the shoulder, developed with the University Hospital of Linköping. They used the Immersion Corp. force-feedback system, and their own Oxygen software base. They are interested in collaborating on VR medical training systems.

Four demonstrations were shown at the SensAble booth:

- The Ophthalmic Surgical Simulator. This project combines an N-Vision US$25,000 stereo display (binoculars with 1280x1024 resolution; there is a cheaper version for VGA graphics at US$15,000) with the PHANToM, and a nice simulation of the feel of an eye. The computer platform is Intergraph.
Since the PHANToM doesn't provide torque feedback, I didn't really appreciate the usefulness of the feedback system while cutting around the cornea. However, prodding the eye produced convincing force feedback.
- MusculoGraphics' surgery simulation solutions. Their Limb Trauma simulator did not have force feedback, so the PHANToM was used as a 3D pointer. The simulation consisted of picking up a bullet and stopping the bleeding of a blood vessel. I thought the system was unrealistic and of limited usefulness.
- Their IV catheter insertion system did have force feedback, and was quite convincing.
- The Spine Biopsy Simulator, by the Georgetown University Medical School, for educational use. The aim is to mimic an actual spine biopsy procedure and improve overall learning by students. Unfortunately, their demo wasn't working.

Virtual Presence presented two useful tools:
- VolWin, a volume rendering package (US$700) for the PC, based on the Voxar API. The performance was really good, running on a plain PC: a 256x256x256 volume was rendered at some 5-6 fps, with some aliasing effects, but at basically usable quality.
- A package that tests the surgeon's performance using the Immersion Corp. laparoscopy device. There are no fancy graphics, the idea being to measure performance in hitting targets. An excellent simple idea for laparoscopy training.

Gold Standard Multimedia have produced a CD-ROM with a segmentation of the Visible Male. The package volume-renders the chosen views and structures, on a PC platform.

Sense8's medical customers are the National Centre for Biocomputation (NASA, Stanford University), Rutgers University, the Center for Neuro-Science, and the Iowa School of Dentistry. A knee simulator was presented. Unfortunately, it broke early in the conference.

Vista Medical Technologies had a good head-mounted display to substitute for the microscope. It is not head-tracked, but it allows the surgeon to look through the microscope and outside it. It also allows picture-in-picture, so that an endoscope can be used to supplement the microscope.

There was a nice demonstration of 3D sonification from Lake Acoustics of Australia, who were also involved in the 3D sound feedback for biopsy needle placement described briefly above (the paper by Wegner and Karron). Using their kit, it is very simple to place sounds in a three-dimensional landscape surrounding the body, to the front (as with normal stereo) and to the back (as with cinema surround sound), using only headphones. They were giving away diskettes containing an impressive demonstration of this system.

3.7 Physical and Mental Health and Rehabilitation

It is clear that this is one of the medical areas where VR can most immediately and successfully be applied today.
This is partly because the technical demands, particularly in terms of detailed visualisation and interactivity, are actually less stringent than in some other areas, such as surgery. Often these systems simulate the physical environment, a world of rooms, doors, buildings and so on, many of which are simple shapes and much easier to model than the irregular, contoured surfaces of internal organs. They also tend to be solid, so the physics that must be understood in order to model them is much simpler, and the complexity of interacting with them is much less.

Main application areas:

Mental health therapy: fear of heights, fear of flying and various other phobias; eating disorders; stress control; Irritable Bowel Syndrome; autism.

Patient rehabilitation: treadmills, wheelchairs, people with disabilities (CACM Aug 1997).
Parkinson's disease, stroke therapy (with physiological feedback).

Exploration and communication of unusual mental/body states is also a potential application area.

Examples:

A Treatment of Akinesia Using Virtual Images
1998 "Technology and Persons with Disabilities" Conference
Acrophobia Virtual Environment -- Final Report
Autism and VR
The Use of Virtual Reality in the Treatment of Phobias
VR and Disabilities
VR Exposure Therapy
VR in Eating Disorders
VR in Stroke Disorders

3.7.1 Snapshot of the State of the Art: Conference Report on Mental Health session, Medicine Meets VR 1998

Topics covered at this year's conference included treatment of phobias, psychological assessment, and cognitive rehabilitation.

The session also provided an opportunity for the launch of the new CyberPsychology and Behavior journal, the first issue of which includes a useful summary of the use of VR as a therapeutic tool.

Brenda Wiederhold presented a good paper on using VR to go beyond the standard "imaginal" training of phobic patients. The advantages of VR are, first, that fear can be effectively activated (which is necessary to bring about change) but also controlled (too much fear reinforces the phobia) and, second, that physiological measures can be used to control the display. One simple measure of anxiety, first used by Jung, is a drop in skin resistance.

Similar work on claustrophobia and fear of heights was described by Bulligen of the University of Basle. Another paper on acrophobia (fear of heights) by Huang et al. of the University of Michigan described comparisons of real and virtual environments for emotional desensitisation, and questioned the need for a high level of realism. Using the CAVE environment, they compared the same views in VR and in reality. See their Web page for views.

A rather pleasant system from Japan, the "Bedside Wellness" system by Ohsuga et al., allows bedridden patients to take a virtual forest walk while lying on their backs in bed.
An array of three video screens presents the unfolding view of the forest as the patient gently steps on two foot pedals. There is also 3D sound of birds, streams and wind in the trees. A slot below the central screen delivers a gentle breeze scented with pine to the "walking" patient.

Rizzo, of the University of Southern California, is using VR to give increased ecological validity to standard tests applied to Alzheimer's Disease patients, such as the mental rotation task (where the patient has to decide whether a second figure is a rotated version of an earlier figure, or is different in shape). This Immersadesk application seemed like technological overkill to me. However, a fuller paper by Rizzo et al. in the CyberPsychology and Behavior journal lists several advantages of VR for cognitive and functional assessment and rehabilitation applications:

1. ecologically valid and dynamic testing and training scenarios, difficult to present by other means
2. total control and consistency of administration
3. hierarchical and repetitive stimulus challenges that can be readily varied in complexity, depending on level of performance
4. provision of cueing stimuli or visualisation tactics to help successful performance in an errorless learning paradigm
5. immediate feedback on performance
6. ability to pause for discussion or instruction
7. option of self-guided exploration and independent testing and training
8. modification of sensory presentations and response requirements based on the user's impairments
9. complete performance recording
10. a more naturalistic and intuitive performance record for review and analysis by the user
11. a safe, although realistic, environment
12. ability to introduce game-like aspects to enhance motivation for learning
13. low-cost functional training environments

Also on the topic of psychological assessment, Laura Medozzi et al., from Milan, described what seemed to be high-quality work comparing traditional tests with VR-based testing.
The case of a patient suffering frontal lobe dysfunction several years after a stroke was used to make the point that traditional tests often fail to reveal deficits that can be identified with VR. This is thought to be due to the nonverbal and immersive realism of VR: in traditional testing, the presence of a human examiner inadvertently provides surrogate control over higher-order faculties, largely through verbal exchanges. The same group, in collaboration with workers under David Rose at the University of East London, described the use of VR to aid cognitive rehabilitation.

Joan McComas of the University of Ottawa described a VR system for developing spatial skills in children. She had carried out a four-condition study where the choice of location to move to was either passive or active, as was navigation to that location. The four conditions were: passenger (passive choice/passive movement), navigator (active choice/passive movement), driver (active choice/active movement) and navigated driver (passive choice/active movement). The task was to find things hidden at locations, without visiting the same location twice. Measures were the percentage of correct choices and the visit on which the first error occurred. It occurred to me that we could use this sort of approach in studies of exploration in 3D information landscapes. A paper by Weniger also struck a chord by comparing spatial learning (maze navigation) with exercise of the executive function (the maze with pictograms) and with the use of orientation skills (navigation of landscapes).

Giuseppe Riva, from the Applied Technology for Psychology Lab at the Instituto Auxologico Italiano in Verbania, also discussed the use of VR for psychological assessment, particularly the development of the Body Image Virtual Reality Scale. Patients choose which virtual body they think matches their own, and which they would prefer to have instead. The difference gives a measure of body image distortion.

Greene and Heeter, of the Michigan State University Communication Technology Lab, described CD-ROMs that contain VR-like stories of cancer sufferers, particularly in relation to coping with pain. Details can be found at [http://www.commtechlab.msu.edu/products/]. An interesting paper by Hunter reported the finding that VR can be very effective in helping burn-recovery patients cope with the pain of treatment.
Patients in the VR condition reported significant pain reduction and less time spent thinking about pain.

Pope described the use of a VR system called "Viscereal" to provide physiological feedback. Users could control the flow of blood to their hands, and hence could warm or cool them at will. It has also been found to be effective in permitting conscious control of bowel activity, easing clinically harmless but distressing conditions such as Irritable Bowel Syndrome.

The Woodburys, a husband-and-wife team from the Puerto Rican Institute of Psychiatry, mused on modern cosmology and the origins of our three-dimensionality. They gave the conference a useful reminder that the 3D world is in our heads, not in the world "out there". Pathological psychological states (especially various psychoses) and altered states of consciousness produced by certain hallucinogenic drugs make this clear, as the world around the experiencer, and his sense of his body and its place in that world, falls apart in typical psychotic panic states. Following Pribram, the Woodburys view the 3D world we know so well as a holographic projection, formed in the brain according to principles established through evolution as aiding survival. While recognising that this world is an illusion, psychiatrists work to restore it in patients whose world has literally collapsed.

Although not mentioned by presenters, one of the audience, Rita Addison, talked about the use of VR to communicate the reality of mental deficits to other, normal people. Rita has visited the VRLab in Umea and is well-known for her "Detour: Brain Deconstruction Ahead", which reproduces for others her visual problems since a car accident a few years ago.
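Several of the systems reported in this section (Wiederhold's physiologically controlled phobia treatment, Pope's "Viscereal" feedback system) share the same closed-loop structure: sample a physiological signal, compare it against a target band, and adjust the intensity of the virtual stimulus accordingly. The sketch below illustrates that generic loop in Python; the function names, the use of skin resistance as the signal, and the threshold values are hypothetical illustrations, not taken from any of the systems described above.

```python
# Generic closed-loop biofeedback sketch: skin resistance (in ohms) is
# used as a proxy for anxiety (a drop in resistance indicates rising
# anxiety, as noted in the Wiederhold discussion above). All thresholds
# and names are illustrative assumptions, not from any cited system.

def adjust_exposure(intensity, skin_resistance, low=20_000, high=50_000,
                    step=0.1):
    """Return a new exposure intensity, clamped to [0, 1].

    Below `low` ohms the patient is read as too anxious, so the
    exposure is eased off; above `high` ohms the patient appears calm
    enough to tolerate a slightly stronger stimulus.
    """
    if skin_resistance < low:        # anxious: back off
        intensity -= step
    elif skin_resistance > high:     # calm: escalate gradually
        intensity += step
    return max(0.0, min(1.0, intensity))


def run_session(readings, start=0.2):
    """Run the loop over a pre-recorded sequence of sensor readings,
    returning the intensity after each reading."""
    intensity = start
    trace = []
    for r in readings:
        intensity = adjust_exposure(intensity, r)
        trace.append(round(intensity, 2))
    return trace
```

The key design point, mirrored from Wiederhold's paper, is the dead band between the two thresholds: fear must be activated for the therapy to work, so the loop does not drive the signal to a setpoint but merely keeps it inside a tolerable range.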