H. Adams, J. Shinn, W. G. Morrel, J. Noble, and B. Bodenheimer.
Development and evaluation of an immersive virtual reality system for medical imaging of the ear. In Proc. SPIE Medical Imaging, 2019.
Immersive, stereoscopic displays may be instrumental to better interpreting 3-dimensional (3D) data. Furthermore, the advent of commodity-level virtual reality (VR) hardware has made this technology accessible for meaningful applications, such as medical education. Accordingly, in the current work we present a commodity-level, immersive simulation for interacting with human ear anatomy. In the simulation, users may interact simultaneously with high-resolution computed tomography (CT) scans and their corresponding 3D anatomical structures. The simulation includes: (1) a commodity-level, immersive virtual environment presented by the Oculus CV1, (2) segmented 3D models of head and ear structures generated from a CT dataset, (3) the ability to freely manipulate 2D and 3D data synchronously, and (4) a user interface that allows for free exploration and manipulation of data using the Oculus Touch controllers. The system was demonstrated to 10 otolaryngologists for evaluation. Physicians were asked to supply feedback via both questionnaire and discussion in order to determine the efficacy of the current system as well as the most pertinent applications for future research.
1. Development and evaluation of an immersive virtual reality system for medical imaging of the ear
Haley Adams (Electrical Engr. & Computer Sci.), Justin Shinn (Otolaryngology), William Morrel (Otolaryngology), Jack Noble (Electrical Engr. & Computer Sci.), Bobby Bodenheimer (Electrical Engr. & Computer Sci.)
2. Ear anatomy is difficult to visualize.
● Structures are complex and small
○ The cochlea has an 8.5 x 7 mm cross-section
● The middle and inner ear are encased in bone
3-4. VR provides opportunity for medical visualization.
● Benefits
○ 3D visualizations for 3D structures
○ Controlled environment
● Cost and Safety
○ Reusable system
○ Different scenarios can be emulated safely and cheaply
Lin et al. (2013) SPIE
5. VR simulation is becoming more viable.
[Image: The NVIS SX60, 1.25 kg (2.76 lb)]

Head mounted display (HMD) | Year Released | Weight | Initial Cost (Approximate)
NVIS SX60 | 2009 | 1290 g (2.76 lb) | $15,000
Oculus DK1 | 2013 | 380 g (0.84 lb) | $400
Oculus CV1 | 2016 | 470 g (1.04 lb) | $400
12. Our data was generated from CT scan volumes.
● Cone-beam CT image of a human head
○ Xoran (Ann Arbor, MI) xCATⓇ
○ 640 x 640 x 355 voxels
○ Voxels are 0.4 mm isotropic in size
https://xorantech.com/products/
13-14. Surface models were created from the CT volume.
● 6 variant skull models were created via marching cubes algorithm
○ Represent sequential time points in mastoid resection procedure
[Images: the six skull models in sequence]
15. Surface models were created from the CT volume.
● Level 6 - No Resection
[Image: skull model at level 6, no resection]
27-28. User Interface
● Translate, rotate, and scale
● Interact with 2D & 3D data synchronously
● View transparent and opaque visualizations
● Simulate a mastoidectomy
29. User Evaluation by Experts
● Demonstration
○ 10 participants
○ 4-7 minutes exposure
● Evaluation
○ Semi-structured questionnaire
○ Think-aloud feedback*
*Fonteyn et al. (1993) Qualitative Health Research
30-31. Evaluation Questions
● User Demographic
● System Usability
● User Understanding
● Open-Ended
Greetings, my name is Haley Adams.
Today, I will be presenting on preliminary work I have conducted with an interdisciplinary team,
which integrates medical imaging of the ear into virtual reality.
-----
Our team includes both researchers from the Dept. of Electrical Engineering & Computer Science
and doctors of medicine from the Dept. of Otolaryngology.
-----
As for myself,
I am a computer science graduate researcher in the Learning in Virtual Environments (LiVE) Lab.
And, in my lab, we study how people perceive and act in
immersive virtual environments (IVEs) as well as (occasionally) their application.
Today's presentation focuses on anatomy visualization in VR,
and we have specifically chosen to look at ear anatomy because…
----- END
…because it is difficult to visualize.
Middle and inner ear anatomy present an especially difficult challenge for medical and surgical training. The structures of interest are complex and small; the largest portion of the cochlea (a spiral-shaped inner ear organ that converts mechanical sound waves into electrical potentials) has a cross-section of only 8.5 x 7 mm. Additionally, the entirety of the middle and inner ear is encased in bone, which presents significant difficulties for visualization and anatomical study.
Fortunately, VR provides opportunities for medical visualization..
-----
FOR ONE,
IVEs allow for 3D, stereoscopic viewing of anatomy, which is *3D data.*
Traditionally, anatomy is visualized using 2D representations, whether that be a textbook or a CT scan.
However, when 2D visualizations are used to represent 3D data, there is a risk of information loss.
And (especially in anatomy) important spatial interactions between anatomical structures can be absent.
This is problematic. Doctors in training must develop a comprehensive understanding of the human body in order to interact with patients. But there are few opportunities for medical students and residents to practice and learn from real, 3D humans. Human cadavers and human patients still represent the gold standard for surgical training. But, not even considering ethical concerns, human subjects are neither easily accessible for practice nor cheap. ----- HALF WAY THRU
---------------- -------------------
Medical professionals must have a comprehensive understanding of the human body in order to interact with patients.
And It is inadvisable for practitioners to perform evaluations and surgeries without practice.
---------------- ------------------- -------------------- - -------------------
For example, here, you can see a figure from prior work published by the LiVE Lab.
In which researchers designed a system to visualize the human abdomen
A simulation like this can provide opportunities to improve patient understanding of medical conditions for the abdomen (like ventral hernias)
And it can be used to improve medical education for anatomy.
------
VR may promote student understanding of 3D anatomy
Can visualize more data, more efficiently
Reduces risk of information loss
-----
Medical student and resident demands
Trainees have limited surgical exposure and a high workload
There are ethical and legal concerns over patient safety and the financial implications associated with accelerating the learning curve
Cost demands
Different scenarios can be emulated safely and cheaply
Cadavers are expensive (and will expire? So not reusable)
Efficiency
Can provide repeated practice
Can provide error correction and feedback required for a proficiency-based curriculum
SO ANOTHER WAY THAT VR PROVIDES opportunity for medical visualization (as well as training)
.. is that it presents a Controlled Environment
→ as such it permits
repeated practice, error correction, and feedback, which are crucial for learning.
----- And while cadavers are expensive and hard to come by, VR, by contrast, is reusable, it is safe both for the virtual patient and for the user, and (now more than ever) it can simulate different scenarios cheaply. ----- END
This is due in part to the release of the Oculus Rift DK1 in 2013
(which paved the way for the current generation of commodity-level displays).
Since then, we have seen the cost of head-mounted displays (HMDs) drop dramatically,
while the ergonomics and quality of these displays have improved.
Now, Before we discuss the current research project,
I would like to provide context by revisiting
Some prior work conducted for anatomy visualization.
----- END
The development of medical visualizations began as early as the 80s.
As such, a wide variety of anatomy has been visualized using different, complex datasets, 3D models, and applications.
For example, here is an image of the Voxel-Man surgical simulator on a CRT monitor, from almost 30 years ago
However, despite the widespread use of technology for medical visualization for all kinds of anatomy,
few applications allow stereoscopic viewing of data
and even fewer of these simulations have been rigorously validated as learning tools.
---- END ----
Looking specifically, at ear anatomy visualizations and simulations, the problem of a lack of rigorous validation becomes more clear.
In fact, it is a known issue now and it was just as well known 13 years ago.
Both Nicholson et al. (2006) and Musbahi et al. (2017), a recent review paper on simulation in otolaryngology,
articulate the validation problem particularly well.
Both papers express concern over limitations of contemporary system evaluations (e.g., the use of small sample sizes)
And they lament the lack of consensus on how surgical and educational systems should be evaluated
The Nicholson et al. study is interesting because the handful of studies
that evaluated the effects of 3D computer-generated anatomical models on learning prior to this point
actually found equivocal or negative results.
The authors speculated that this was due to
study limitations (such as small sample sizes)
and limitations of the models that were studied (e.g., lack of full interactivity).
-------------------- END
A non-immersive simulation via web-based tutorial [LEFT-5] and semi-immersive simulation via stereoscopic glasses and computer screen [RIGHT-6].
Musbahi et al. (2017), Current Status of Simulation in Otolaryngology: A Systematic Review
[Left]
28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model.
At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear
The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).
Content validity
[Right] ( Fang et al. 2014)
Subjects were 7 otolaryngology residents (3 training sessions each) and 7 medical students (1 training session each).
Tested knowledge of anatomy and surgical skills.
Face validity: technology acceptance model (TAM) questionnaire, satisfaction questionnaire (cadaver > VR or plastic)
Average comprehension score was significantly increased from before to after training for all anatomic structures.
Residents had similar mean performance scores after the first and third training sessions for all dissection procedures.
I should also note that...
While there are fewer simulations dedicated to ear anatomy education in otolaryngology,
there are, in contrast, numerous surgical simulations.
Most of these systems use non-immersive and semi-immersive displays.
One reason for this is the frequently employed haptic devices,
which demand the user be seated and face one direction.
Here, we can see two exemplary surgical simulations for the ear.
And on the left, bringing us full circle, is a modern version of the Voxel-Man temporal bone surgical simulator (which we saw two slides ago).
------------ ------------------- ------------------------- ------------------- END OF BG ---------------- --------------------
An exception to this is Tabrizi et al (2017) ??
In the domain of otolaryngology, more work has been conducted for surgical training simulation. Non-immersive systems have been developed for mastoidectomy [7] as well as congenital aural atresia [8]. Semi-immersive simulations have been developed for cochlear implantation [9]. And immersive simulations have been developed for pediatric temporal bone dissection [RIGHT - 10].
[Left]
Voxel-Man TempoSurg Virtual Reality simulator -- the most well-researched immersive temporal bone simulation
74 ear, nose, and throat (ENT) surgeons participated.
Assessed face, content, and construct validity
The participants performed four temporal bone dissection tasks on the simulator. Performances were assessed by a global score and then compared to assess the construct validity of the simulator. Finally, the expert group assessed the face and content validity by means of a five-point Likert-type scale.
Results: experienced surgeons performed better and faster than the novices. However, the groups did not differ in terms of bone volume removed or number of injuries. 93.7% of experienced surgeons stated they would recommend this simulator for anatomical learning. Most (87.5%) also thought that it could be integrated into surgical training.
Conclusion: the Voxel-Man TempoSurg Virtual Reality simulator constitutes an interesting complementary tool to traditional teaching methods for training in otologic surgery.
[Right]
Pediatric temporal bone dissection w/ Oculus Rift CV1
New device. Technical description.
The simulator will present these key structures to the user and warn the user if needed by continuously calculating the distances between the tip of surgical drill and the key structures.
No evaluation?
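The notes above describe that warning mechanism only at a high level. As an illustration of the general technique (not that system's implementation; the point cloud, drill-tip position, and 2 mm threshold are all invented), a per-frame nearest-distance check against a key structure can be done with a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical point cloud sampled from a key structure's surface
# (e.g., the facial nerve), in millimeters.
structure_pts = np.random.rand(5000, 3) * 50.0
tree = cKDTree(structure_pts)  # build once, query every frame

def proximity_warning(drill_tip_mm, threshold_mm=2.0):
    """Return (warn, distance) for the drill tip vs. the nearest structure point."""
    dist, _ = tree.query(drill_tip_mm)
    return dist < threshold_mm, dist

warn, dist = proximity_warning(np.array([25.0, 25.0, 25.0]))
print(warn, round(dist, 2))
```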
Our current work demonstrates a simple prototype for anatomy visualization
that uses beautiful data generated by our team's medical imaging expert.
We then demonstrated this prototype to ten otolaryngologists (or ENTs, as they're more commonly known)
to determine how a more mature version of the system
could be developed for practical use in clinical education, training, or practice.
--------------------- ------------------ END -------------- -------------
Within our simulation, participants can interact with segmented ear structures inside of a corresponding skull model--all of which were generated in prior research.
I will provide a brief overview of the methods employed.
However, if you would like to know more, please refer to the cited publications.
All of the surface models used in our simulation
were generated from the same high resolution CT scan volume…
Which was captured using the Xoran xCAT
---------------- --------------------- ---------------- -----------------------
Cone-beam CT image of a human head
Contains 640 x 640 x 355 voxels that are 0.4 mm isotropic in size
Acquired with a Xoran (Ann Arbor, MI) xCATⓇ
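As a quick worked example of what those numbers imply (a sketch; the arithmetic uses only the specs above):

```python
# Physical extent of the scan: voxel counts times the 0.4 mm isotropic spacing.
shape = (640, 640, 355)              # voxels (x, y, z)
spacing_mm = 0.4                     # isotropic voxel size
extent_mm = tuple(n * spacing_mm for n in shape)
print(extent_mm)                     # (256.0, 256.0, 142.0) -> ~25.6 x 25.6 x 14.2 cm
```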
They were created directly from the CT volume; the marching cubes algorithm was used to generate the surface models.
(revisit MATLAB code)
From this dataset, we further generated 6 variant skull models
that represent the shape of bony structures at sequential time points in a mastoid resection procedure:
no resection, complete resection, and four intermediate time points.
Mastoidectomy = a common otological procedure performed to access critical structures of the ear, namely the middle and inner ear.
(It is used for cochlear implants, aural atresia, and other similar operations.)
Now, why did we do this?
The temporal bone surgical simulators that we discussed previously strive to accurately simulate mastoidectomies, because this operation is necessary to safely access critical ear structures.
Although we do not simulate the actual resection procedure in our visualization, we believed that viewing the loss of bony structures in this area was important for better understanding where critical ear anatomy is located relative to the mastoid view. Therefore, all six models are included in the simulation.
NEXT SLIDE !!
To simulate the resection, an experienced otologist used custom software developed in-house for editing 3D images to erase tissue in the volumes,
and the mastoidectomy was simulated iteratively on the CT images, which generated the six variant skull models.
The resection occurs in a small area of the skull behind the ear,
as you can see here (GESTURE).
However, it is difficult to see at this scale,
so I have included some zoomed-in images for better viewing.
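The note above suggests the original surface extraction was done with in-house MATLAB code, which is not shown here; purely as a hedged sketch of the marching cubes step, an equivalent in Python with scikit-image follows. The synthetic volume, iso-level, and output path are assumptions for illustration.

```python
import numpy as np
from skimage import measure

# Placeholder volume: a synthetic sphere so the snippet runs end to end;
# in practice this would be the 640 x 640 x 355 CT volume.
z, y, x = np.ogrid[:64, :64, :64]
ct = 1000.0 * (((x - 32)**2 + (y - 32)**2 + (z - 32)**2) < 20**2)

# Marching cubes at an assumed bone iso-level; `spacing` scales the
# vertices to millimeters (0.4 mm isotropic voxels, per the scan).
verts, faces, normals, values = measure.marching_cubes(
    ct, level=300.0, spacing=(0.4, 0.4, 0.4))

# Write a minimal OBJ so the mesh could be imported into Unity
# (OBJ face indices are 1-based).
with open("skull_level6.obj", "w") as f:
    for vx, vy, vz in verts:
        f.write(f"v {vx} {vy} {vz}\n")
    for a, b, c in faces + 1:
        f.write(f"f {a} {b} {c}\n")
```
-------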
The ear structures, themselves, were segmented using automatic methods.
This has interesting implications for our VR system,
Because it makes it more accessible for applications that benefit from patient specific data.
For example, all of the structures that I am about to show you, can be segmented in about 5 minutes.
And, in theory, this rapid segmentation coupled with VR integration
makes VR actually practical for surgical training and preoperative planning. -----
For example, in congenital aural atresia, children are born without an ear canal.
Their ear anatomy is abnormal, which makes corrective surgery challenging even for experienced surgeons.
Being able to see the patient’s anatomy prior to operation
would allow surgeons to better determine candidacy (i.e., determine whether the risk of the operation outweighs the benefit)
And even plan operations.
----------------- ---------------------- - --------------------------------- --------------------------
Active-shape model-based techniques
Graph-based path finding techniques
Non-rigid registration-based techniques
STATE WHY THIS IS IMPORTANT
SURGICAL TRAINING - ABNORM ANAT (CI)
PREOPERATIVE PLANNING
So, here, we can see some of the 3d models used in the simulation.
As a reminder, below each figure I have included the publications which go into further detail on the segmentation process
For each anatomical structure.
The scala vestibuli, scala tympani, modiolus, cochlea, & semicircular canals
were all generated using active-shape model-based techniques.
-------------------
Scala tympani, scala vestibuli, modiolus [14, 15]
Cochlea [16]
Meanwhile, the facial nerve and chorda tympani were generated
using graph-based path finding techniques.
-------------- -------------- -------------- --------------
Facial nerve and chorda tympani [18]
And finally, the ossicles, tympanic membrane, and external auditory canal
were segmented using non-rigid registration-based techniques.
-------------------------
Now you’ve seen all of the segmented anatomy used in the VR system.
------ ---------------- -------------------
Ossicles, external auditory canal, and tympanic membrane [20]
The simulation, itself, was developed in Unity, a multiplatform game engine,
And it was rendered by the Oculus Rift CV1
--------------------------- ------------------------------- ---------------
Unity = a multiplatform game engine
1200 x 1080 pixel resolution per eye
110° diagonal field of view
Maintained a frame rate of 90 Hz
Cg, a shading language largely interchangeable with HLSL (the High-Level Shading Language for DirectX)
Here, we can see an overhead view of the virtual environment
In which the skull and ear models----that we viewed previously----were placed in the center
In addition, the virtual room contained screens at each corner
That displayed CT scans
The screens were placed so that the user may view the CT scans from most angles.
-----------------
A room with screens along the walls was designed.
Large, open room with screens along the walls
3d models placed in the center
The user interface was designed to encourage free exploration
----------------
Translation, small rotation
Change screen scale
Interact with 2D and 3D data synchronously via a cutting plane (see the sketch below)
Change of cutting plane
You can see the ear [STOP 1:00]
-----------
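As a hedged sketch of that 2D/3D synchronization (the shipped system was built in Unity; this Python version with assumed names only illustrates the mapping), moving the cutting plane reduces to converting its world-space height into a voxel index and re-displaying that CT slice:

```python
import numpy as np

SPACING_MM = 0.4  # isotropic voxel size of the scan

def slice_for_plane(ct, plane_z_mm):
    """Return the axial CT slice corresponding to a cutting plane at
    plane_z_mm (world millimeters), clamped to the volume bounds."""
    k = int(round(plane_z_mm / SPACING_MM))
    k = max(0, min(ct.shape[2] - 1, k))
    return ct[:, :, k]  # 2D image to push to the virtual screen

# Usage: as the user drags the 3D cutting plane, re-query the 2D slice.
ct = np.zeros((640, 640, 355), dtype=np.float32)  # placeholder volume
screen_image = slice_for_plane(ct, plane_z_mm=70.0)
```
-----------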
[SKIP 2:12]
4. Simulate a mastoidectomy
Transparent mode
-----------
[[Why were the design decisions for the UI made the way they were?]]
Bare bones - driven by basic functionality needs
Uncertain of specific design needs for medical professionals (exploratory design that allowed for flexibility)
E.g. surgical view, naive user view
CT scans. One at a time (overwhelming in preliminary eval)
Multiple walls
-------------------------
The user interface for the first prototype was driven by functionality demands.
As we were uncertain of specific design needs for medical professionals,
I developed a system that promoted free exploration.
This was the system that was demonstrated to 10 otologists informally in 4-7 minute sessions.
Due to time constraints during the demonstration, we used a fairly short semi-structured questionnaire,
which collected demographic information and subjective measures of users' experience with the system.
However, physicians were also asked to supply feedback verbally
in order to determine the most pertinent applications for future development.
In fact, many of the applications I've reviewed throughout today's presentation were
revealed through think-aloud feedback from physicians while they used the system
and from discussion with them after using the system.
------------------------------------ ------------------------------------ ------------------------------------ END
Semi-structured Questionnaire ((contains both open-ended and closed-ended questions))
User demographic
System Usability
User Understanding
Open-ended Evaluation
Face Validity: the degree to which an assessment subjectively appears to measure the variable or construct that it is supposed to measure. In other words, face validity is when an assessment or test appears to do what it claims to do.
So here we can see a full version of the questionnaire
It collected demographic information and subjective measures about the user's experience with the system.
And we will focus on the evaluations of system usability,
evaluation of user understanding,
And the open-ended evaluation questions
Do not fret if you cannot read them. They will return shortly in the context of our results.
PAUSE
To assess system usability, we focused on 3 aspects of the system:
Usefulness
Ease of control
Success
These were evaluated using a Likert scale.
These are modified questions from the System Usability Scale (SUS)
Due to time constraints of the demonstration, we regrettably could not use the full questionnaire.
But we intend to in future work.
Fortunately, as you can see, responses were overwhelmingly positive.
-------------
Short non-validated questionnaire due to time constraints of the demonstration
Likert scale
Desirable functionalities from open ended questions
Our open-ended questions were also promising.
For example, Q 11. asked: “Would you find this useful in clinical practice?”
And all participants who responded answered affirmatively [9/9].
And for Q 12.
When the otolaryngologists were asked open-ended questions about which interaction they believed contributed most to their understanding of anatomy, five out of ten responses indicated that the ability to interact with the skull was most beneficial. Four comments indicated that the ability to zoom in or be inside the ear, especially the inner ear, was beneficial.
And THREE comments suggest that viewing the anatomical relationships from different angles contributed to the users’ understanding of the anatomy.
---------------------- ------------------ --------------------
Q 13.
Encouraging
Positive “Great job!” etc.
There was a clear issue with our evaluation of user understanding.
Although several otologists made comments about how the system could apply to education,
our questionnaire was unable to capture ANY subjective measures of learning,
because all of our participants were already experts in ear anatomy.
In addition, our current evaluation (in general) is highly preliminary.
As such, even though our results are promising, they are no substitute for those found through more formal evaluations
of system validity & system usability.
----------------- ---------------------- ----------------------------- END
-----
The experience was fine to gauge interest and possibility for future collaboration and development
BUT is not a formal validation
Audience unclear for demonstration.
We were not certain of our audience and the questionnaire reflects this
How can you quantify in more detail: “Did the demo better your understanding of the anatomy?”
Audience issue: “already experts”
Evaluate learning effects. Ex: cadaver + fiducial
e.g., distance error, time
What statistics could/should be used to quantify the survey data (or the next round of survey data)?
More data. Power analysis, independent t-tests (see the sketch below).
Many studies used small numbers and lacked power calculations (like ours)
QQ plot?
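To make those notes concrete (a sketch only; the scores below are invented, not study data), the named analyses, an independent t-test plus a power calculation for the next round of data collection, would look like:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Invented quiz scores for two hypothetical learning groups (VR vs. textbook).
vr = np.array([85, 90, 78, 88, 92, 81])
textbook = np.array([70, 75, 68, 80, 72, 74])

# Independent two-sample t-test (Welch's, avoiding the equal-variance assumption).
t, p = stats.ttest_ind(vr, textbook, equal_var=False)

# Power analysis: participants per group needed to detect a medium
# effect (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

print(f"t = {t:.2f}, p = {p:.4f}; n per group for d = 0.5: {n_per_group:.0f}")
```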
However, the qualitative feedback we received from otologists has been key for designing our ongoing work.
Specifically, the current study helped us determine a context of use--
in which we can better evaluate how and why VR may benefit medical visualization
And--for us--that context of use is anatomy education.
------ ------ ---------- SKIP IF RUNNING OUT OF TIME vv
Our ongoing study uses a between subjects design to evaluate medical students’ ability
to locate ear anatomy on a human cadaver
after they have learned about the anatomy by viewing it in either VR or a standard textbook. The scripts are kept the same between groups.
We currently see a trend in which the VR learning group is outperforming the standard learning group, but this may change upon the completion of the study.
------ ------ ------ ------ ---------- BACK
In the long term, we also hope to study surgical training and preoperative planning in VR.
----------------------------
We needed to specify a context of use.
Expert evaluation allowed this
Then we were able to use their feedback to create a second study, which is currently ongoing.
In this study, we are evaluating the simulation as an educational tool via AB testing (between subjects study)
VR vs traditional method (textbook)
… methods
… currently, we see trends in the data which suggest that VR improves medical students' ability to locate anatomical structures on an actual cadaver.
In conclusion,
We present a promising preliminary evaluation of our VR system for ear anatomy visualization.
And our feedback from domain experts encourages the future development of our system.
We believe that virtual reality may benefit the domain of medical visualization.
And, in future work, we hope to determine
whether it can,
and, if it can, how,
in the context of anatomy education.
-------------------------------- END
Overall responses from verbal feedback and the questionnaire indicated that the system was easy to control and that it was perceived as useful for clinical practice.
Although highly preliminary, our results are promising and encourage further development of the system.
Otolaryngologists saw immediate uses of the system for training residents and medical students. It was also discussed among the otolaryngologists that the VR system would provide insight and likely improve patient outcomes if this data was available prior to surgical intervention.
-----
For future work, my lab is interested in better understanding and evaluating how 3D visualizations can transfer into meaningful understanding in real world settings. We hope to develop more rigorous and robust evaluation in the future
Verbal feedback was recorded by note taking, and many of these results are not reported in the SPIE submission.
In my final slide, I acknowledge the ONR, the NSF, and some of the great people whose advice and support made this publication possible.
-----
Thank you all for your attention!
I will now take your questions.
------ ------ ------ ------ ------ ------
Alejandro Rivas Campo, Robert Labadie, Shilo Anders, and Priya Rajan for advice and support throughout the project. This work was supported in part by the National Science Foundation.
** Grant #