Student’s name and surname: Jan Klimczak
ID: 112006
Second cycle studies
Mode of study: Part-time
Field of study: Informatics
Specialization: Systems and Mobile Technologies
MASTER'S THESIS
Title of thesis:
Immersive 3D Visualization Laboratory Demonstrator
Title of thesis (in Polish):
Demonstrator możliwości Laboratorium Zanurzonej Wizualizacji Przestrzennej
Supervisor: PhD MEng Jacek Lebiedź
signature
Head of Department: PhD MEng, Professor with habilitation, Bogdan Wiszniewski
signature
Gdańsk, 2014
STATEMENT
First name and surname: Jan Klimczak
Date and place of birth: 12.04.1982, Gdańsk
ID: 112006
Faculty: Faculty of Electronics, Telecommunications and Informatics
Field of study: informatics
Cycle of studies: postgraduate studies
Mode of studies: Part-time studies
I, the undersigned, agree/do not agree* that my diploma thesis entitled: Immersive 3D
Visualization Laboratory Demonstrator may be used for scientific or didactic purposes.1
Gdańsk, ................................. ................................................
signature of the student
Aware of criminal liability for violations of the Act of 4th February 1994 on Copyright and Related
Rights (Journal of Laws 2006, No. 90, item 631) and disciplinary actions set out in the Law on
Higher Education (Journal of Laws 2012, item 572 with later amendments),2 as well as civil
liability, I declare that the submitted diploma thesis is my own work.
This diploma thesis has never before been the basis of an official procedure associated with the
awarding of a professional title.
All the information contained in the above diploma thesis which is derived from written and
electronic sources is documented in a list of relevant literature in accordance with art. 34 of the
Copyright and Related Rights Act.
I confirm that this diploma thesis is identical to the attached electronic version.
Gdańsk, ................................. ................................................
signature of the student
I authorise the Gdańsk University of Technology to include an electronic version of the above
diploma thesis in the open, institutional, digital repository of the Gdańsk University of
Technology and for it to be submitted to the processes of verification and protection against
misappropriation of authorship.
Gdańsk, ................................. ................................................
signature of the student
*) delete where appropriate
1 Decree of Rector of Gdańsk University of Technology No. 34/2009 of 9th November 2009, TUG archive instruction
addendum No. 8.
2 Act of 27th July 2005, Law on Higher Education:
Art. 214, section 4. Should a student be suspected of committing an act which involves the appropriation of the
authorship of a major part or other elements of another person’s work, the rector shall forthwith order an enquiry.
Art. 214 section 6. If the evidence collected during an enquiry confirms that the act referred to in section 4 has been
committed, the rector shall suspend the procedure for the awarding of a professional title pending a judgement of the
disciplinary committee and submit formal notice of the committed offence.
STRESZCZENIE
This research and development thesis presents the possibilities of creating applications to be run in CAVE-type installations. It begins with a review of existing solutions, describes how and where they are used, and then focuses on what a CAVE is and how it is built. The following chapters describe the Laboratorium Zanurzonej Wizualizacji Przestrzennej (LZWP, Immersive 3D Visualization Laboratory) at Gdansk University of Technology and other similar installations. Next, the methodology of creating applications for a CAVE is presented, including a review and comparison of code libraries, frameworks, and editors with a graphical user interface (GUI), which speed up and simplify the application development process. The final part of the thesis describes the demonstration application that was created, which can be run in the CAVE at LZWP, and the conclusion presents further research and development plans. The first appendix describes the demonstration applications compiled and run while this thesis was being prepared, while the second appendix is documentation showing how to start working with the ViSTA Virtual Reality Toolkit framework.
The result of this work is confirmation that creating applications from scratch in code to be run in a CAVE is a complicated process. Several good frameworks are available on which an application can be based. A simpler way of creating CAVE applications is to use an existing editor with a graphical user interface, which allows such an application to be built visually. This greatly simplifies and speeds up the design of CAVE applications, but to some extent limits what can be created.
Field of science and technology: Virtual Reality, CAVE, 3D Graphics, OpenGL, Distributed Processing, Game and Simulation Engines, Simulators, Scene Graph Engines, 3D Engines.
ABSTRACT
This research and development (R&D) work presents the possibilities of creating applications to run in cave automatic virtual environment (CAVE) installations. It begins with a short review of existing solutions, describing how and where they are used, and then explains what a CAVE is and how it is built. The next chapter describes the Immersive 3D Visualization Laboratory (I3DVL, Polish: Laboratorium Zanurzonej Wizualizacji Przestrzennej, LZWP) at Gdansk University of Technology. The thesis then focuses on the methodology of developing CAVE applications, containing a review and comparison of code libraries, frameworks, and editors with a graphical user interface (GUI) that speed up and simplify the development process. Finally, it describes the example application developed to run in the CAVE at I3DVL and outlines possibilities for future R&D. The first appendix presents the demonstration applications compiled and run while preparing this thesis; the second is start-up documentation for working with the ViSTA Virtual Reality Toolkit.
The conclusion of this work is that developing applications for a CAVE from scratch in code is difficult. A few good frameworks exist on which an application can be based. An easier way of creating CAVE applications is to use a dedicated tool with a GUI, in which the application is assembled visually. This simplifies and speeds up the development of CAVE applications, but it also has its own limitations, which are discussed further in this work.
Keywords: Virtual Reality, CAVE, 3D Computer Graphics, OpenGL, Distributed Rendering,
Game and Simulation Engine, Simulators, Scene Graphs, 3D Engine.
TABLE OF CONTENTS
STRESZCZENIE........................................................................................................................... 3
ABSTRACT ................................................................................................................................... 4
LIST OF MAJOR SIGNS AND ABBREVIATIONS ........................................................................ 8
INTRODUCTION AND PURPOSE OF WORK ............................................................................. 9
1. CAPABILITIES OF VIRTUAL REALITY.................................................................................. 11
1.1. Image ..........................................................................................................................................11
1.2. Sound ..........................................................................................................................................11
1.3. Other channels – touch and smell...............................................................................................12
1.4. Interaction....................................................................................................................................12
2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY ......................................................... 13
2.1. CAVE...........................................................................................................................................13
2.2. Edge blending .............................................................................................................................14
2.3. Colour Mapping...........................................................................................................................15
2.4. 3D Image.....................................................................................................................................15
2.5. Eye Tracking ...............................................................................................................................16
2.6. Surround 8.1 sound.....................................................................................................................17
2.7. VirtuSphere - locomotion platform..............................................................................................17
3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS......................................... 19
3.1. I3DVL - Gdansk University of Technology ..................................................................................19
3.2. Silesian University of Technology, Poland..................................................................................20
3.3. aixCAVE - Aachen University, Germany.....................................................................................21
3.4. Possible applications of CAVES .................................................................................................22
3.4.1. Flooding Crisis Simulation......................................................................................... 22
3.4.2. Molekül Visualisierung (MCE) ................................................................................... 22
3.4.3. Neurochirurgieplanung in immersiven Umgebungen............................................... 23
3.4.4. Virtual Gallery ........................................................................................................... 23
3.4.5. Example students projects........................................................................................ 24
4. PROPOSAL OF USE I3DVL ................................................................................................... 27
4.1. Simulations..................................................................................................................................27
4.2. Medicine......................................................................................................................................27
4.3. Prototyping ..................................................................................................................................27
4.4. Games.........................................................................................................................................27
4.5. Fun ..............................................................................................................................................28
4.6. Marketing.....................................................................................................................................28
4.7. Trainers .......................................................................................................................................28
5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL................................................. 29
5.1. I3DVL as complete platform........................................................................................................29
5.2. Creating Virtual Reality applications for CAVE ...........................................................................30
5.2.1. Existing libraries and frameworks............................................................................. 31
5.2.1.1. API Graphic .........................................................................................................31
5.2.1.1.1. DirectX ..........................................................................................................31
5.2.1.1.2. OpenGL.........................................................................................................32
5.2.1.2. Scene graph engines...........................................................................................32
5.2.1.2.1. OpenGL Performer .......................................................................................33
5.2.1.2.2. OpenSG ........................................................................................................33
5.2.1.2.3. OpenSceneGraph .........................................................................................34
5.2.1.2.4. NVIDIA SceniX - NVSG ................................................................................35
5.2.1.2.5. Summary.......................................................................................................37
5.2.1.3. Frameworks for CAVE solutions..........................................................................39
5.2.1.3.1. ViSTA............................................................................................................39
5.2.1.3.2. VR Juggler ....................................................................................................42
5.2.1.3.3. Equalizer .......................................................................................................42
5.2.1.3.4. Summary.......................................................................................................45
5.2.1.4. Support libraries ..................................................................................................46
5.2.1.4.1. Cg Toolkit ..................................................................................................46
5.2.1.4.2. NVIDIA OptiX ............................................................................................46
5.2.2. Graphical editors .......................................................................................................47
5.2.2.1. Create own editor with GUI .................................................................................48
5.2.2.2. GUI libraries.........................................................................................................48
5.2.2.3. Existing graphic editors ..........................................................................................48
5.2.2.3.1. Simulators.........................................................................................................49
5.2.2.3.2. CAVE supported and dedicated .......................................................................56
5.2.2.3.2.1. VBS - Virtual Battlespace...........................................................................56
5.2.2.3.2.2. Quazar3D...................................................................................................57
5.2.2.3.2.3. EON Studio ................................................................................................60
5.2.2.3.2.4. Vizard.........................................................................................................62
5.2.2.3.2.5. Summary....................................................................................................65
5.2.2.3.3. Game dedicated engines..................................................................................66
5.2.2.3.3.1. UNIGINE ....................................................................................................66
5.2.2.3.3.2. UDK............................................................................................................67
5.2.2.3.3.3. CryEngine ..................................................................................................68
5.2.2.3.3.4. UNITY ........................................................................................................68
6. DEMONSTRATIVE PROJECT FOR I3DVL............................................................................70
6.1 System project............................................................................................................................. 70
6.2 Implementation notices................................................................................................................ 73
6.3 Quality tests................................................................................................................................. 74
6.4 Performance tests ....................................................................................................................... 75
6.5 System presentation.................................................................................................................... 76
6.6 User manual ................................................................................................................................ 78
7. FUTURE R&D WORK FOR I3DVL .........................................................................................80
8. SUMMARY ..............................................................................................................................81
THE STUDY BENEFITED FROM THE FOLLOWING REFERENCES ......................................82
LIST OF FIGURES...................................................................................................................... 85
LIST OF TABLES........................................................................................................................ 86
Attachment A - Example Applications......................................................................................... 87
1. ViSTA .............................................................................................................................................87
2. OpenSG 1.8 ...................................................................................................................................91
3. OpenSG 2.0 ...................................................................................................................................96
4. OpenSceneGraph 3 .....................................................................................................................103
4.1. Sample applications based on the books .................................................................................127
5. Nvidia SceniX 7............................................................................................................................138
Attachment B - ViSTA ............................................................. 142
1. Download framework....................................................................................................................142
2. Compilation prepare.....................................................................................................................142
3. Setting up environment variables.................................................................................................143
4. Prepare project for Visual Studio 2012 ........................................................................................143
5. Libraries required by the sample application ...............................................................................148
6. Compilation of the sample applications .......................................................................................148
7. Configure sample application.......................................................................................................150
8. Manual create a project into Visual Studio 2012..........................................................................154
9. 3D objects import test ..................................................................................................................157
LIST OF MAJOR SIGNS AND ABBREVIATIONS
2D – Two-dimensional space
3D – Three-dimensional space
CAVE – Cave Automatic Virtual Environment
CryVE – Cryengine automatic Virtual Environment
FPS – Frames Per Second
GUI – Graphical User Interface
HDD – Hard Disk Drive
I3DVL – Immersive 3D Visualization Laboratory at Gdansk University of Technology
ODE – Open Dynamics Engine
R&D – Research and Development
SSD – Solid-State Drive
UDK – Unreal Development Kit
VBS – Virtual Battlespace
VR – Virtual Reality
INTRODUCTION AND PURPOSE OF WORK
This Master's thesis is an R&D work on the possibilities of using and developing applications for CAVEs. I begin by explaining virtual reality (VR). Then I describe the Immersive 3D Visualization Laboratory (I3DVL) at Gdansk University of Technology in Poland and the many important elements that make up the laboratory. Its main part is a six-wall cave automatic virtual environment (CAVE) with multiple projectors per wall to increase the quality of the displayed image. The system also includes image blending, a tracking system, surround sound, and the VirtuSphere locomotion platform. The locomotion platform is a large sphere that the user can step inside and freely walk or run in to move through virtual reality. The combination of the VirtuSphere locomotion platform with a CAVE is quite remarkable, perhaps the first such configuration in the world.
Then I review existing CAVE installations. I visited the i3D company with my supervisor, PhD Jacek Lebiedź, and PhD Adam Mazikowski. Afterwards I describe a few CAVE configurations from neighbouring countries, including the biggest CAVE in Europe at Aachen University in Germany, and continue with a few impressive CAVE installations around the world. I also include the Survey Simulator from DNV, whose VR Training Centre at DNV Academy Poland in Gdynia trains and certifies employees from all over the world. It is worth noting that this system shortens training time from five years to about one, which is a great result. The centre contains an interesting and very comfortable rear-projection system that strengthens the sense of immersion; this chapter therefore also includes an account of my visit to the DNV VR Training Centre.
After this introduction I describe possible uses of the Immersive 3D Visualization Laboratory in the domains of simulators, medicine, prototyping, games, entertainment, marketing, and training, showing how a CAVE laboratory can be used for such different kinds of projects.
Next I move on to the methodology of creating solutions for I3DVL and point out common problems and requirements in developing applications for a CAVE. I note that applications can be developed from scratch, for which I highlight important functionality of the OpenGL and DirectX APIs. I then turn to scene graph engines in common use, such as OpenSceneGraph, OpenSG, and NVIDIA SceniX, as well as some that are now obsolete but were once very common and important, such as OpenGL Performer and CAVELib. These are powerful graphics libraries that make developing 3D applications and VR simulations easier and faster. Afterwards come frameworks such as Equalizer, VR Juggler, and ViSTA, which enable the creation of CAVE applications on top of the previously described scene graphs. Their main added functionality is distributed rendering and image display, easily configurable setups (develop once, then run the application on different computer configurations and CAVE installations), and support for many different input and output devices such as manipulators and trackers. This part focuses on coding applications for a CAVE.
To speed up the development of CAVE applications, the use of GUI editors is recommended; I describe the Qt framework, which is well suited to building them. To go one step further, you can use existing simulators or GUI engines. VBS from Bohemia Interactive Simulations is a training simulator commonly used by the military, police, fire brigades, and ambulance services all over the world, for example in the US Army and NATO, and currently in Poland as well. Gdansk University of Technology has recently signed a contract with Bohemia under which they will jointly create a Crisis Management Center based on Virtual Battlespace (VBS), a configurable simulator system. If that is not enough, you can use CAVE-dedicated editors such as Quazar3D, EON Studio, and Vizard, which are complete environments specialized in the fast and easy creation of CAVE applications. There are also a few very good game engines on the market, such as UNIGINE, the Unreal Development Kit (UDK), CryEngine, and Unity, which are powerful tools for creating AAA-level games.
For almost all of the scene graphs, graphics editors, and game engines, I went through compiling, configuring, running, and analyzing the frameworks as well as their examples and some of their tutorials, to learn how they really work. The result of this analysis is a large attachment describing about three hundred small applications; their screenshots in HD resolution can be found on the attached disc. There is also an additional attachment devoted to the ViSTA framework, which is distributed without any documentation, intended to help you start working with it should you decide to use it.
1. CAPABILITIES OF VIRTUAL REALITY
Virtual reality has many names and meanings, and it is interpreted differently by different people and institutions. Nevertheless, a few common elements glue the interpretations together: a virtual world, immersion, sensory feedback (responding to user input), and interactivity [1].
The virtual world is the environment where the action takes place: an imaginary space, often manifested through a medium, and a description of the objects in that space.
Immersion means that the user must be physically immersed, having "a sense of presence" within the alternate reality or point of view. The alternate world may be a representation of an actual space that exists somewhere, or it may be a purely imaginary environment.
Sensory feedback allows participants to select their vantage point by positioning their bodies and to affect events in the virtual world. The VR system provides direct sensory feedback to the participants based on their physical position, usually via a tracking system.
Interactivity is the fourth element of VR: responding to user interaction. It provides the opportunity to interact with virtual worlds and simulations.
1.1. Image
Do we need a photo-realistic image to create virtual reality? No, we don't. There are VR systems for blind people without any graphics, in which users can interact and act in virtual worlds. Looking back, the first virtual worlds were created in games purely in text mode, and these early text-based computer games can be considered a form of VR. By improving graphic quality we simply deepen immersion. What matters is how we see: in colour, on a big screen, or through glasses such as an HMD. Resolution is important for the quality of the virtual world. Can we believe that what we see could be real? Virtual reality is a medium; a virtual world is a representation of some world that may or may not exist physically, and we use an image to visualize it.
1.2. Sound
In real life we hear sound everywhere; only in a vacuum is there no sound. That is why we sense that a silent environment cannot be real. The same holds in VR: sound improves immersion, and without it we lose a great deal and feel that something is not right.¹ The quality of sound is very important, as is a good spectrum of sound: we know many sounds and how they should be heard. The position of sound in 3D space is also very important, just as in real life. We can use background sounds, sound effects, and voices, which the system can also recognize for interaction. All of these improve the level of immersion.
¹ This does not apply to deaf people.
1.3. Other channels – touch and smell
Virtual reality systems can be enhanced with other channels, such as touch and smell, which improve immersion. They are optional, not required, but thanks to them the experience feels much more like the real world.
Touch means you can handle a real device that transfers your manipulations into the virtual world. Real-like devices or platforms, e.g. a car cabin, plane cockpit, or submarine control room, can be similar or even identical to their real counterparts. This gives you more accurate control over the system and allows training on real-like devices and situations in different configurations; using such techniques, you can work with VR the way you are used to working in reality. There are also many devices created specifically for virtual reality that help you navigate in space. Motion-driven manipulators react to your translation and rotation along different axes, even simultaneously. You can choose between analogue, digital, or mixed devices. Analogue thumbsticks give you easy, continuous control as you slide up or down, letting you set values almost visually, although at the cost of lower precision than digital controls. Digital devices provide state buttons, keypads, or touch surfaces.
On the other side, there are devices that react to the simulation. Haptic devices are one example: they usually contain mechanisms providing force feedback that you feel while working with them. For instance, when painting a 3D model with a haptic pen, you feel resistance the moment the virtual pen touches the geometry. Other such devices include installations that produce water bubbles, fog, or other substances controlled by the simulation.
Modern platforms can also emit many different scents, which completes the experience of immersion.
1.4. Interaction
Interaction is very important in VR; without the possibility to interact, you would feel as though you were merely watching a movie. To be immersive, the virtual world needs to react in real time. There are many ways a simulation can interact with you: through manipulators, keyboards, mice, and trackballs, as well as mobile, real-like, or touch devices, simulation platforms, cockpits, and so on. You can use motion capture and interact through movement, use sensors such as gloves or body tracking, or speak voice commands. Head tracking greatly improves immersion: as you change the position and orientation of your viewpoint, the scene is redrawn from the corresponding position and angle in real time. A head tracking system can be combined with other manipulators to get the best results and better interaction with the simulated VR [2].
2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY
I3DVL is an advanced CAVE laboratory built at the end of 2014 at Gdansk University of Technology. Working out the setup and the specification of the laboratory took a few years of research. The main idea was to design a high-end CAVE with a unique feature that would improve its usability, and the decision was to install a locomotion platform inside a six-wall CAVE as the distinguishing element. The locomotion platform is a large sphere that the user can enter and walk in freely around the virtual world. This solution, unique in the world, opens up the possibility of researching and developing new kinds of applications.
2.1. CAVE
A cave automatic virtual environment (better known by the acronym CAVE) is an immersive virtual reality environment in which projectors are directed at the walls of a room-sized cube (see Fig. 2.1) [3].
Fig. 2.1. Typical CAVE installation [3]
The CAVE at the university contains six walls forming a cubic room; each wall is 3.4 m wide and high. The room is about 3 m above the floor of the building containing the CAVE. The walls are made of acrylic glass. The floor is strengthened and divided into two parts with a very thin gap between them, invisible from above; the floor glass withstands a load of about 500 kg. The image is projected by a rear-projection system of 12 DLP Full HD 3D 120 Hz projectors with a laser calibration system. A metal construction positions the CAVE room at the second level of the building, where a light floor eases entrance into the room. The first level contains 2 projectors with mirrors that project the image onto the CAVE floor, and 10 additional projectors located around the room project the image onto the surrounding CAVE walls. The displayed image is high quality, with a resolution of 1920x1920 px per wall. Such a display system needs huge computing power, provided by 14 servers, each with 32 GB RAM, an NVIDIA Quadro K5000 4 GB, an SSD, and full-duplex fibre InfiniBand 40 Gb/s networking, which guarantees high-quality image display.
2.2. Edge blending
The high-quality image on each wall is created by displaying two images from 2 projectors.
The first problem is that the CAVE walls are square, and the images from the two sources do not
exactly fill the surface of the wall. The second problem is how to join the two images into one so
that there is no visible gap and no artefacts between them.
Fig 2.2. Edge blending and Color Mapping [4]
The solution is to set up the two images so that they overlap, and to use edge blending.
Edge blending blends the two images where they overlap. It creates a seamless image by
adjusting the brightness at the adjoining edges when multiple projectors are used side by side to
reproduce a single widescreen image [4].
Fig. 2.3. Edge blending function [4]
Blending, simply put, is the process of ramping the transparency from zero to one hundred
percent across the overlapped part of each image, which makes the junction between them
invisible [5].
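The ramp described above can be sketched as a pair of complementary weight functions. This is a minimal illustration assuming a simple linear ramp; real projector systems typically apply gamma-corrected blend curves, and the function names and the 480 px overlap width in the usage example are illustrative.

```python
def blend_weight_left(x, overlap_start, overlap_width):
    """Linear edge-blending weight for the left projector, whose image
    fades out over [overlap_start, overlap_start + overlap_width].
    Returns 1.0 (full brightness) before the overlap, 0.0 after it."""
    if x < overlap_start:
        return 1.0
    if x >= overlap_start + overlap_width:
        return 0.0
    return 1.0 - (x - overlap_start) / overlap_width

def blend_weight_right(x, overlap_start, overlap_width):
    """Mirrored ramp for the right projector, so that across the whole
    overlap the two weights always sum to 1.0 and no seam is visible."""
    return 1.0 - blend_weight_left(x, overlap_start, overlap_width)
```

For example, with a 480 px overlap starting at pixel 720, both projectors contribute exactly half the brightness at pixel 960, the centre of the overlap.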
2.3. Colour Mapping
Using multiple projectors to create a larger image can result in colour variations due to
slight differences in projector image processing. Each projector is therefore adjusted so that the
same colours are reproduced when multiple projectors are used simultaneously.
2.4. 3D Image
Humans have two eyes situated close together, side by side. This positioning means that
each eye views the same area from a slightly different angle. The brain merges both views into
a single image. To provide a realistic image we need to display a different image for each eye;
otherwise the image will be flat and will not look real [6].
Fig 2.4. Perception of human viewing [6]
To see a real 3D image, two slightly different images must be displayed, each visible to only
one eye. This technique is called stereoscopic 3D (3D stereo). A few techniques are available for
displaying stereo images. In general there are passive and active systems, both of which require
special glasses. Passive systems use polarisation filters or spectrum selection in the glasses.
Active systems alternately open and close the shutter in front of each eye, displaying the image
for one eye and then for the other in turns. The University decided to use both a passive Infitec
system with spectrum selection and an active solution based on the NVIDIA 3D Vision Pro
system, which guarantees high-quality 3D immersion and is dedicated to NVIDIA Quadro
graphics cards.
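The two per-eye images come from two virtual cameras separated by the viewer's interocular distance. A minimal sketch of computing the two camera positions; the 6.5 cm default is a commonly quoted average interpupillary distance, and the function and parameter names are illustrative:

```python
def eye_positions(head_pos, right_dir, ipd=0.065):
    """Left/right virtual camera positions for stereo rendering.
    head_pos and right_dir are 3-vectors (right_dir assumed unit
    length); ipd is the interpupillary distance in metres."""
    half = ipd / 2.0
    left  = tuple(p - half * r for p, r in zip(head_pos, right_dir))
    right = tuple(p + half * r for p, r in zip(head_pos, right_dir))
    return left, right
```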
Fig. 2.5. NVIDIA Quadro Sync [7]
Displaying a stereoscopic 3D image is somewhat more complicated in CAVE environments.
The displays must be synchronised so that the images driven by different projectors are seen at
the same moment on every screen. Synchronisation is performed by special hardware which
synchronises the 3D signal of each graphics card. These NVIDIA Quadro Sync cards are
connected to each other through a separate network. Quadro Sync connects to the NVIDIA
Quadro GPUs, synchronising them with the displays or projectors attached to them. This
guarantees that the stereoscopic 3D image is displayed correctly on every screen in the CAVE [7].
2.5. Eye Tracking
A tracking system detects the user's motion and reacts to it. Full-body motion can be tracked,
or individual parts such as a hand or the head. The most important system in a CAVE is eye
tracking: because the user can walk around, a different 3D perspective is needed for each point
of view. This is done in real time by the eye-tracking system, which requires a special
positioning system.
Fig. 2.6. Eye tracking glasses with positioning system [8]
The tracking system consists of cameras and IR markers which accurately locate the glasses and
their transformation in space. This information is then used by the simulation to transform the
displayed image, enhancing the virtual reality experience [8].
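The tracked head position is typically turned into an asymmetric (off-axis) viewing frustum for each fixed wall, so the perspective stays correct as the viewer moves. A simplified sketch, assuming wall-aligned coordinates with the wall plane at z = 0 and the eye at positive z; the names are illustrative:

```python
def offaxis_frustum(eye, half_w, half_h, near):
    """Asymmetric frustum bounds (left, right, bottom, top) at the
    near plane for a fixed wall screen centred at the origin, given
    the tracked eye position (x right, y up, z toward the viewer)."""
    ex, ey, ez = eye
    scale = near / ez          # project wall corners onto the near plane
    left   = (-half_w - ex) * scale
    right  = ( half_w - ex) * scale
    bottom = (-half_h - ey) * scale
    top    = ( half_h - ey) * scale
    return left, right, bottom, top
```

When the eye is centred in front of the wall the frustum is symmetric; as the viewer steps to the right, the frustum shifts left, exactly as the perspective through a real window would.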
2.6. Surround 8.1 sound
8.1 sound is the common name for an eight-channel-plus-subwoofer surround audio system,
commonly used in home theatre configurations.
Fig. 2.7. Surround system [9]
The CAVE is a cubic room in which sound comes from different directions. This is achieved by an
8-channel surround sound system in which each channel is independent. The system produces
true surround sound, which gives the user the chance to feel immersed in a scene as part of the
action [9].
2.7. VirtuSphere - locomotion platform
The main concept behind the I3DVL laboratory was to add something unique and useful to
the CAVE installation. The VirtuSphere is a platform for immersion in cyberspace: a big
semi-transparent sphere which the user enters, controlling movement in the virtual world simply
by walking [10].
Fig. 2.8. VirtuSphere in action [10]
The platform rotates freely in any direction according to the user's steps, so the user is able
to walk and run inside the sphere. Sensors collect and send data to the computer in real time,
and the user's movement is replicated within the virtual environment. This provides full-body
immersion in virtual reality.
3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS
I would like to introduce the CAVE at Gdansk University of Technology in Poland, one of only
a few 6-wall CAVE systems in the world. Inside it there is a big sphere, 3.05 metres in diameter:
a locomotion platform which the user can enter and move around in freely, thereby moving
through the virtual world. This is an impressive configuration and one of the most advanced in
the world.
Another impressive solution is the CAVE at Aachen University in Germany, one of the
biggest in Europe. Its walls are 5.25 x 3.30 m in size, and the system uses 24 projectors, with the
image on each wall combined from 4 projectors, which improves display quality.
3.1. I3DVL - Gdansk University of Technology
It took Gdansk University of Technology a few years to create the Immersive 3D Visualization
Laboratory (I3DVL) with its 6-wall CAVE. User immersion is enhanced by projection on all 6
walls. The CAVE is one of the most advanced in Europe and among the top solutions in the
world. It is unique because it can optionally be supplemented by a mobile locomotion platform
installed inside the CAVE. The locomotion platform is a big sphere named VirtuSphere, which
the user can enter to walk naturally around the virtual world [11].
The whole solution is based on high-end technologies which ensure the quality and realism of
the simulation at the highest level. A 13-metre-high building with a glass room inside was built
for the system. Each wall is rear-projected from the outside, using 2 projectors per wall to
double the image resolution. The computer system is based on 14 computers with 32 GB RAM,
NVIDIA Quadro K5000 4GB, fast SSDs and a fibre network. Each computer is connected to a
high-quality Barco DLP HD 3D 120 Hz projector with laser calibration.
Fig. 3.1. Proposed room schema of I3DVL (since slightly modified) [11]
Technical specification:
 CAVE with 3.4 x 3.4 m walls placed 3 m above the floor,
 Spherical locomotion platform, 3.05 m in diameter (VirtuSphere),
 All 6 screens made of acrylic glass; floor load capacity of at least 500 kg,
 12 DLP HD 3D 120 Hz projectors with laser image calibration (< 0.5 mm),
 14 computers, 32 GB RAM, NVIDIA Quadro K5000 4GB, SSD drives,
 InfiniBand fibre network, 40 Gb/s full duplex,
 Surround sound 8.1,
 Tracking system.
3.2. Silesian University of Technology, Poland
Silesian University of Technology probably built the first CAVE in Poland. I visited the
Silesian University with PhD Jacek Lebiedź and PhD Adam Mazikowski to see it in action; it
was my first contact with a CAVE. It is a simple system consisting of 3 walls and a floor. The
image is displayed at 1024x768 px, a mid-range resolution: when you come close, you can see
individual pixels. There are just 4 projectors, one per wall, and no mirrors are used. The screens
are made of an elastic material, and the floor is made of wood.
Fig. 3.2. Author in CAVE at Silesian University of Technology
This really simple installation uses the powerful Quazar3D application to display
simulations. When I put on the glasses and entered the CAVE, the impression was simply
amazing; I had never seen anything like it before. Quazar3D provides a high level of
visualisation in which I felt fully immersed: what I saw felt like the real world. So even in a
simple CAVE installation it was an amazing experience for me. The only minus was the missing
ceiling and back wall, which forced me to focus on the front wall and prevented me from looking
up. Even so, this amazing feeling of immersion is indescribable.
3.3. aixCAVE - Aachen University, Germany
The solution created by Aachen University in 2012 is a 5-wall CAVE which gives the user
full freedom of movement through 360 degrees. With a footprint larger than 5 x 5 m² and rear
projection, it is the biggest such solution in Europe. The system provides a high-quality image:
bright, uniform, with active stereo 3D vision, which guarantees an excellent experience for the
user [12].
Fig. 3.3. CAVE installation at Aachen University [12]
The stereoscopic 3D projection is created by 24 DLP Full HD projectors: four projectors per
wall and eight for the floor (which is divided into 2 screens). The rendering system consists of 24
computers with NVIDIA Quadro 6000 graphics cards (2 per slave node and 1 in the master; the
older generation), 2x Intel Xeon with 6 cores at 2.7 GHz, 24 GB RAM and a fast InfiniBand
QDR (4x) fibre network.
Technical specification:
 Five screens with rear projection (4 walls and floor),
 24 HD projectors with active stereo 3D, NVIDIA 3D Vision Pro, 120 Hz,
 Walls 5.25 m x 3.30 m,
 4 projectors per wall with edge-blended images,
 Floor 5.25 m x 5.25 m,
 8 projectors for the glass floor, 6.5 cm thick,
 8-camera optical tracking system,
 Power consumption of about 67 kW,
 Automatic closing door.
3.4. Possible applications of CAVEs
Possible applications of CAVEs are shown below using examples from the Virtual Reality Center
(VRC) of Johannes Kepler Universität, Austria.
The VRC at Johannes Kepler University was created in 2005. The attached DVD contains
additional movies and photos in the directories "documentation\movies\Virtual Reality Center -
Johannes Kepler Universitat" and "documentation\photos\Virtual Reality Center - Johannes
Kepler Universitat".
3.4.1. Flooding Crisis Simulation
The application simulates flooding, based on a Grid platform (the CrossGrid EU Project) [13].
It provides the ability to simulate different floods with different parameters. Using the CAVE,
experts may better estimate the ravages of a flood and better counteract them. It is based on
OpenSG [14].
Fig. 3.4. Flooding system in action [14]
3.4.2. Molecule Visualization (MCE)
MCE is a collection of research programs for visualizing the electron density distribution. The
application was created to visualize values calculated from X-ray diffraction data. Versions are
available for Windows, Linux, IRIX and the CAVE [15].
Fig. 3.5. Molecules and particle system visualization [15]
3.4.3. Neurosurgery Planning in Immersive Environments (Neurochirurgieplanung in immersiven Umgebungen)
The project was created in cooperation with the Medicine Department of the University of
Innsbruck and the Institute of Fluid Mechanics. The application teaches medicine students and
may help to plan neurosurgical procedures [16].
Fig. 3.6. Anatomical structure in medicine [16]
3.4.4. Virtual Gallery
The Virtual Gallery provides virtual travel and the study of scenes in virtual worlds.
Fig. 3.7. Travel in virtual world
3.4.5. Example student projects
Below are a few student works created for the CAVE installation.
3D Kunstwerk (3D Artwork)
The application shows interaction with 3D art. It is based on CAVElib [17].
Fig. 3.8. Interactive 3D art [17]
Multi User Maze
The application is a maze in which a few users may participate at once. It is based on OpenGL
Performer [18].
Fig. 3.9. Multi user maze [18]
CAVE Skiing
The application attempts to move skiing into the CAVE. It is based on OpenSG [19].
Fig. 3.10. Ski simulator [19]
4. PROPOSED USES OF I3DVL
The CAVE provides many possibilities for use. The VirtuSphere is movable, so it is possible to
use the CAVE standalone or together with the locomotion platform, which increases the range
of uses. It can be used in simulations, medicine, prototyping, games, entertainment, marketing,
training and other disciplines. Only imagination limits the applications which can run in the
CAVE. You can create new applications optimised for the CAVE or run existing ones with a few
modifications, and you can use the full 6-wall environment or just a few of its walls.
4.1. Simulations
The first group of possible applications are simulations. During a simulation you can train,
learn, or see how something works. Gdansk University of Technology, in cooperation with
Bohemia Interactive Simulations from the Czech Republic, will create a "Crisis Management
Centre" based on their VBS 3 engine. This kind of simulation shows how to prepare for an
incident and what to do when one happens: you do not have to merely imagine what it will look
like, you can see it and prepare for it.
4.2. Medicine
Conventional medicine needs models of organs to work with. Sometimes they are very small,
and sometimes it is difficult to see on a real model how some parts are built or how they work.
Phobias can also be treated here: when somebody fears something, they can slowly be
accustomed to it. No one wants to be treated or operated on by a poorly trained and
inexperienced person. Medical training in the CAVE provides adequate learning paths, showing
exactly how an organism works, and provides exercises. You can learn how to perform
operations and how organs are built and how they work, without the need to provide real
models. This improves medical experience.
4.3. Prototyping
Prototyping is a costly and lengthy process. Creating a physical prototype is usually an
expensive, one-off operation, and sometimes it is even impossible to create prototypes at an
intermediate stage because of cost or time limits. The CAVE is an ideal solution for this: every
product can be prototyped and verified at real scale. Additionally, a prototype can be changed in
real time and the changes seen immediately. This opens great possibilities for prototyping.
4.4. Games
Games get better every day, aiming to provide the feeling that this is not a game but reality.
The CAVE increases this immersion and provides more natural, free navigation in virtual
worlds: in the CAVE you feel that you are inside the virtual world. Every game will look
different; a game which does not immerse you at all on a PC may fully immerse you here. You
can cooperate with others in multiplayer mode, with the players using the CAVE or other
platforms. This gives rich possibilities for playing games in the CAVE.
4.5. Fun
Some applications exist just for fun. The CAVE gives new possibilities to feel immersion: even
simple animations or a movie can provide more immersion and fun than ever before. It is a place
which offers a great deal of fun that you discover from the very beginning; perhaps you will
discover a new type of fun and love it. You can travel, play with toys, relax with animals and
nature, and do many more amazing things, and thanks to the CAVE it will all look real.
4.6. Marketing
Marketing is another group of applications. You can plan what an advertisement will look like
and where it should be placed to get the best results, changing the configuration in real time.
You may offer a virtual walk through a new estate, or present an apartment in different styles.
Maybe a hotel, or the view through a window? That also is possible, and it will help a lot when
you want to sell or build PR.
4.7. Trainers
You can simulate vehicles, devices and other things, optionally supported by real models such
as a cockpit or control panel. This can teach what you should do, what you should not do, and
why. In large environments it can add randomness to training paths. In contrast to real
training, virtual training may lower costs and sometimes allows training in ways that are not
possible in reality. This is a big advantage over conventional training.
5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL
Creating applications for a CAVE is in many cases different from creating a typical 3D
application. Of course, you can use existing editors which support CAVEs; then development
looks just the same, or needs only small modifications in code. But when you want to create an
application from scratch using frameworks alone, more advanced work is required, and there are
a few things to keep in mind.
The main constraint is that CAVE applications must work in a distributed environment with
per-frame synchronisation in order to obtain a proper stereo 3D image. Some objects must look
the same on all distributed nodes and simply have to be synchronised, for example the
transformation and animation of certain objects. The first problem arises if such behaviour uses
an algorithm based on randomness: then the state must be synchronised every frame, for all
steps of the algorithm, on all nodes, which can sometimes be difficult. The second problem is
maintaining state that should be different on each node, for example the camera vector, which
differs per node because of the cubic room projection. The third problem is local node
computation: there is no need to perform all computation on the server and just send the results
to all nodes, as this only increases network usage. Remember that at least 12 computers are
connected to one another. A typical CAVE application has one server which controls and
synchronises object state between the nodes. There is not much time per frame, so if the network
bandwidth is exceeded, the application will stutter.
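One common way to keep such a randomized algorithm consistent across nodes is to derive every node's random sequence from a seed agreed per frame, instead of synchronising each drawn value over the network. A hypothetical sketch; the seed-mixing constant and the function name are illustrative:

```python
import random

def frame_random_sequence(shared_seed, frame_number, count):
    """Deterministic per-frame random numbers: every node that seeds
    its generator with the same (shared_seed, frame_number) pair draws
    an identical sequence, so randomized effects stay in sync without
    sending each value over the network."""
    rng = random.Random(shared_seed * 1_000_003 + frame_number)
    return [rng.random() for _ in range(count)]
```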
So the first thing to note is that a CAVE application is a client-server application. The
server controls the whole application, shares object state and synchronises every frame. The
clients are renderers, which render frames, perform local calculations and output images to the
projectors. The server handles input and output devices such as manipulators and the tracking
system, maintains the network connections, and sets up the main camera system based on
external sensors. In our CAVE we have 12 camera positions, 2 cameras per wall. First these
cameras are set up, then they are given transformations from the eye-tracking system so that
they react to the movement of the head. This transformation is multiplied by data coming from
the manipulator device and the VirtuSphere locomotion platform, so that the user can move
around the virtual world. Frameworks usually have built-in configuration files for displays and
control devices, which shortens configuration time on different platforms.
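The composition of transformations described above can be sketched as plain 4x4 matrix multiplication. The function names and the multiplication order are illustrative; a real framework defines its own conventions:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices given as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def wall_view_matrix(wall_cam, head_offset, locomotion_offset):
    """Compose a fixed per-wall camera matrix with the head-tracking
    transformation and the VirtuSphere/manipulator movement."""
    return mat_mul(mat_mul(wall_cam, head_offset), locomotion_offset)
```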
Keep in mind, whenever you want to use some framework function or library, the question of
whether it will work in a distributed environment: is there any way to share the object's state?
This is a requirement for developing CAVE applications. Many times you will need to write
some functionality on your own, because most libraries are not designed for use in distributed
environments.
5.1. I3DVL as a complete platform
I3DVL consists of a 6-wall CAVE and a spherical walking simulator named VirtuSphere. Each
wall, a 3.4 m square, displays the image from 2 projectors: 2 images with 480 px of edge
blending in the middle, split horizontally. Inside the CAVE the VirtuSphere locomotion platform
may be installed and removed again; it is a 3.05 m semi-transparent plastic sphere in the form
of a grid, which the user enters to walk through the virtual world. Technically, the VirtuSphere
works like a mouse. Active stereo 3D images are provided by the NVIDIA 3D Vision Pro or the
Infitec Barco system; each requires different glasses and drivers. Additionally, the glasses carry
markers for the eye-tracking system, and cameras with IR sensors detect head movements in
real time. There is also an 8-channel sound system based on eight speakers plus one subwoofer.
Applications run on 12 computers, plus 2 additional ones in the control room. The computers
are connected by fibre and copper networks, with an additional independent wired network for
3D synchronisation. Together these make up the current I3DVL configuration.
5.2. Creating Virtual Reality applications for CAVE
Virtual reality applications are most often created in 3D technology. These applications
typically consist of many elements, such as the 3D scene, the rendering and image display
system, user interaction, physics or other laws of nature, movement and animation of elements,
audio and surround sound, and special effects such as fog, rain, or post-effects like motion
blur [20].
There are also important components at a lower level, such as increasing system efficiency
through multiple threads and optimal algorithms, distribution and synchronisation of data
between cluster nodes, generating and displaying stereoscopic 3D images, GPU utilisation, and
use of the advanced capabilities of the latest graphics cards through mechanisms such as
shaders [21].
Solutions for the CAVE additionally consider features such as blending the edges of images
projected from multiple projectors onto a single plane (edge blending); the generation and
synchronisation of stereo images using multiple cluster nodes with multiple graphics cards and
projectors; detection of the observer's head position (head tracking) and, on this basis,
generation of the 3D image; and support for additional peripherals such as gloves or other 3D
manipulators, like the VirtuSphere locomotion platform used at Gdansk University of
Technology, mostly made possible by the ability to write and attach one's own device driver.
Not every system or application for the CAVE has to meet all these requirements, but
advanced ones may. Some systems are dedicated to a single operating system, while others are
multi-platform, which extends their field of application. There are libraries offering full or
partial coverage of the functionality described above, which can then be used in newly created
applications, and there are editors with a user interface which greatly help in creating advanced
applications that cover all aspects of CAVE development. Such editors offer a WYSIWYG
interface and scripting languages which allow changes to be made in a running application in
real time, without recompiling the script or the whole application, which significantly speeds up
application development.
Finally, we can write a complete framework, editor, or application for the CAVE from
scratch. A key element of the final visual effect is the way the graphics are rendered. Low-level
graphics may be generated using the CPU or the GPU. Currently, most graphics cards have very
powerful GPU computing units designed for efficient graphics generation, able to display far
more complex graphics in real time than a CPU. Virtual reality applications require real-time
interaction, and the same requirement applies to displaying the image. For this reason the
graphics are generated on the GPU rather than the CPU. Therefore virtual reality applications
for the CAVE are mostly created with APIs like OpenGL or DirectX. These two APIs form the
backbone of all existing libraries, frameworks and engines for creating applications which use
3D graphics, including CAVE solutions [22].
5.2.1. Existing libraries and frameworks
CAVE solutions are expensive investments, often costing hundreds of thousands or even
millions of dollars. Because of the high cost there are not many such platforms in the world;
most often they are found at universities and in military facilities. There is open-source
software, developed mainly by universities, and a few rather expensive commercial solutions
available on the market.
5.2.1.1. Graphics APIs
At the lower level of graphics rendering there are interfaces like OpenGL and DirectX² [23].
An API at this level is a very thin layer, specialised solely in generating and processing computer
graphics on the GPU, with a direct connection to the graphics card via the graphics driver. The
functionality of such a layer is sometimes described as a state machine, meaning that at this
level the whole scene is not available, only base elements such as the triangles from which the
scene is built up and displayed, without any knowledge of their past or future. Shaders are also
available here, providing the ability to perform operations on the GPU in streams across many
cores simultaneously.
Because of this limited knowledge of the scene, a higher-level layer is needed which takes care
of creating the scene, lighting, handling input and output devices, and interaction in the virtual
world. Knowledge of the whole scene makes it possible to optimise application performance. We
can choose solutions dedicated to specific applications, e.g. games, or general-purpose solutions,
e.g. scene-graph engines. For CAVE solutions, the general-purpose frameworks are better suited.
At the next stage we can use or create an editor with a user interface, which shortens and
simplifies application development. In an editor we can build our scene and manage it
graphically, often through a WYSIWYG interface. Editors also simplify the configuration of
displays, network, tracking and devices needed to run an application in the CAVE.
5.2.1.1.1. DirectX
Microsoft DirectX is used mainly in games. It is a stable standard whose new versions appear
rarely, which guarantees that applications will work for a long time on many computers. The
minus is that it is not an open standard, and new functions and improvements take a long time
to arrive. DirectX works only on Windows and Xbox. It is designed to work mainly in one
window, but it supports more than one. One advantage is that NVIDIA 3D Vision works on
GeForce GPUs in a heuristic way; this enables stereo 3D at the cost of slowing the application
down on low-end graphics cards. DirectX does not support hardware stereo 3D, and there are
not many scientific libraries for it. This is the main reason it is rarely used in professional 3D
applications such as those used in the CAVE [24].
² At the moment Mantle, a lower-level graphics API, is under development by AMD. Microsoft
is also working to add low-level instructions to DirectX in the new version 12, and OpenGL
wants to add such possibilities as well. These APIs are not yet available, which is why they are
not described here.
5.2.1.1.2. OpenGL
OpenGL, maintained by Khronos, is an open standard for 3D graphics. Thanks to its
openness, its many additional libraries, hardware stereo 3D support, the ability to work with
multiple displays, and its availability on different systems (Windows, Linux, Mac and UNIX), it
is the most common choice for advanced 3D applications. The drawback is that not every
graphics card supports every extension of the library (unlike DirectX), so applications which use
certain extensions may not work on all computers. These compatibility issues contributed to its
frequent replacement by DirectX in games. The situation is different on mobile devices, where
OpenGL ES is the standard: only the newest Windows Phone supports DirectX, while most
mobile devices, based on Android and Apple systems, support OpenGL ES. Almost all
frameworks described below are based on OpenGL.
5.2.1.2. Scene graph engines
Scene-graph engines provide the means to create and manage the whole scene displayed in a
3D virtual simulation. They are usually general purpose and easy to integrate into any
application. Using them, we can manage the virtual world, add and remove objects, transform
them, generate the scene in many threads in a cluster environment, and display it on many
devices such as monitors, HMDs or projectors.
A scene graph represents the logical connections between elements of the scene and is used
for performance management and rendering. Most often the scene is represented by a
hierarchical graph of child nodes with one main root node. Each node may contain other nodes.
In advanced systems a node may have several parents, creating a directed acyclic graph (DAG).
By default, each operation performed on a parent is also performed on all of its children.
Scene-graph systems are often described as retained-mode or deferred rendering. This means
they do not just submit content for rendering but keep it in a buffer, which allows additional
transformations and optimisations, e.g. multi-threading, just before rendering. These systems
are often object-oriented, which makes it possible to extend their functionality by implementing
modules and plug-ins. This provides an easy way to scale the system.
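A minimal sketch of such a hierarchical scene graph, in which a parent's operation (here, only translation, for brevity) propagates to all of its children; all class and function names are illustrative, not those of any particular engine:

```python
class Node:
    """Minimal scene-graph node: a local translation plus child nodes.
    An operation applied to a parent affects all of its children."""
    def __init__(self, name, translation=(0.0, 0.0, 0.0)):
        self.name = name
        self.translation = translation
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def world_positions(node, parent_pos=(0.0, 0.0, 0.0), out=None):
    """Depth-first traversal accumulating translations down the tree,
    yielding each node's world-space position."""
    if out is None:
        out = {}
    pos = tuple(p + t for p, t in zip(parent_pos, node.translation))
    out[node.name] = pos
    for child in node.children:
        world_positions(child, pos, out)
    return out
```

Moving the root node here moves every descendant, which is exactly the property that lets a CAVE application transform a whole group of objects with one operation.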
OpenSG and OpenSceneGraph are open-source solutions which are often used for creating
VR and CAVE systems. NVIDIA has its own scene-graph framework named SceniX, which is
very powerful and provides a real-time raytracer. SceniX is optimised for NVIDIA graphics
cards and its source code is not available. The problem with SceniX is that it is not prepared for
CAVE use out of the box, and there are currently no libraries which integrate SceniX with
CAVE solutions. So the only way to use SceniX in a CAVE is to write one's own module for
such environments.
5.2.1.2.1. OpenGL Performer
OpenGL Performer is one of the first scene-graph management systems. It was created by
SGI and was initially available only for SGI graphics stations with the IRIX operating system.
SGI's main business was hardware, not software, and OpenGL Performer's source code was not
shared; these factors meant that in the meantime other, open-source systems arose, e.g.
OpenSG, to which everyone may add their own modules. For these reasons OpenGL Performer
simply disappeared from the market and is currently outdated [25].
5.2.1.2.2. OpenSG
OpenSG is an open-source scene-graph management system for creating real-time 3D virtual
reality applications. It is available for Windows, Linux, Solaris and MacOS [26] and builds on
OpenGL. The system was developed over many years: the first version of OpenSG was published
in 2001, and work on the second version began in 2007. At sourceforge.net we can observe that
the last packaged version was published in March 2013 and has since been downloaded only
once, while changes are added to the git repository³ almost every day.
Its top advantages include cluster and multi-thread support, offered in a rather easy way at
the framework level. The ability to render graphics across several computers and graphics cards
also undoubtedly belongs to the advantages of this solution. Thanks to its open, available code
it is still being extended. OpenSG is not an application but a library that we can use in our own
applications. The framework may be used with VRJuggler and OpenTracker, which makes it
easier to prepare applications for running in CAVE installations.
The biggest improvement in OpenSG 2 versus 1.8 is the architecture, which now relies on
shaders. Additionally, programming is simplified because some thread synchronization
happens automatically in the new version. Pointer handling was improved by introducing new
pointer types, and the geometry properties were changed. Many internal implementations have
been improved, rebuilt or redesigned. The new version adds support for NVIDIA CUDA, Cg,
EXR, NURBS, VTK and COLLADA. All these changes make it worth using the newer version
of OpenSG; most importantly, OpenSG in its second version is faster than the previous one.
The documentation for version 1.8 includes the roughly 200-page OpenSG Starter Guide,
which describes all the important topics related to the library. In addition, the API of all classes
and the division of the framework into modules are documented, and some books about
OpenSG are available on the market. Unfortunately, the documentation for version 2 is
somewhat neglected and much of it is simply copied from the documentation of the first
version.
3 Address: git://git.code.sf.net/p/opensg/code.
Most of the sample applications for OpenSG 2 were simply carried over from the previous
version; no more advanced examples are provided with OpenSG 2. Therefore I attached
presentations of example programs for both OpenSG 1.8 and 2. Originally OpenSG 1.8
contains example applications prepared for Visual Studio 2005: 22 example applications are
provided for download and seven more come with the OpenSG source code. I converted each
example to Visual Studio 2012 and included them on the attached DVD. OpenSG 2, on the
other hand, provides libraries compiled for Visual Studio 2010 for both the framework and the
supporting libraries. The first full compilation on my computer took about 6 hours. The OpenSG
project is managed through CMake. The compiled library size for OpenSG 1.8 is 25 MB for lib
and 15 MB for dll. For the second version we have respectively 20 MB and 120 MB (there are
also some extensions that take an extra few megabytes of data). The dependent libraries for
1.8 take about 30 MB for lib and 5 MB for dll; in contrast, those of the second version weigh
600 MB for lib and 30 MB for dll.
5.2.1.2.3. OpenSceneGraph
OpenSceneGraph is one of the most frequently used scene management systems in the
world. It is used, among others, by Boeing in flight simulators, in NASA's Earth Simulator, in
the FlightGear flight simulator, and by others such as Sony or ESA in their projects. In spite of
its advanced features it is fairly simple to use. The first version of OpenSceneGraph was
founded in 1998. It was created by Don Burns, who previously worked for SGI on their scene
graph OpenGL Performer. In the meantime he created a scene-graph solution named SG,
which was the prototype for OSG. In 1999 the project was officially named OpenSceneGraph
[27].
The entire framework is based on several primary and optional libraries. In addition,
dynamic plug-ins are loaded on demand in the form of dll files, which makes writing
applications simpler.
The framework has a modular structure. Basic modules include scene operation
management and graph building; a math class containing implementations of vectors and
matrices; object-oriented multi-threading management; mechanisms for managing files and
streaming 2D and 3D data; components for dynamically loading graphs to handle large
scenes; and mechanisms to traverse the graph, modify its elements and issue OpenGL
instructions.
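The graph traversal mentioned above can be illustrated with a minimal, self-contained sketch. Plain C++ is used here; this is not OSG's actual NodeVisitor API, and all type names are invented for illustration:

```cpp
#include <memory>
#include <string>
#include <vector>

// Minimal scene-graph node: a name plus child nodes.
struct Node {
    std::string name;
    std::vector<std::unique_ptr<Node>> children;

    Node* addChild(std::string childName) {
        auto child = std::make_unique<Node>();
        child->name = std::move(childName);
        children.push_back(std::move(child));
        return children.back().get();
    }
};

// Depth-first traversal visiting every node - conceptually what a
// scene graph does when culling the scene or issuing draw commands.
template <typename Visitor>
void traverse(const Node& node, Visitor&& visit) {
    visit(node);
    for (const auto& child : node.children)
        traverse(*child, visit);
}
```

A renderer would pass a visitor that culls invisible subtrees or emits OpenGL calls; here the visitor is just a callable applied to each node in depth-first order.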
Additional modules allow creating animations, including skeletal animation and morphing
based on key frames and channels; special 3D effects; a multi-platform GUI system with
device support; mechanisms for manipulating objects in space (rotation, scale and translation);
a particle system for rendering explosions, fire, smoke, etc.; libraries for adding shadows; a
terrain generation system based on height maps; vector text rendering in 2D and 3D based on
FreeType fonts; integration with the windowing systems Win32, X11, MacOS and others;
volume generation; and integration with the Qt library, which allows, for example, embedding
Qt components such as a web browser in the scene.
For tests I used the latest version, OpenSceneGraph 3.3.1, a developer release published
on 29 January 2014. A new version is released every few months; the previous stable version,
3.2.0, was released about half a year earlier. On this basis it is easy to conclude that the
framework is still being developed. Supporting libraries are provided for Visual Studio versions
from VS 2005 through VS 2013, as well as for Linux, Mac OSX and Android. At the time of
writing this work, compiled binaries were not available.
Compared to OpenSG, this framework is better managed in terms of version releases to
users: updates are frequent, the website is much better designed, and all code is hosted on
the project's own servers. Preparing and setting up the library using CMake, in contrast to
OpenSG, went smoothly, and the compilation itself did not encounter additional problems.
Altogether this looks like a more solid release than OpenSG.
All the provided documentation is based on several books. On the OSG website we will not
find enough information to teach us how to use the library; for this reason, we can say that we
are forced to buy the books. We can choose from a few titles, e.g. the “Beginner's Guide” and
later the “Cookbook”. They are prepared for learning the framework from the beginning and
are therefore written in a clear and organized manner; they fully compensate for the lack of
documentation on the website. The books also describe how to configure and build the library
and how to prepare projects in CMake for Visual Studio. You should start by reading them,
then analyze the accompanying examples, and then start creating new solutions. For the
purposes of this work I created all of the sample applications that are described in the books.
A quite large number of sample applications is provided together with the library; they show a
wide range of available functionality and are much more advanced than the samples provided
with OpenSG.
By pressing the ‘s’ key we can turn statistics on and off and switch between various
statistics modes. We get information about the number of frames per second, the threads busy
rendering the scene, and the complexity of the scene, including the number of its elements,
nodes, vertices, or even object instances.
Before compilation we should add at least the following environment variables:
• OSG_ROOT - pointing to the root directory of the OSG installation,
• OSG_NOTIFY_LEVEL=NOTICE - setting the level of debug messages for OSG,
• OSG_FILE_PATH - pointing to the attached files containing resources for the sample
applications.
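As a sketch, the same variables can be set for a single shell session as follows; the paths are assumptions for a hypothetical checkout location and must be adjusted to your own installation:

```shell
# Hypothetical install locations - adjust to where OSG actually lives.
export OSG_ROOT="$HOME/OpenSceneGraph-3.3.1"
export OSG_NOTIFY_LEVEL=NOTICE
export OSG_FILE_PATH="$OSG_ROOT/OpenSceneGraph-Data"
```

On Windows the equivalent would be set through the System Properties dialog or `setx`, so that Visual Studio picks the variables up.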
5.2.1.2.4. NVIDIA SceniX - NVSG
The scene management in NVIDIA's implementation is largely dedicated to their own
solutions: it squeezes the last bit of power out of NVIDIA graphics cards and makes use of the
advanced capabilities of the NVIDIA Quadro cards [28]. A strong element of the framework is
its cooperation with a range of advanced NVIDIA libraries for scene rendering, a ray-tracing
module, batch processing and shader-level scripting of the graphics card. The strength of this
framework is indicated by its use in systems such as Autodesk Showcase [29], which allows
photo-realistic visualization of and interaction with scenes prepared in AutoCAD or Autodesk
Inventor, and Realtime Technology AG (RTT) DeltaGen, which is used for visualization of the
highest quality, mainly of cars.
Unlike competing solutions, this framework has an enhanced shader layer, characterized
by remarkable speed of operation and quality of the generated image. Shaders are built on the
basis of the CgFX language [30]. It also uses an interactive ray tracer based on OptiX or RTFx
(Ray Tracing Effect interchange Format).
The framework is available only for Windows and Linux, in 32- and 64-bit versions, without
source code. Pre-compiled libraries are available for SceniX 7.3 from August 2012 for use with
Visual Studio 2008 and 2010. Based on the update history we can see that the framework is
updated about once every 1.5 years (but the last available version comes from two years
ago). We should prepare about 2 GB of free disk space. To use the library in VS 2010 you
need to install the additional packages "Visual Studio 2010 redistributables"4 and "Service
Pack 1"5 (otherwise you will not be able to properly set up the CMake project for VS 2010).
There are some known issues under Linux, which manifest themselves in the fact that some
operations may result in errors. On 64-bit Windows, textures in TIFF format cannot be loaded
(which should not be a problem, because we can load textures in other formats). To compile
the examples with CMake, the Qt and wxWidgets frameworks must be prepared.
To compile wxWidgets 2.8.12 locally it is necessary to comment out the pbt.h include in the
windows.cpp file:
#if !defined __WXWINCE__ && !defined NEED_PBT_H
// #include <pbt.h>
#endif
and to add the value "_ALLOW_KEYWORD_MACROS" to the preprocessor definitions.
Fig. 5.1. NVIDIA SceniX viewer
4 You can download it from: http://www.microsoft.com/download/en/details.aspx?id=5555.
5 You can download it from: http://www.microsoft.com/en-us/download/confirmation.aspx?id=23691.
The viewer is a complete application based on the Qt framework, with source code
available. It allows you to view scenes, 3D graphic objects and components.
5.2.1.2.5. Summary
Scene graph engines help develop CAVE applications. Over the years there have been
significant changes in the architecture of graphics cards, which forced serious changes in such
frameworks. For this reason the most important ones are the modern frameworks that can
exploit the full power of existing graphics cards. At the moment the most valuable are
OpenSG and OpenSceneGraph, which are open source, and NVIDIA SceniX. Below is a
comparison table in which I also included the ViSTA framework, because it contains a scene
graph engine and is described further in this work.
Table 5.1. Scene graphs comparison
| Feature | OpenSG 1.8 | OpenSG 2 | OpenSceneGraph | ViSTA | SceniX |
|---|---|---|---|---|---|
| Scenegraph | x | x | x | x6 | x |
| Realtime graphics | x | x | x | x | x |
| Open Source | x | x | x | x | - |
| Licence | LGPL | LGPL | OSGPL | LGPL | Own7 |
| Based on | OpenGL | OpenGL | OpenGL/OpenGL ES | OpenSG | OpenGL/DirectX |
| Supported platforms | Windows, Linux, MacOS X, Solaris | Windows, Linux, MacOS X, Solaris | Windows, Linux, Mac OSX, FreeBSD, Solaris, Android | Windows, Linux, MacOS X | Windows, Linux |
| Extensibility | x | x | x | x | x |
| Multithreading | x | x | x | x | x |
| Clustering | x | x | x | x | x |
| Creating simple geometry | x | x | x | x | x |
| Mouse and keyboard events | x | x | x | x | x |
| Sample applications and tutorials | x | x | x | x | x |
| Documentation and books | x | x | x | - | x |
| API documentation | x | x | x | x | x |
| Direct OpenGL drawing - glBegin() | x | x | x | x | x |
| Materials | x | x | x | x | x |
| Load scene files8 | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | VRML97, OBJ, dxf, raw, stl, 3ds, dae, OFF, BIN, COLLADA | .3dc, .3ds, .obj, .ac3d, .bsp, .dae, .sw, .dxf, .fbx, .geo, Inventor, .ive, .logo, .lwo, .lws, .md2, .ogr, OpenFlight, .osg, .pfb, .shp, .stl, .dds, VRML, .x | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | COLLADA, COLLADA FX, VRML2.0/WRL, OpenFlight, OBJ, 3DS, PLY |
| Picking objects | x | x | x | x | x |
| Lights | x | x | x | x | x |
| Cameras | x | x | x | x | x |
| GLSL Shader | x9 | x | x | -10 | - |
| Stereo 3D | x | x | x | x | x |
| OpenGL extensions | x | x | x | x | x |
| Scene statistics | x | x | x | x | x |
| Shadows | x | x | x | x | x |
| NURBS | - | x11 | x | - | x |
| OpenEXR12 | - | x | x | - | x |
| Cg | - | x | x | - | x |
| CgFX | - | x | ? | - | x |
| NVIDIA CUDA | - | x | x | - | x |
| LOD | x | x | x | x | x |
| Viewports | x | x | x | x | x |
| Cube map | x | x | x | x | x |
| Graph traverse | x | x | x | x | x |
| VTK | - | x | x | x | - |
| COLLADA | - | x | x | - | x |
| CMake | x | x | x | x | - |
| VS libraries | to compile | to compile | to compile | to compile | 2008 or 2010 |
| GUI Toolkit | GLUT, Qt, wxWidget, Win32 | GLUT, Qt, wxWidget, Win32 | GLUT, Qt, wxWidget, Win32 | GLUT | GLUT, Qt, wxWidget, Win32 |
| NVIDIA OptiX | - | - | - | - | x |
| RTFx | - | - | - | - | x |
| RT raytracer | - | - | - | - | x |
| Ambient Occlusion | - | - | - | - | x |
| Mobile | - | - | Android/OpenGL ES | - | - |
| Lib size | 25 MB | 20 MB | 8 MB | 4,5 MB | 12 MB |
| Dll size | 15 MB | 120 MB | 44 MB + 780 MB | 26 MB + 5,5 MB | 16 MB |
| Support lib size | 30 MB | 600 MB | 1,6 GB | - | |
| Support dll size | 5 MB | 30 MB | 64 MB | 32 MB | |

6 ViSTA is based on OpenSG 1.8 (there is ongoing work on an OpenSceneGraph implementation).
7 You can read the license during installation.
8 Each framework supports further file formats through custom plug-ins.
9 GLSL is available through the ShaderChunk object, which is experimental.
10 A shader can be used as a material (an extension of OpenSG) or for particle system generation.
11 Through the OpenNURBS library.
12 OpenEXR is a high-dynamic-range (HDR) image file format.
As we can see in the comparison table, the functionality of the selected scene graph
engines is very similar; the base functionality is almost the same for all of them. The main
difference is between NVIDIA SceniX and the others. SceniX comes without source code, but
it is very powerful: it is specialized for NVIDIA graphics cards, it is the only one that can work
with both DirectX and OpenGL, and it has a real-time ray-tracing engine, making it the most
advanced scene graph engine. OpenSceneGraph (OSG) is the only one that supports mobile
platforms; it contains the largest number of additional modules and natively supports shaders,
which makes it a good choice as well. Then we have OpenSG, which looks like a somewhat
forgotten framework and at the moment is not as functional as OpenSceneGraph. And lastly
there is the ViSTA framework, which is based on old foundations (OpenSG 1.8), which makes
it somewhat dated at the moment.
5.2.1.3. Frameworks for CAVE solutions
In this chapter I will describe frameworks that extend the capabilities of scene management
engines. This extension concerns above all the possibility of rendering the image on multiple
computers and multiple GPUs in several instances. In addition, these systems synchronize the
user's camera with the head tracking system, using mechanisms that detect the position of the
head in order to render the image properly. They also manage the various manipulators, so
that each rendering server receives consistent information about their state.
Using the presented solutions, we can write an application with distributed rendering
across rendering units (separate computers in a cluster), where the output image, the input
devices and the events are all synchronized in the resulting application.
These frameworks provide advanced mechanisms for network connections and the
serialization of objects. Sometimes a given object needs to be readable by all rendering units
(e.g. one containing initialization data), and sometimes each rendering unit needs its own
state that is not shared with the other machines (e.g. for storing information about the
configuration of its camera).
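The shared versus per-unit object state described above can be illustrated with a minimal serialization sketch. This is plain C++ with invented names; it is not the API of any of the frameworks discussed here:

```cpp
#include <cstring>
#include <vector>

// Hypothetical shared distributed object: initialization data that every
// rendering unit must receive in an identical byte form. Per-unit state
// (e.g. the local camera setup) would simply never be serialized.
struct SceneInitData {
    float eyeSeparation;   // stereo eye distance shared by all renderers
    int   wallCount;       // number of CAVE walls

    // Flatten into a byte buffer, as a cluster framework would do
    // before broadcasting the object to its render clients.
    std::vector<unsigned char> serialize() const {
        std::vector<unsigned char> buf(sizeof(*this));
        std::memcpy(buf.data(), this, sizeof(*this));
        return buf;
    }

    static SceneInitData deserialize(const std::vector<unsigned char>& buf) {
        SceneInitData d{};
        std::memcpy(&d, buf.data(), sizeof(d));
        return d;
    }
};

// Roundtrip: what the master serializes, each client reconstructs.
inline SceneInitData roundtrip(const SceneInitData& in) {
    return SceneInitData::deserialize(in.serialize());
}
```

Real frameworks add versioning on top of this idea, so that a client can fetch only the changes between two versions of an object instead of the full state.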
5.2.1.3.1. ViSTA
ViSTA (VIRTUAL REALITY for SCIENTIFIC TECHNICAL APPLICATIONS) is a framework
created by the Virtual Reality Group at RWTH Aachen University in Germany, which has
deployed several applications in its CAVE. The framework is available as an open-source
project and has been developed for about 15 years. During this time several generations of
graphics card architectures have passed, and the framework itself changed strongly. Initially it
was available on supercomputers running SGI IRIX, HP-UX and Sun Solaris, and now it is
available for Windows, Linux and Mac. At the moment, at least when it comes to the latest
version of the framework, no one outside of RWTH Aachen University appears to have used
this solution [31].
The biggest advantages of the framework are its integration with various existing libraries,
which broadens its area of application, and the fact that it fully supports CAVE systems:
displaying images in stereoscopic 3D, combining images from multiple projectors (edge
blending), tracking and adjusting to the position of the user's head (head tracking), and
supporting computation on clusters and multiple input-output devices. All these features give
us the basis for creating dedicated CAVE applications.
Main features of the ViSTA framework:
• scene management,
• support for input and output devices (e.g. manipulators, tracking cameras and haptic
devices),
• is based on OpenSG 1.8 (in the future OpenSceneGraph will be supported as well),
• support for cluster computing (VistaDataFlow),
• support for multiple screens (including video monitors and stereo 3D),
• tools for managing threads, links, files, the network, etc.,
• the ability to write and add own drivers for input and output devices,
• integration with many available open-source libraries,
• contains its own mechanisms for handling the keyboard (mainly via events),
• allows creating basic 3D geometric solids,
• imports 3D objects and scenes created in other applications,
• allows coloring and texturing objects,
• supports lighting and its management,
• displays text both in 3D space and on the GUI layer,
• allows adding interactivity to the created objects (e.g. you can select an object and
move it to another location),
• creates and manages the camera (sets its parameters, location, etc.),
• adds overlay layers containing other scenes, both 2D and 3D, rendered in real time,
• generates events for phenomena in the application (for example, an event is generated
after obtaining the position of a given object),
• communication with other applications in C/C++,
• debugging tools that display information both on the console and on the scene.
Integration with the following systems is provided:
• OpenSG - allows managing and displaying a 3D scene in real time,
• OpenSG Ext - OpenSG extensions (e.g. particle system or fog),
• VTK (The Visualization Toolkit) - adds many functions for working with graphics,
• OpenGL - enables native OpenGL command execution within a node,
• Python - allows writing dynamic scripts.
Initially, the biggest problem in starting with the framework is the total lack of any
documentation. There are only comments in the source code, API documentation generated
from the classes, and several very simple sample applications that show the basic capabilities
of the framework. Knowledge of OpenSG 1.8 helps a lot, because much of the framework's
functionality expands on and uses its mechanisms.
Configuration is based on text files, and changes to them can be detected in a running
application. This configuration allows you to easily move the application between different
environments, e.g. between a developer station consisting of two monitors and CAVE-like
systems. For that you only need to specify how many walls the projection system is composed
of. Here you can also configure the network addresses for communication between computers
in the cluster and the input-output devices. This separates the application itself from its
configuration, depending on where it has to be launched. A configuration can consist of
multiple files, so it is possible to prepare so-called configuration modules and plug them in to
streamline the configuration.
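As a rough illustration of the idea, such a text-based display configuration could look like the sketch below. The section and key names are hypothetical, invented for illustration, and do not reproduce ViSTA's actual configuration schema:

```ini
; Hypothetical display/cluster configuration sketch - not ViSTA's real keys.
[SYSTEM]
DISPLAYSYSTEMS = CAVE_FRONT, CAVE_LEFT

[CAVE_FRONT]
WALL       = front
RESOLUTION = 1920x1920
STEREO     = true

[CLUSTER]
MASTER = 192.168.0.1
SLAVES = 192.168.0.2, 192.168.0.3
```

Swapping such a file for one describing a two-monitor developer station is what makes the same application binary runnable outside the CAVE.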
A key element of the framework is the scene management system, which is based on
OpenSG 1.8. The OpenSG system is described in the chapter devoted to it; it is the main
mechanism responsible for displaying the scene in real time. OpenSG sends data directly to
the GPU via OpenGL, which then renders the held data as an image. OpenSG 1.8 was
completed in 2007, which greatly limits the internals of the ViSTA framework. Hope lies in the
ongoing work on replacing the old OpenSG 1.8 with the competing OpenSceneGraph. At this
point I just want to point out that currently ViSTA is not able to fully exploit the potential of the
latest computers.
The official website states that ViSTA has additional libraries (VistaAddonLibs) that add
extra functionality, offering among others physics and collision detection, soft body simulation
and sound support. But those libraries are not available for download. Without them, the
functionality can be implemented by oneself, or other existing libraries can be used through
their independent integration.
For the purposes of this document I described how to build the ViSTA framework, the
supporting libraries and the sample applications. I attached the workspace containing all the
projects, the source codes and the compiled versions of the applications. I also created a mini-
framework, "FirstTry", using ViSTA, to make it easier to create new applications in this
technology (located on the accompanying CD in the catalog
"workspace\myvista\CAVE_PG_VS2012\FirstTry"). The framework consists of several
modules: communication, for interfacing with external applications; keyboard controller
support; rotation and transformation of objects, allowing for interaction with them; a scene
manager, which allows adding further objects to the scene; and a text module, which allows
adding text both in 2D and 3D. The main file of the framework is Application.cpp, which sets
up and initializes the initial state of the application. In this way I prepared a solution that
divides ViSTA into functional modules, so you can quickly begin to create a new scene with it.
5.2.1.3.2. VR Juggler
VR Juggler is one of the first libraries specialized for implementing CAVE applications. It is
a scalable system which supports complex multi-screen systems running on clusters. The
flexibility of VR Juggler allows applications to execute in many VR system configurations,
including desktop VR, HMDs, and CAVE-like and powerwall-like devices. VR Juggler supports
IRIX, Linux, Windows, FreeBSD, Solaris, and Mac OS X. The library contains Gadgeteer,
which is a plug-in system supporting local or remote devices. The configuration is based on
.xml files. It can work standalone as a scene graph based on OpenGL, or it can cooperate with
existing scene graph engines like OpenGL Performer, OpenSG and OpenSceneGraph. This
sounds good, but unfortunately it does not work with the newest versions of these engines and
cannot be compiled in 64-bit mode. The solution is simple to implement and configure for work
in a CAVE, but it is outdated [32].
5.2.1.3.3. Equalizer
Equalizer is a framework that allows the parallelization of OpenGL-based applications [33].
Thanks to it we can use multiple graphics cards, processors and even computers to improve
the efficiency and quality of running applications. Applications based on this framework can be
run without modification both on a single computer and on virtual reality systems consisting of
a number of computers. It is a proven solution, because many open-source applications and
commercial products are based on it, including well-known applications such as RTT
DeltaGen or the 3D player Bino. It is available for Windows, Linux and Mac. The solution is
based on GLUT. At the moment the creators are working on adding an administrative library,
which will allow adding and configuring new windows and changing their templates from a
separate application.
There is also the Sequel project, which simplifies the process of creating applications with
Equalizer by introducing a module mechanism. Sequel can reduce the amount of written code
by a ratio of 1 to 10. It is recommended to start with Sequel and then move on to Equalizer for
more advanced projects.
The main capabilities of the framework include distributed rendering based on clusters,
stereo 3D support, head position tracking, support for virtual HMD helmets, synchronized
display on multiple screens, software edge blending, automatic configuration as well as
configuration based on ASCII files, compression of the image sent over the network, a load-
balancing mechanism for rendering units, and, importantly for the I3DVL project, support for
InfiniBand networks and G-Sync hardware image synchronization (using the "NV group" and
"NV barrier" barriers).
Supported modes of parallel image rendering:
• 2D (SFR - Sort-First Compounds) - each rendering unit renders a portion of the target
image, displayed in a single window. This mode is used, for example, when 4
computers each render a quarter of the screen and the parts are then joined side by
side, which in turn gives us the full-screen display,
• DB (SLC - Sort-Last Compounds) - each unit renders part of the scene in parallel, and
the parts are then assembled into the whole image. In this mode there may be
problems with anti-aliasing, transparency and shadows,
• Stereo Compounds - the image for each eye is assigned to an independent rendering
unit. The resulting images are copied into the stereo buffer. This mode supports
virtually every available stereo 3D mode, among others active stereo (quad-buffer),
anaglyph and multi-view displays,
• DPlex Compounds (AFR or Time-Multiplex) - different frames are assigned to different
rendering units, and the image stream is reproduced from them. This method
increases the number of frames displayed per second,
• Tile Compounds - similar to the previously described 2D mode, with the difference that
each rendering unit renders a few tiles from which the complete picture is created.
Queued rendering of the tiles provides load balancing,
• Pixel Compounds - the image is split so that each unit renders a different part of the
pixels,
• Subpixel Compounds - assigns separate samples to the rendering units in order to
create effects such as anti-aliasing or depth of field and to speed up rendering of the
desired effect.
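The sort-first (2D) idea above can be sketched with simple viewport arithmetic. This is plain C++ for illustration only, not Equalizer's API; it splits a target image into one horizontal band per rendering unit:

```cpp
#include <vector>

// A screen-space region assigned to one rendering unit.
struct Viewport {
    int x, y, width, height;
};

// Sort-first decomposition: divide the target image into horizontal
// bands, one per unit, distributing leftover rows as evenly as possible.
std::vector<Viewport> splitSortFirst(int width, int height, int units) {
    std::vector<Viewport> parts;
    int y = 0;
    for (int i = 0; i < units; ++i) {
        int rows = height / units + (i < height % units ? 1 : 0);
        parts.push_back({0, y, width, rows});
        y += rows;  // bands tile the image with no gaps or overlap
    }
    return parts;
}
```

In a real compound the regions would additionally be resized every frame by a load balancer, so a unit rendering a heavier part of the scene receives a smaller band.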
For the 2D and DB compound modes we can take advantage of the "Load Equalizer",
which, based on the actual resource utilization of the rendering units, adjusts the distribution of
the image data among them to enhance the rendering performance of the whole image. In
contrast, the "View Equalizer" uses "Cross-Segment Load-Balancing": it adjusts the division of
rendering work at the GPU level to achieve high performance. This option is recommended for
CAVE-like systems, in order to pass free GPU resources to where they are missing. An
interesting option is the "DFR Equalizer" (Dynamic Frame Resolution), which, in the case of an
overload and too few frames per second, renders the image at a lower resolution and then
rescales it to the actual display resolution, improving performance at the cost of picture
quality. In the event of inactivity, or when the computing resources suffice, the image is
generated at full resolution. The "Monitor Equalizer" allows us to scale and display the picture
of a multi-screen display system on the monitor of our own computer.
The architecture of the solution is based on a client-server model. The Collage project is
used here to build distributed applications. Each client is controlled by the server. Both the
client and the server can be the same application (file). The server can be responsible only for
the application logic (the so-called "master"), or it can participate in the rendering of the 3D
image as the clients do.
For several years no binary libraries have been supplied for Windows, so you should
compile the framework from source code. To compile the Equalizer source code, you must
either use the Buildyard package, which contains the entire framework with all dependencies,
or do it manually, one by one, starting with the compilation of the projects vmmlib (a set of
mathematical operations on vectors and matrices), Lunchbox (an abstraction of operating-
system-level functionality such as the processor clock) and Collage (a library for managing
connections at the network level), after which you can compile Equalizer itself.
Additional modules include Hardware Service Discovery (hwsd), which allows automatic
detection and configuration of both the network and the GPUs of the rendering machines.
A good feature while working with the framework is that you can run multiple clients
(individual renderers) on a single computer for developer tests. However, for performance
reasons it is recommended that each client runs on a separate computer. Applications can be
started centrally from the server using the ssh protocol (then the application must be in exactly
the same folder on each client and on the server), or started by hand on the clients and then
invoked from the server.
The library is cleanly divided into logical modules: "eq::Node" represents a physical
computer, "eq::Pipe" represents a GPU, and "eq::Window" is the window in which the image
from a single computer is displayed; a window can be divided into separate parts, the
channels ("eq::Channel"), which can split and send one image to multiple projectors. Using the
"eq::Canvas" class, the displayed image is configured for any surface, including a CAVE.
When displaying on flat surfaces such as a powerwall, one frustum must be configured for all
screens, while for systems in which the screens do not form a single frustum it should be set
up for each screen separately. A properly configured frustum should be the same as the one
used in the application when calculating the transformation matrix for the head tracking
system. Each canvas is composed of segments, each representing the image projected on
one screen; a segment should be assigned to each screen or projector. Segments can
overlap, for use with projectors with edge blending, and may have gaps, for use in so-called
display walls. To configure the frustum we use the segments' viewports.
For a passive stereo 3D installation we must configure a segment ("eq::Segment") for each
eye; the two channels (left and right) should be assigned to the same viewport. For active
stereo 3D display a framelock mechanism is used, based on software or hardware barriers.
Only hardware barriers (e.g. those in G-Sync) guarantee proper and correct synchronization of
the image at the right time.
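As a rough sketch of how such a canvas is laid out in the ASCII configuration (illustrative only; the exact grammar should be checked against the Equalizer Programming and User Guide, and the channel names here are invented):

```text
canvas
{
    wall { }                          # geometry of the projection surface
    segment { channel "left-eye"  }   # channel driving the left projector
    segment { channel "right-eye" }   # channel driving the right projector
}
```

Both segments cover the same viewport, so each projector receives the full image rendered for its eye.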
Fig. 5.2. Application osgScaleViewer integrates Equalizer with OpenSceneGraph
From my point of view, a very important element is the integration with OpenSceneGraph.
Until 2010 the sample application osgScaleViewer, which shows the integration of Equalizer
with OpenSceneGraph, was prepared and provided. This project renders an OSG node by
loading a 3D object, such as the cow shown in Fig. 5.2. The example is an extension of the
eqPly demo application, with advanced management of distributed graphics rendering on
multiple clusters. In addition, part of the OSG functionality has been replaced by the Equalizer
side, so you need to learn proper application development based on both OSG and Equalizer.
On the basis of the source code it can be seen that the framework is still being developed.
The latest version of the code at the moment of writing this work was released in late 2013,
and the latest version of the accompanying documentation, "Equalizer - Programming and
User Guide", is dated July 2013.
5.2.1.3.4. Summary
As you can see, there are not many frameworks dedicated to CAVE development
available. In the past there was VR Juggler, which is well suited for it, but unfortunately it has
not been developed for a long time and does not support modern scene graph engines. We
see a similar situation with the ViSTA framework, which was well developed in the past and is
currently outdated.
That leaves Equalizer, which is very advanced and difficult to use. It works with
OpenSceneGraph but does not work with OpenSG. The integration with OSG was done with a
group of students, so the module is overgrown and very difficult to use, giving a chance to do
something wrong. But we do not have many more possibilities to choose from. We can also
use just the cluster modules of OpenSG or OpenSceneGraph and implement our own CAVE
support functionality.
Table 5.2. Comparison of CAVE framework integrations

Description                  | ViSTA  | VRJuggler | Equalizer
Distributed computing        | x      | x         | x
Static distributed object    | x      | x         | x
Versioned distributed object | x      | x         | x
Head tracking                | x      | x         | x
CAVE support                 | x      | x         | x
File-based configuration     | x      | x         | x
CAVE simulation mode         | -      | -         | x
OpenGL Performer support     | -      | x         | -
OpenSG support               | v. 1.8 | v. 1.8    | -
OpenSceneGraph support       | - 13   | v. 2      | v. 2 and 3
Advanced scalability         | -      | -         | x
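The "versioned distributed object" feature deserves a brief illustration. The sketch below is my own hypothetical example of the concept, not the API of any of the three frameworks: a master node commits successive versions of shared scene data, and each render node synchronizes to exactly the version that matches the frame it is drawing.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical illustration of a versioned distributed object: each commit
// produces a new monotonically increasing version, and older versions stay
// retrievable so render nodes lagging by a frame still get consistent data.
class VersionedObject {
public:
    // Master side: store a new state snapshot, return its version (from 1).
    unsigned commit(const std::string& state) {
        history_.push_back(state);
        return static_cast<unsigned>(history_.size());
    }
    // Render-node side: fetch the state belonging to a given version.
    const std::string& sync(unsigned version) const {
        return history_.at(version - 1);
    }
private:
    std::vector<std::string> history_;
};
```

In a real framework the snapshots would of course be serialized over the network rather than kept in a local vector; the point is the version-per-frame contract between master and render nodes.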
Table 5.2 shows that the basic functionality needed to build CAVE solutions is provided by
each library. The main differences lie in the support for modern scene graph engines and in
the advanced functionality. Based on the supported scene graph engines there is one winner:
Equalizer. Equalizer also offers a CAVE simulation mode which displays five windows on the
desktop, giving an impression of what the resulting application will look like. In addition,
Equalizer provides advanced scalability, which makes it possible to scale an application
across different nodes by splitting the rendered image.
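Equalizer's file-based configuration describes such setups declaratively. The fragment below is an abbreviated, illustrative sketch of a single-wall configuration in the .eqc style (channel name and wall coordinates are example values, not taken from any I3DVL configuration):

```
server
{
    config
    {
        appNode
        {
            pipe
            {
                window
                {
                    viewport [ 100 100 800 600 ]
                    channel { name "front" }
                }
            }
        }
        compound
        {
            channel "front"
            wall            # physical wall position in metres
            {
                bottom_left  [ -1.6 -1.0 -1.0 ]
                bottom_right [  1.6 -1.0 -1.0 ]
                top_left     [ -1.6  1.0 -1.0 ]
            }
        }
    }
}
```

A full CAVE configuration repeats such channel/compound pairs for every wall, and the simulation mode simply maps them to desktop windows instead of projectors.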
5.2.1.4. Support libraries
Support libraries cover physics, animation, scientific computing, graphics and other
areas. Here I want to focus on only two of them, which are used by some of the scene graph
engines. Cg is used in OpenSceneGraph and NVIDIA SceniX; NVIDIA OptiX is used only in
NVIDIA SceniX. Cg is well known and currently marked as deprecated, so I will write only a
few sentences about it. OptiX, on the other hand, looks great and is still being developed. It is
a less widely known library that provides photo-realistic results almost in real time, which is
the reason it is mentioned here.
5.2.1.4.1. Cg Toolkit
The Cg Toolkit is an obsolete framework for writing applications that run on the GPU
under OpenGL and DirectX on Windows, Mac OS X and Linux. It is no longer developed or
supported by NVIDIA. The last version, of a line developed since 2005, was released in April
2012. In its place NVIDIA recommends using GLSL or HLSL shaders directly, or the more
recently developed nvFX [34], Lua [35] or glfx [36].
5.2.1.4.2. NVIDIA OptiX
13) Future integration of ViSTA with OpenSceneGraph is planned.
6-wall CAVE Immersive 3D Visualization Laboratory Demonstrator
More Related Content

Similar to 6-wall CAVE Immersive 3D Visualization Laboratory Demonstrator

Google app engine
Google app engineGoogle app engine
Google app engineSuraj Mehta
 
Minor Project Synopsis on Data Structure Visualizer
Minor Project Synopsis on Data Structure VisualizerMinor Project Synopsis on Data Structure Visualizer
Minor Project Synopsis on Data Structure VisualizerRonitShrivastava057
 
Seminar Report - Managing the Cloud with Open Source Tools
Seminar Report - Managing the Cloud with Open Source ToolsSeminar Report - Managing the Cloud with Open Source Tools
Seminar Report - Managing the Cloud with Open Source ToolsNakul Ezhuthupally
 
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)Tomáš Milata
 
Deploying deep learning models with Docker and Kubernetes
Deploying deep learning models with Docker and KubernetesDeploying deep learning models with Docker and Kubernetes
Deploying deep learning models with Docker and KubernetesPetteriTeikariPhD
 
Camara for uav jan2012 eas 021
Camara for uav jan2012 eas 021Camara for uav jan2012 eas 021
Camara for uav jan2012 eas 021M.L. Kamalasana
 
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...Between Simulator and Prototype: Crossover Architecture for Testing and Demon...
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...Jaroslaw Domaszewicz
 
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...Implementing Saas as Cloud controllers using Mobile Agent based technology wi...
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...Sunil Rajput
 
Full Body Immersion in AR
Full Body Immersion in ARFull Body Immersion in AR
Full Body Immersion in ARAli Said
 
Cisco Packet Tracer Overview 20 Jul09
Cisco Packet Tracer Overview 20 Jul09Cisco Packet Tracer Overview 20 Jul09
Cisco Packet Tracer Overview 20 Jul09Tumennast Erdenebold
 
Platform Independent App design
Platform Independent App designPlatform Independent App design
Platform Independent App designArnab Pradhan
 

Similar to 6-wall CAVE Immersive 3D Visualization Laboratory Demonstrator (20)

Google app engine
Google app engineGoogle app engine
Google app engine
 
Minor Project Synopsis on Data Structure Visualizer
Minor Project Synopsis on Data Structure VisualizerMinor Project Synopsis on Data Structure Visualizer
Minor Project Synopsis on Data Structure Visualizer
 
Face Detection And Tracking
Face Detection And TrackingFace Detection And Tracking
Face Detection And Tracking
 
projekt_praktikum
projekt_praktikumprojekt_praktikum
projekt_praktikum
 
Seminar Report - Managing the Cloud with Open Source Tools
Seminar Report - Managing the Cloud with Open Source ToolsSeminar Report - Managing the Cloud with Open Source Tools
Seminar Report - Managing the Cloud with Open Source Tools
 
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)
F3-DP-2015-Milata-Tomas-java-ee-batch-editor (1)
 
bhargav_flowing-fountain
bhargav_flowing-fountainbhargav_flowing-fountain
bhargav_flowing-fountain
 
Deploying deep learning models with Docker and Kubernetes
Deploying deep learning models with Docker and KubernetesDeploying deep learning models with Docker and Kubernetes
Deploying deep learning models with Docker and Kubernetes
 
NEW BACKEND.pdf
NEW BACKEND.pdfNEW BACKEND.pdf
NEW BACKEND.pdf
 
Camara for uav jan2012 eas 021
Camara for uav jan2012 eas 021Camara for uav jan2012 eas 021
Camara for uav jan2012 eas 021
 
thesis
thesisthesis
thesis
 
Face detection
Face detectionFace detection
Face detection
 
R.E.M.O.T.E. SACNAS Poster
R.E.M.O.T.E. SACNAS PosterR.E.M.O.T.E. SACNAS Poster
R.E.M.O.T.E. SACNAS Poster
 
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...Between Simulator and Prototype: Crossover Architecture for Testing and Demon...
Between Simulator and Prototype: Crossover Architecture for Testing and Demon...
 
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...Implementing Saas as Cloud controllers using Mobile Agent based technology wi...
Implementing Saas as Cloud controllers using Mobile Agent based technology wi...
 
Praveen Kumar S S.docx(1)
Praveen Kumar S S.docx(1)Praveen Kumar S S.docx(1)
Praveen Kumar S S.docx(1)
 
Full Body Immersion in AR
Full Body Immersion in ARFull Body Immersion in AR
Full Body Immersion in AR
 
Cisco Packet Tracer Overview 20 Jul09
Cisco Packet Tracer Overview 20 Jul09Cisco Packet Tracer Overview 20 Jul09
Cisco Packet Tracer Overview 20 Jul09
 
Platform Independent App design
Platform Independent App designPlatform Independent App design
Platform Independent App design
 
Crime file
Crime fileCrime file
Crime file
 

6-wall CAVE Immersive 3D Visualization Laboratory Demonstrator

  • 1. Student’s name and surname: Jan Klimczak ID: 112006 Second cycle studies Mode of study: Part-time Field of study: Informatics Specialization: Systems and Mobile Technologies MASTER'S THESIS Title of thesis: Immersive 3D Visualization Laboratory Demonstrator Title of thesis (in Polish): Demonstrator możliwości Laboratorium Zanurzonej Wizualizacji Przestrzennej Supervisor signature Head of Department signature PhD MEng Jacek Lebiedź PhD MEng, Professor with habilitation Bogdan Wiszniewski Gdańsk, 2014
  • 2. 2 STATEMENT First name and surname: Jan Klimczak Date and place of birth: 12.04.1982, Gdańsk ID: 112006 Faculty: Faculty of Electronics,Telecommunications and Informatics Field of study: informatics Cycle of studies: postgraduate studies Mode of studies: Part-time studies I, the undersigned, agree/do not agree* that my diploma thesis entitled: Immersive 3D Visualization Laboratory Demonstrator may be used for scientific or didactic purposes.1 Gdańsk, ................................. ................................................ signature of the student Aware of criminal liability for violations of the Act of 4th February 1994 on Copyright and Related Rights (Journal of Laws 2006, No. 90, item 631) and disciplinary actions set out in the Law on Higher Education (Journal of Laws 2012, item 572 with later amendments),2 as well as civil liability, I declare that the submitted diploma thesis is my own work. This diploma thesis has never before been the basis of an official procedure associated with the awarding of a professional title. All the information contained in the above diploma thesis which is derived from written and electronic sources is documented in a list of relevant literature in accordance with art. 34 of the Copyright and Related Rights Act. I confirm that this diploma thesis is identical to the attached electronic version. Gdańsk, ................................. ................................................ signature of the student I authorise the Gdańsk University of Technology to include an electronic version of the above diploma thesis in the open, institutional, digital repository of the Gdańsk University of Technology and for it to be submitted to the processes of verification and protection against misappropriation of authorship. Gdańsk, ................................. ................................................ signature of the student *) delete where appropriate 1 Decree of Rector of Gdańsk University of Technology No. 
34/2009 of 9th November 2009, TUG archive instruction addendum No. 8. 2 Act of 27th July 2005, Law on Higher Education: Art. 214, section 4. Should a student be suspected of committing an act which involves the appropriation of the authorship of a major part or other elements of another person’s work, the rector shall forthwith order an enquiry. Art. 214 section 6. If the evidence collected during an enquiry confirms that the act referred to in section 4 has been committed, the rector shall suspend the procedure for the awarding of a professional title pending a judgement of the disciplinary committee and submit formal notice of the committed offence.
  • 3. 3 STRESZCZENIE Niniejsza praca badawczo-rozwojowa przedstawia możliwości tworzenia aplikacji do uruchomienia w instalacjach typu CAVE. Praca rozpoczyna się od przeglądu istniejących rozwiązań, opisuje w jaki sposób i gdzie są one wykorzystywane a następnie skupia się na prezentacji czym jest CAVE i jak jest on zbudowany. Kolejne rozdziały opisują Laboratorium Zanurzonej Wizualizacji Przestrzennej (LZWP) na Politechnice Gdańskiej i inne podobne rozwiązania. Następnie została przedstawiona metodologia tworzenia aplikacji pod CAVE. Zawiera ona przegląd i porównanie bibliotek kodu, frameworków oraz edytorów z graficznym interfejsem użytkownika (GUI), który przyśpiesza i upraszcza proces tworzenia aplikacji. W końcowej części pracy został przedstawiony opis utworzonej aplikacji demonstracyjnej, która może zostać uruchomiona w CAVE na LZWP. Zakończenie zaś przedstawia dalsze plany rozwojowo-badawcze. Pierwszy załącznik opisuje aplikacje demonstracyjne skompilowane i uruchomione podczas tworzenia niniejszej pracy. Natomiast kolejny załącznik jest dokumentacją, która pokazuje, w jaki sposób rozpocząć pracę z frameworkiem Virtual Reality Toolkit ViSTA. Wynikiem niniejszej pracy jest potwierdzenie, iż tworzenie aplikacji od podstaw w kodzie do uruchomienia w CAVE jest skomplikowanym procesem. Mamy do dyspozycji kilka dobrych frameworków, na których może bazować daną aplikacja. Prostszym rozwiązaniem tworzenia aplikacji pod CAVE jest wykorzystanie istniejącego edytora z graficznym interfejsem użytkownika, który pozwala na utworzenie takiej aplikacji w sposób wizualny. To znacznie ułatwia i przyśpiesza proces projektowania aplikacji pod CAVE, ale w pewien sposób ogranicza możliwości ich tworzenia. Dziedzina nauki i techniki: Rzeczywistość Wirtualna, CAVE, Grafika 3D, OpenGL, Przetwarzanie rozproszone, Silniki gier i symulacji, Symulatory, Silniki zarządzania sceną, Silniki 3D.
  • 4. 4 ABSTRACT This research and development (R&D) work present possibilities of creating applications to run in cave automatic virtual environment (CAVE) installations. It begins of little review of existing solutions, describing how and where they are used, and then it shows what is CAVE and how it is build. The next topic describes the Immersive 3D Visualization Laboratory (I3DVL, pol. Laboratorium Zanurzonej Wizualizacji Przestrzennej - LZWP) at Gdansk University of Technology. Then it focuses on methodology of developing CAVE applications. It contains review and comparison of code libraries, frameworks and editors with graphic user interface (GUI) which speed-up and make easier developing process. At least it provides description of developed example application to run in CAVE at I3DVL and predicts the possibilities of future R&D. The first supplementary part shows demonstrate applications compiled and run during creating of this work and second part is about Virtual Reality Toolkit ViSTA as documentation for start-up to work with it. The result of this work is that process of developing applications from scratch through coding them for CAVE is difficult. There are a few good frameworks which you may base on. The easier way of creating applications for CAVE is to use some dedicated tool with GUI where you can create application in studio in visual way. This make easier and speed-up process of developing applications for CAVE but it have also own limitations which you will read about it further in this work. Keywords: Virtual Reality, CAVE, 3D Computer Graphics, OpenGL, Distributed Rendering, Game and Simulation Engine, Simulators, Scene Graphs, 3D Engine.
  • 5. 5 TABLE OF CONTENTS STRESZCZENIE........................................................................................................................... 3 ABSTRACT ................................................................................................................................... 4 LIST OF MAJOR SIGNS AND ABBREVIATIONS ........................................................................ 8 INTRODUCTION AND PURPOSE OF WORK ............................................................................. 9 1. CAPABILITIES OF VIRTUAL REALITY.................................................................................. 11 1.1. Image ..........................................................................................................................................11 1.2. Sound ..........................................................................................................................................11 1.3. Other channels – touch and smell...............................................................................................12 1.4. Interaction....................................................................................................................................12 2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY ......................................................... 13 2.1. CAVE...........................................................................................................................................13 2.2. Edge blending .............................................................................................................................14 2.3. Colour Mapping...........................................................................................................................15 2.4. 3D Image.....................................................................................................................................15 2.5. 
Eye Tracking ...............................................................................................................................16 2.6. Surround 8.1 sound.....................................................................................................................17 2.7. VirtuSphere - locomotion platform..............................................................................................17 3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS......................................... 19 3.1. I3DVL - Gdansk University of Technology ..................................................................................19 3.2. Silesian University of Technology, Poland..................................................................................20 3.3. aixCAVE - Aachen University, Germany.....................................................................................21 3.4. Possible applications of CAVES .................................................................................................22 3.4.1. Flooding Crisis Simulation......................................................................................... 22 3.4.2. Molekül Visualisierung (MCE) ................................................................................... 22 3.4.3. Neurochirurgieplanung in immersiven Umgebungen............................................... 23 3.4.4. Virtual Gallery ........................................................................................................... 23 3.4.5. Example students projects........................................................................................ 24 4. PROPOSAL OF USE I3DVL ................................................................................................... 27 4.1. Simulations..................................................................................................................................27 4.2. 
Medicine......................................................................................................................................27 4.3. Prototyping ..................................................................................................................................27 4.4. Games.........................................................................................................................................27 4.5. Fun ..............................................................................................................................................28 4.6. Marketing.....................................................................................................................................28 4.7. Trainers .......................................................................................................................................28 5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL................................................. 29 5.1. I3DVL as complete platform........................................................................................................29 5.2. Creating Virtual Reality applications for CAVE ...........................................................................30 5.2.1. Existing libraries and frameworks............................................................................. 31
  • 6. 6 5.2.1.1. API Graphic .........................................................................................................31 5.2.1.1.1. DirectX ..........................................................................................................31 5.2.1.1.2. OpenGL.........................................................................................................32 5.2.1.2. Scene graph engines...........................................................................................32 5.2.1.2.1. OpenGL Performer .......................................................................................33 5.2.1.2.2. OpenSG ........................................................................................................33 5.2.1.2.3. OpenSceneGraph .........................................................................................34 5.2.1.2.4. NVIDIA SceniX - NVSG ................................................................................35 5.2.1.2.5. Summary.......................................................................................................37 5.2.1.3. Frameworks for CAVE solutions..........................................................................39 5.2.1.3.1. ViSTA............................................................................................................39 5.2.1.3.2. VR Juggler ....................................................................................................42 5.2.1.3.3. Equalizer .......................................................................................................42 5.2.1.3.4. Summary.......................................................................................................45 5.2.1.4. Support libraries ..................................................................................................46 5.2.1.4.1. Cg Toolkit ..................................................................................................46 5.2.1.4.2. 
NVIDIA OptiX ............................................................................................46 5.2.2. Graphical editors .......................................................................................................47 5.2.2.1. Create own editor with GUI .................................................................................48 5.2.2.2. GUI libraries.........................................................................................................48 5.2.2.3. Existing graphic editors ..........................................................................................48 5.2.2.3.1. Simulators.........................................................................................................49 5.2.2.3.2. CAVE supported and dedicated .......................................................................56 5.2.2.3.2.1. VBS - Virtual Battlespace...........................................................................56 5.2.2.3.2.2. Quazar3D...................................................................................................57 5.2.2.3.2.3. EON Studio ................................................................................................60 5.2.2.3.2.4. Vizard.........................................................................................................62 5.2.2.3.2.5. Summary....................................................................................................65 5.2.2.3.3. Game dedicated engines..................................................................................66 5.2.2.3.3.1. UNIGINE ....................................................................................................66 5.2.2.3.3.2. UDK............................................................................................................67 5.2.2.3.3.3. CryEngine ..................................................................................................68 5.2.2.3.3.4. 
UNITY ........................................................................................................68 6. DEMONSTRATIVE PROJECT FOR I3DVL............................................................................70 6.1 System project............................................................................................................................. 70 6.2 Implementation notices................................................................................................................ 73 6.3 Quality tests................................................................................................................................. 74 6.4 Performance tests ....................................................................................................................... 75 6.5 System presentation.................................................................................................................... 76 6.6 User manual ................................................................................................................................ 78 7. FUTURE R&D WORK FOR I3DVL .........................................................................................80 8. SUMMARY ..............................................................................................................................81 THE STUDY BENEFITED FROM THE FOLLOWING REFERENCES ......................................82
LIST OF FIGURES...................................................................................................... 85 LIST OF TABLES........................................................................................................ 86 Attachment A - Example Applications......................................................................... 87 1. ViSTA .............................................................................................................................87 2. OpenSG 1.8 ...................................................................................................................91 3. OpenSG 2.0 ...................................................................................................................96 4. OpenSceneGraph 3 .....................................................................................................103 4.1. Sample applications based on the books .................................................................127 5. Nvidia SceniX 7............................................................................................................138 Attachment B - ViSTA ............................................................................................... 142 1. Download framework....................................................................................................142 2. Compilation preparation...............................................................................................142 3. Setting up environment variables.................................................................................143 4. Prepare project for Visual Studio 2012 ........................................................................143 5. 
Libraries required by the sample application ...............................................................148 6. Compilation of the sample applications .......................................................................148 7. Configure sample application.......................................................................................150 8. Manual creation of a project in Visual Studio 2012......................................................154 9. 3D objects import test ..................................................................................................157
LIST OF MAJOR SIGNS AND ABBREVIATIONS
2D – Two-dimensional space
3D – Three-dimensional space
CAVE – Cave Automatic Virtual Environment
CryVE – CryEngine automatic Virtual Environment
FPS – Frames Per Second
GUI – Graphical User Interface
HDD – Hard Disk Drive
I3DVL – Immersive 3D Visualization Laboratory at Gdansk University of Technology
ODE – Open Dynamics Engine
R&D – Research and Development
SSD – Solid-State Drive
UDK – Unreal Development Kit
VBS – Virtual Battlespace
VR – Virtual Reality
INTRODUCTION AND PURPOSE OF WORK

This Master's Thesis is an R&D work on the possibilities of using and developing applications for CAVEs. I begin with an explanation of virtual reality (VR). Then I describe the Immersive 3D Visualization Laboratory (I3DVL) at Gdansk University of Technology in Poland, covering the many elements that make up the laboratory. Its main part is a six-wall cave automatic virtual environment (CAVE) with multiple projectors per wall to increase the quality of the displayed image. The system also includes image blending, a tracking system, surround sound and the VirtuSphere locomotion platform. The locomotion platform is a large sphere which the user can enter and freely walk or run in to move through virtual reality. The combination of the VirtuSphere with a CAVE is quite interesting, and perhaps the first such configuration in the world.

Next I review existing CAVE installations. I visited the i3D company with my supervisor PhD Jacek Lebiedź and PhD Adam Mazikowski. I then describe a few CAVE configurations from neighbouring countries, such as the biggest CAVE in Europe at Aachen University in Germany, and continue with a few impressive CAVE installations around the world. I also include the Survey Simulator from DNV, whose VR Training Centre at the DNV Academy Poland in Gdynia trains and certifies employees from all over the world. It is worth mentioning that this system has shortened training time from 5 years to about 1 year, which is a great result. The centre features an interesting and very comfortable rear-projection system which improves the immersive experience, so here you will also find notes from my visit to the DNV VR Training Centre.

After this introduction I describe possible uses of the Immersive 3D Visualization Laboratory in the domains of simulators, medicine, prototyping, games, entertainment, marketing and training, showing how such different projects can make use of the CAVE laboratory. 
Next I move to the methodology of creating solutions for I3DVL and point out common problems and requirements in creating applications for a CAVE. I note that applications can be developed from scratch, where I point to the important functionality of the OpenGL and DirectX APIs. Then I move to existing scene graph engines that are in common use, such as OpenSceneGraph, OpenSG and NVIDIA SceniX, and some that are now obsolete but were very common and important in the past, like OpenGL Performer or CAVELib. These are powerful graphics libraries which make the process of developing 3D applications and VR simulations easier and faster. Afterwards you will read about frameworks such as Equalizer, VR Juggler and ViSTA, which enable you to create CAVE applications based on the previously described scene graphs. The main functionality of these frameworks is distributed rendering and display, easily configurable setups (develop once, run on different computer configurations and CAVE installations) and support for many input and output devices such as manipulators and trackers. This topic focuses on coding applications for a CAVE. To speed up the development of CAVE applications, the use of GUI editors is recommended; I describe the Qt framework, which is well suited for this. To go one step further, you can use existing simulators or GUI engines. VBS from Bohemia Interactive Simulations is a widely used training simulator for the military, police, fire brigades and ambulance services all over the world, for example in the US Army, NATO and currently in Poland as well. Gdansk University of Technology has signed a contract under which Bohemia and the University will create a Crisis Management Centre
based on Virtual Battlespace (VBS), a configurable simulator system. If these are not enough, you can use CAVE-dedicated editors like Quazar3D, EON Studio or Vizard, which are complete environments specialized in fast and easy creation of CAVE applications. There are also a few very good game engines on the market, such as UNIGINE, the Unreal Development Kit (UDK), CryEngine and Unity, which are powerful tools for creating AAA-level games. For almost all of the scene graphs, graphics editors and game engines I went through compiling, configuring, running and analyzing the frameworks as well as their examples and some of their tutorials, to really learn how they work. The result of this analysis is a large attachment describing about three hundred small applications; on the attached disc you can see their images in HD resolution. There is also an additional attachment describing the ViSTA framework, which is provided without any documentation, so it should help you start working with it if you decide to use it.
1. CAPABILITIES OF VIRTUAL REALITY

Virtual reality has many names and meanings and is interpreted differently by different people and institutions, but a few common capabilities and elements glue these interpretations together. The first is a virtual world, then immersion, sensory feedback (responding to user input) and interactivity [1]. The virtual world is the environment where the action takes place: an imaginary space, often manifested through a medium, described by objects in a space. Immersion means that the user must be physically immersed, with "a sense of presence" within an alternate reality or point of view. The alternate world may be a representation of an actual space that exists somewhere, or it could be a purely imaginary environment. Sensory feedback allows participants to select their vantage point by positioning their body and to affect events in the virtual world; the VR system provides direct sensory feedback to the participants based on their physical position, usually through a tracking system. Interactivity is the fourth element of VR: responding to user interaction. This gives the opportunity to interact with virtual worlds and simulations.

1.1. Image

Do we need a photo-realistic image to create virtual reality? No, we don't. There are VR systems for blind people without any graphics, where people can interact and act in virtual reality worlds. We can go back to the past, when the first virtual worlds were created in games running in text mode; the first computer games could be considered text-based VR. By improving graphic quality we simply improve the immersion of VR. What matters is how we see: whether we see in colour, on a big screen, or through glasses such as an HMD. Resolution is important for the quality of virtual worlds. How do we see? Can we believe that what we see could be real? Virtual reality is a medium. 
A virtual world is a representation of some world that may or may not exist in the physical world. To visualize it, we use an image.

1.2. Sound

In real life we hear sound everywhere; only in a vacuum is there no sound. That is why we sense that if there is no sound somewhere, it cannot be real. The same holds in VR: sound improves our immersion. Without sound we lose a lot of immersion and feel that something is not real 1. The quality of sound is very important, as is a good spectrum of sound. We know many sounds and we know how they should sound. The position of sound in 3D is also very important, as it is in real life. We can use background sound, sound effects and voices, which can also be recognized by the system for interaction. All of them improve our level of immersion. 1 This does not apply to deaf people.
1.3. Other channels – touch and smell

Virtual reality systems can be enhanced by other channels, like touch and smell, which improve immersion. They are optional, not required, but thanks to them you will feel just like in the real world. Touch is when you can touch a real device which transfers your manipulations into the virtual world. You can have realistic devices or platforms, e.g. a car cabin, plane cockpit or submarine control room, which can be similar or identical to those in the real environment. This gives you more accurate control over the system and lets you train on real-like devices and situations in different configurations; using such techniques you can work with VR the way you are used to. There are also many devices created specifically for virtual reality which help you navigate in space. You can use motion-driven manipulators which react to translation and rotation in different axes, even simultaneously. You can choose between analogue, digital or mixed devices. Analogue thumbsticks give you smooth control as you slide up or down, letting you specify values almost visually; the drawback is the lower precision of analogue control compared to digital. Digital devices provide state buttons, keypads or touch input. On the other side there are devices which react to the simulation. Haptic devices are one of them: they usually have mechanisms providing force feedback which you feel while working with them. For example, using a haptic pen while painting a 3D model, you will feel the touch at the virtual contact of the pen with the geometry. Other kinds of devices may be installations which produce water bubbles, fog or other substances controlled by the simulation. On modern platforms you can also smell many different scents, which completes the experience of immersion.

1.4. Interaction

Interaction is very important in VR. 
Without the possibility to interact, you would feel just as if you were watching a movie. To be immersed in a virtual world, it needs to react in real time. There are many ways a simulation can interact with you. You can manipulate it through devices such as manipulators, keyboards, mice and trackballs, as well as mobile, real-like or touch devices, simulation platforms, cockpits, etc. You can use motion capture and interact through movement, or use sensors such as gloves or body tracking. You may use voice control to speak commands. Head tracking greatly improves immersion: as you change the position and orientation of your viewpoint, the scene is displayed from a different position and angle in real time. You can combine a head tracking system with other manipulators to get the best results and better interaction with the simulated VR [2].
2. I3DVL AT GDANSK UNIVERSITY OF TECHNOLOGY

I3DVL is an advanced CAVE laboratory built at the end of 2014 at Gdansk University of Technology. The process of specifying and setting up the laboratory took a few years of research. The main idea was to design a high-end CAVE with something unique that would improve its usability. The decision was to install a locomotion platform inside a 6-wall CAVE as the distinguishing feature. The locomotion platform is a big sphere which the user can enter to walk freely around the virtual world. This is a solution unique in the world, which opens up R&D on new kinds of applications.

2.1. CAVE

A cave automatic virtual environment (better known by the acronym CAVE) is an immersive virtual reality environment where projectors are directed at the walls of a room-sized cube (see Fig. 2.1) [3]. Fig. 2.1. Typical CAVE installation [3] The CAVE at the University has 6 walls which form a cubic room. Each wall is 3.4 m wide and high. The room is about 3 m above the floor of the building containing the CAVE. The walls are made of acrylic glass. The floor is strengthened and divided into two parts with a very thin gap between them, invisible from above. The glass floor withstands a load of about 500 kg. The image is projected by a rear-projection system consisting of 12 DLP Full HD 3D 120 Hz projectors with a laser calibration system. A metal construction positions the CAVE room at the second level of the building; at this level there is also a light floor which eases entrance into the room. The first level contains 2 projectors with mirrors which project the image onto the CAVE floor. There are also 10 additional projectors located around the room which project the image onto the surrounding CAVE walls.
The displayed image is high quality, with a resolution of 1920x1920 px. Such a display system needs huge computing power, provided by 14 servers, each with 32 GB RAM, an NVIDIA Quadro K5000 4 GB, an SSD and a full-duplex InfiniBand 40 Gb/s fibre network, which guarantees high quality of the displayed image.

2.2. Edge blending

The high-quality image is created by displaying two images from 2 projectors on one wall. The problem is that the CAVE walls are square and the images from two sources do not exactly fill the surface of the wall. A second problem is how to join two such images into one so that no gap or artefacts are visible between them. Fig. 2.2. Edge blending and Color Mapping [4] The solution is to set up the two images so that they overlap, and to use edge blending. Edge blending blends two overlapped images in the region where they overlap. It creates a seamless image by adjusting the brightness at adjoining edges when multiple projectors are used side by side to reproduce a single widescreen image [4]. Fig. 2.3. Edge blending function [4] Blending, in simple terms, is the process of ramping transparency from zero to one hundred percent across the overlapped part of each image, which makes the seam between them invisible [5].
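As an illustration, the blend ramp can be sketched as a per-pixel weight that falls linearly from 1 to 0 across the overlap zone, so that the two projectors' weights always sum to 1. This is a minimal sketch: real blending hardware applies calibrated, gamma-corrected curves rather than a plain linear ramp, and the 720..1200 px band below is only an assumed placement of the 480 px overlap mentioned later in the text.

```python
def blend_weight(x, overlap_start, overlap_end):
    """Linear edge-blending weight for the projector that fades OUT
    over [overlap_start, overlap_end]; the opposite projector uses 1 - w."""
    if x < overlap_start:
        return 1.0          # fully this projector's region
    if x > overlap_end:
        return 0.0          # fully the other projector's region
    # linear ramp from 1 down to 0 inside the overlap zone
    return (overlap_end - x) / (overlap_end - overlap_start)

# Example: a 1920 px wall image with an assumed 480 px overlap band
# in the middle (pixels 720..1200).
w_top = blend_weight(960, 720, 1200)   # midpoint of the overlap
w_bottom = 1.0 - w_top
# at the seam centre both projectors contribute equally (0.5 each)
```

Note that production systems apply such a ramp in linear light (after inverse gamma correction), otherwise the seam remains visible as a brightness band.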
2.3. Colour Mapping

The use of multiple projectors to create a larger image can result in colour variations due to slight differences in projector image processing. Each projector is therefore adjusted so that the same colours are reproduced when multiple projectors are used simultaneously.

2.4. 3D Image

Humans have two eyes situated close together, side by side. This positioning means that each eye views the same area from a slightly different angle, and the two views are merged in the brain to form a single image. To provide a realistic image we need to display a different image to each eye; otherwise the image will be flat and will not look real [6]. Fig. 2.4. Perception of human viewing [6] To see a real 3D image we need two slightly different images, each visible to only one eye. This technique is called stereo 3D. A few techniques are available for displaying a stereo 3D image. Generally there are passive and active systems, both requiring special glasses. Passive systems use polarisation filters or spectrum selection in the glasses. Active systems alternately open and close the shutter in front of each eye, displaying the image for one eye and then for the other in turns. The University decided to use both the passive Infitec system with spectrum selection and an active solution based on the NVIDIA 3D Vision Pro system, which guarantees high-quality 3D immersion and is dedicated to NVIDIA Quadro graphics cards.
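The two per-eye views can be derived from one tracked head position by offsetting it along the viewer's right axis by half the interpupillary distance in each direction. The sketch below is an illustration of that idea only (the function name, coordinate convention and the 6.5 cm default are assumptions, not taken from any framework discussed in this thesis):

```python
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.065):
    """Split a tracked head position into left/right eye positions.
    ipd: interpupillary distance in metres (~6.5 cm on average)."""
    right = np.asarray(right_dir, dtype=float)
    right /= np.linalg.norm(right)       # unit vector pointing to the viewer's right
    offset = right * (ipd / 2.0)
    head = np.asarray(head_pos, dtype=float)
    return head - offset, head + offset  # (left eye, right eye)

# Example: viewer standing at the CAVE centre with eyes 1.7 m up,
# x axis assumed to point to the viewer's right
left, right = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
# left -> [-0.0325, 1.7, 0.0], right -> [0.0325, 1.7, 0.0]
```

Each of the two images is then rendered from its own eye position, which is exactly the pair of views the active or passive glasses separate.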
Fig. 2.5. NVIDIA Quadro Sync [7] Displaying a stereo 3D image is a little more complicated in a CAVE environment. The displays must be synchronised so that the images projected by the different projectors are seen at the same moment on every display. Synchronisation is performed by special hardware which synchronises the 3D signal for each graphics card. These NVIDIA Quadro Sync cards are connected to each other through a separate network. Quadro Sync connects to the NVIDIA Quadro GPUs, synchronising them with the displays or projectors attached to them. This guarantees correct display of the stereo 3D image on every display in the CAVE [7].

2.5. Eye Tracking

A tracking system detects your motion and reacts to it. It can track full-body motion or individual parts such as a hand or the head. Most important in a CAVE is the eye tracking system: since you can walk around inside the CAVE, you need a different 3D perspective from each point of view. This is done in real time by the eye tracking system, which uses a special positioning system. Fig. 2.6. Eye tracking glasses with positioning system [8]
The tracking system consists of cameras and IR markers which accurately locate the glasses and their transformation in space. This information is then used in the simulation to transform the displayed image, enhancing the virtual reality experience [8].

2.6. Surround 8.1 sound

8.1 sound is the common name for an eight-channel plus subwoofer surround audio system, commonly used in home theatre configurations. Fig. 2.7. Surround system [9] The CAVE is a cubic room where sound comes from different directions. This is achieved by an 8-channel surround sound system in which each channel is independent. The system produces real 3D sound, which gives you the chance to feel immersed in a scene as part of the action [9].

2.7. VirtuSphere - locomotion platform

The main concept in creating the I3DVL laboratory was to add something unique and useful to the CAVE installation. The VirtuSphere is a platform for immersion into cyberspace: a big semi-transparent sphere which the user can enter, controlling movement in the virtual world by simply walking [10].
Fig. 2.8. VirtuSphere in action [10] The platform rotates freely in any direction according to the user's steps; the user is able to walk and run inside the sphere. Sensors collect and send data to the computer in real time, and the user's movement is replicated within the virtual environment. This gives full-body immersion into virtual reality.
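The mapping from sphere rotation to virtual movement is essentially arc length: a rotation angle about a horizontal axis times the sphere radius gives the distance walked. The sketch below is an illustrative simplification (it assumes the device reports incremental rotation angles about two horizontal axes, which is not documented in the source):

```python
import math

SPHERE_DIAMETER = 3.05  # metres, as for the I3DVL VirtuSphere

def sphere_step(pitch_deg, roll_deg, diameter=SPHERE_DIAMETER):
    """Convert incremental sphere rotation (degrees about the two
    horizontal axes) into forward/sideways displacement in metres,
    using arc length = angle (rad) * radius."""
    radius = diameter / 2.0
    forward = math.radians(pitch_deg) * radius
    sideways = math.radians(roll_deg) * radius
    return forward, sideways

# Rotating the sphere one full turn by walking moves the user one
# circumference (~9.58 m) through the virtual world.
full_turn, _ = sphere_step(360.0, 0.0)
```

This is consistent with the later remark that the VirtuSphere technically behaves like a mouse: it delivers 2D displacement increments to the application.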
3. EXISTING CAVE SYSTEMS AND LOCOMOTION PLATFORMS

I would like to introduce the CAVE at Gdansk University of Technology in Poland, one of only a few 6-wall CAVE systems in the world. Inside there is a big 3.05-metre sphere, a locomotion platform which the user can enter and move around in freely, moving through the virtual world. This is an impressive configuration, as well as one of the most advanced in the world. Another impressive solution is the CAVE at Aachen University in Germany, one of the biggest in Europe. Its walls are 5.25 x 3.3 m in size, and the system uses 24 projectors, combining the images of 4 projectors on each wall to improve display quality.

3.1. I3DVL - Gdansk University of Technology

It took Gdansk University of Technology a few years to create the Immersive 3D Visualization Laboratory (I3DVL) with its 6-wall CAVE. User immersion is enhanced by the 6-wall projection. The CAVE is one of the most advanced in Europe and a top solution in the world. It is unique because it can optionally host a mobile locomotion platform installed inside the CAVE. The locomotion platform is a big sphere named VirtuSphere, which the user can enter to walk naturally around the virtual world [11]. The whole solution is based on high-end technologies which provide quality and realism of simulation at the highest level. A 13-metre-high building with a glass room inside was erected for this system. Images are rear-projected onto each wall from the outside, using 2 projectors per wall to double the image resolution. The computer system is based on 14 computers with 32 GB RAM, an NVIDIA Quadro K5000 4 GB, a fast SSD and a fibre network. Each computer is connected to a high-quality Barco DLP HD 3D 120 Hz projector with laser calibration. Fig. 3.1. Proposed room schema of I3DVL (currently slightly modified) [11]
Technical specification:  CAVE with walls 3.4 x 3.4 m placed 3 m above the floor,  Spherical locomotion platform with a diameter of 3.05 m (VirtuSphere),  Acrylic glass on all 6 screens, floor load capacity of at least 500 kg,  12 DLP HD 3D 120 Hz projectors with laser image calibration (< 0.5 mm),  14 computers, 32 GB RAM, NVIDIA Quadro K5000 4 GB, SSD drives,  InfiniBand fibre network, 40 Gb/s full duplex,  Surround sound 8.1,  Tracking system.

3.2. Silesian University of Technology, Poland

Silesian University of Technology probably built the first CAVE in Poland. I visited it with PhD Jacek Lebiedź and PhD Adam Mazikowski to see it in action; it was my first contact with a CAVE. This is a simple system with 3 walls and a floor. The image is displayed at 1024x768 px, a mid-range resolution: when you come close, you can see individual pixels. There are just 4 projectors, one per wall, and no mirrors are used. The screens are made from an elastic material, and the floor is made of wood. Fig. 3.2. Author in the CAVE at Silesian University of Technology This is a really simple installation which uses the powerful Quazar3D application to display simulations. When I put on the glasses and entered the CAVE, the impression was just amazing. I had never seen anything like it before. Quazar3D provides a high level of visualisation in which I
felt completely immersed. I felt that what I saw was like a real world. So even in a simple CAVE installation it was an amazing experience for me. The only minus was the lack of a ceiling and a back wall, which forced me to focus on the front wall and prevented me from looking up. Even so, this amazing feeling of immersion is hard to describe.

3.3. aixCAVE - Aachen University, Germany

The solution created by Aachen University in 2012 is a 5-wall CAVE which gives you fully free, 360-degree movement. With a size bigger than 5 x 5 m and rear projection, this is the biggest such solution in Europe. The system provides a high-quality image: bright, uniform, with active stereo 3D vision, guaranteeing an excellent experience for the user [12]. Fig. 3.3. CAVE installation at Aachen University [12] The stereoscopic 3D projection is created by 24 DLP full-HD projectors: four projectors for each wall and eight for the floor (which is divided into 2 screens). The rendering system consists of 24 computers with NVIDIA Quadro 6000 graphics cards (2 per slave and 1 in the master; an older series), 2x Intel Xeon with 6 cores at 2.7 GHz, 24 GB RAM and a fast InfiniBand QDR (4x) fibre network. Technical specification:  Five screens with rear projection (4 walls and floor),  24 HD projectors with active stereo 3D NVIDIA 3D Vision Pro 120 Hz,  Walls 5.25 m x 3.30 m,  4 projectors per wall with edge-blended image,  Floor 5.25 m x 5.25 m,  8 projectors for the glass floor of 6.5 cm thickness,
 8-camera optical tracking system,  Power consumption of about 67 kW,  Automatically closing door.

3.4. Possible applications of CAVEs

Possible applications of CAVEs are shown through examples of use at the Virtual Reality Center (VRC) of Johannes Kepler Universität, Austria. The VRC at Johannes Kepler University was created in 2005. On the attached DVD there are additional movies and photos in the directories "documentation\movies\Virtual Reality Center - Johannes Kepler Universitat" and "documentation\photos\Virtual Reality Center - Johannes Kepler Universitat".

3.4.1. Flooding Crisis Simulation

This application is a simulation of flooding based on a Grid platform (the CrossGrid EU project) [13]. It provides the ability to simulate different floods with different parameters. Using the CAVE, experts may better estimate the ravages of a flood and better counteract them. It is based on OpenSG [14]. Fig. 3.4. Flooding system in action [14]

3.4.2. Molekül Visualisierung (MCE)

MCE is a collection of research programs for visualizing electron density distributions. The application was created to visualize the results of calculations on X-ray diffraction data. Versions are available for Windows, Linux, IRIX and the CAVE [15].
Fig. 3.5. Molecules and particle system visualization [15]

3.4.3. Neurochirurgieplanung in immersiven Umgebungen (neurosurgery planning in immersive environments)

The project was created in cooperation with the Medicine Department of the University of Innsbruck and the Institute of Fluid Mechanics. The application supports medical education and may help to plan neurosurgical procedures [16]. Fig. 3.6. Anatomical structure in medicine [16]

3.4.4. Virtual Gallery

The Virtual Gallery provides virtual travel and the study of scenes in virtual worlds.
Fig. 3.7. Travel in a virtual world

3.4.5. Example student projects

A few student projects were created for the CAVE installation. 3D Kunstwerk: this application shows interaction with 3D art. It is based on CAVELib [17]. Fig. 3.8. Interactive 3D art [17]
Multi User Maze: this application is a maze in which a few users may participate at once. It is based on OpenGL Performer [18]. Fig. 3.9. Multi user maze [18] CAVE Skiing: this application attempts to bring skiing into the CAVE. It is based on OpenSG [19].
Fig. 3.10. Ski simulator [19]
4. PROPOSED USES OF I3DVL

The CAVE offers many possibilities of use. The VirtuSphere is movable, so it is possible to use the CAVE standalone or together with the locomotion platform, which increases the range of uses. It can be used in simulations, medicine, prototyping, games, entertainment, marketing, training and other disciplines. Only imagination limits the applications that can run in a CAVE. You can create new ones optimised for the CAVE or just run existing ones with a few modifications, using the full 6-wall environment or only a few of its walls.

4.1. Simulations

The first group of possible applications are simulations, in which you can train, learn or see how something works. Gdansk University of Technology, in cooperation with Bohemia Interactive Simulations from the Czech Republic, will create a "Crisis Management Centre" based on their VBS 3 engine. This kind of simulation prepares people for what to do when an incident happens: you do not just imagine what it will look like, you see it and prepare for it.

4.2. Medicine

Conventional medicine needs models of organs to work with. Sometimes they are very small, and it is difficult to see on a real model how some parts are built or how they work. Fears can also be treated here: when somebody fears something, they can gradually be accustomed to it. No one wants to be treated or operated on by a poorly trained, inexperienced person. CAVE training in medicine provides adequate learning paths, showing exactly how an organism works, and provides exercises. You can learn how to perform operations, and how organs are built and how they work, without the need for real models. This improves medical experience.

4.3. Prototyping

Prototyping is a costly and time-consuming process. Creating a real prototype is usually an expensive, one-off operation. 
Sometimes it is even impossible to create a prototype at an intermediate stage because of cost or time limits. The CAVE is an ideal solution for this: you can prototype any product and verify it at real scale. Additionally, you can change the prototype in real time and see the changes immediately. This provides great possibilities for prototyping.

4.4. Games

Games get better every day, aiming to provide the immersive feeling that it is not a game, but reality. A CAVE increases this immersion and provides more natural and free navigation in virtual worlds: inside a CAVE you feel that you are inside the virtual world. Every game will look different; a game which does not immerse you on a PC may immerse you completely here. You can cooperate with others in multiplayer mode, where players may use the CAVE or different platforms. This gives rich possibilities for playing games in a CAVE.
4.5. Fun

Some applications are just for fun. The CAVE gives new possibilities to feel immersion: even a simple animation or movie can provide more immersion and fun than you have ever had before. It is a place which gives you a lot of fun, and you will keep discovering it. Perhaps you will discover a new type of fun and love it. You can travel, play with toys, relax with animals and nature and do many more amazing things; thanks to the CAVE it will look real.

4.6. Marketing

Marketing is another group of applications. You can plan what an advertisement will look like and where it should be placed for the best result, and change the configuration in real time. You may provide a virtual walk through a new estate, or present an apartment in different styles. Maybe a hotel, or the view through a window? That is also possible, and it will help you a lot when you want to sell or build PR.

4.7. Trainers

You can simulate vehicles, devices and other things, supported by real mock-ups such as a cockpit or control panel. This can teach you what you should or should not do, and why. In big environments some randomness can be added to training paths. In contrast to real training, virtual training may lower costs and sometimes allows training in ways that are not possible in reality. This is a big advantage over conventional training.
5. METHODOLOGY OF CREATING SOLUTIONS FOR I3DVL

Creating applications for a CAVE is in many cases different from creating a typical 3D application. Of course, you can use existing editors which support CAVEs; then it looks just the same, or needs only small modifications in code. But when you want to create an application from scratch, using only frameworks, it requires more advanced work and a few things to keep in mind. The main constraint is that CAVE applications must work in a distributed environment with synchronisation at every frame to produce a proper stereo 3D image. Some objects must look the same on all distributed nodes and simply need to be synchronised, for example the transformation and animation of objects. The first problem arises if you use an algorithm based on randomness: you then need to synchronise every step of the algorithm across all nodes at each frame, which can be difficult. The second problem is maintaining state which must differ on each node, such as the camera vector, which is different for each node because of the cubic room projection. The third problem is local node computation: there is no need to perform all computation on the server and send the results to all nodes, as this only increases network usage. Remember that there are at least 12 computers connected to each other. A typical CAVE application has one server which controls and synchronises object state between nodes. There is not much time per frame, so if the network bandwidth is exceeded, the application will stutter. So the first thing to note is that a CAVE application is a client-server application: the server controls the whole application, shares object state and synchronises every frame, while the clients are renderers which render frames, perform local calculations and display the image on the projectors. 
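The client-server frame loop described above can be sketched as follows. This is a minimal illustration with invented class and message names, not the API of Equalizer, VR Juggler, ViSTA or any other framework discussed here; those provide their own distribution and swap-barrier layers.

```python
# Minimal sketch of per-frame state synchronisation between a CAVE
# server and its render clients (all names are illustrative).

class Client:
    def __init__(self, wall_id):
        self.wall_id = wall_id

    def render(self, msg):
        # Each client derives its own camera locally from the shared
        # head pose plus its wall orientation (per-node state), renders,
        # and acknowledges the frame number back to the server.
        _camera = (self.wall_id, msg["head"])
        return msg["frame"]

class Server:
    def __init__(self, clients):
        self.clients = clients
        self.frame = 0
        # Shared state must be identical on every node each frame
        # (object transforms, animation time, random seeds).
        self.shared_state = {"object_pos": (0.0, 0.0, 0.0)}

    def run_frame(self, head_transform):
        self.frame += 1
        msg = {"frame": self.frame,
               "head": head_transform,
               "state": self.shared_state}
        results = [c.render(msg) for c in self.clients]
        # Barrier: every client must finish this frame before buffers swap,
        # otherwise the stereo image tears across walls.
        assert all(r == self.frame for r in results)
        return self.frame

server = Server([Client(w) for w in range(12)])  # 12 render nodes at I3DVL
server.run_frame(head_transform=(0.0, 1.7, 0.0))
```

The key design point mirrored here is the split between shared state (broadcast once per frame) and per-node state (computed locally from the shared data), which keeps network traffic within the per-frame budget.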
The server controls input and output devices such as manipulators and the tracking system, maintains network connections, and sets up the main camera system based on external sensors. In our CAVE there are 12 camera positions, two per wall. First these cameras must be set up; then the transformations coming from the eye-tracking system are applied to them, so that the cameras react to the movement of the user's head. This transformation is multiplied by the data coming from the manipulator device and the VirtuSphere locomotion platform, so that the user can move around the virtual world. Frameworks usually provide configuration files for displays and control devices, which shortens configuration time on different platforms. Whenever you want to use a framework function or an external library, make sure it will work in a distributed environment: is there any way to share the state of its objects? This is a requirement for developing CAVE applications. You will often have to implement functionality yourself, because most libraries are not designed for distributed environments.

5.1. I3DVL as a complete platform
I3DVL consists of a six-wall CAVE and a spherical walk simulator called VirtuSphere. Each wall is a 3.4 m square and displays the image from two projectors; the two images are blended with a 480 px edge-blending zone in the middle, split horizontally. Inside the CAVE the VirtuSphere locomotion platform is installed, and it can be removed when not needed. The VirtuSphere is a 3.05 m semi-transparent plastic mesh sphere which the user enters and walks in to move through the virtual world; technically, the VirtuSphere behaves like a mouse. Active stereo 3D is provided either by NVIDIA 3D Vision Pro or by an Infitec Barco system; each requires different glasses and drivers. The glasses additionally carry markers for the tracking system, and cameras with IR sensors detect head movements in real time. There is also an 8-channel sound system based on eight speakers plus one subwoofer. Applications run on 12 computers, with 2 additional machines in the control room. The computers are connected by fibre and copper networks, with an additional independent copper network for 3D synchronisation. Together, this makes up the current configuration of I3DVL.

5.2. Creating virtual reality applications for CAVE
Virtual reality applications are most often created in 3D technology. Such applications typically consist of many elements: the 3D scene, the rendering and display system, user interaction, physics or other laws of nature, movement and animation of scene elements, audio and surround sound, and special effects such as fog, rain or post-effects like motion blur [20]. Equally important are components at a lower level: increasing system efficiency through multiple threads and optimal algorithms, distribution and synchronisation of data between cluster nodes, generating and displaying the stereo 3D image, and GPU utilisation, including the advanced capabilities of the latest graphics cards exposed through mechanisms such as shaders [21].
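Section 5 noted that the head-tracking transform is multiplied by the manipulator and VirtuSphere data to position the cameras; this is simply composition of homogeneous transformation matrices. A language-agnostic sketch in Python (the multiplication order shown is one convention; frameworks differ):

```python
def mat_mul(a, b):
    # 4x4 homogeneous matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def transform_point(m, p):
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Head tracking reports the eye 0.2 m to the left; walking in the
# VirtuSphere has moved the user 3 m forward in the virtual world.
head = translation(-0.2, 0.0, 0.0)
navigation = translation(0.0, 0.0, -3.0)
camera = mat_mul(navigation, head)   # order is a framework convention
assert transform_point(camera, (0.0, 0.0, 0.0)) == (-0.2, 0.0, -3.0)
```

Real systems also include per-wall rotations and projector frusta, but the chaining principle is the same.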
Solutions for the CAVE must additionally consider features such as blending the edges of images projected by multiple projectors onto a single plane (edge blending), generation and synchronisation of the stereo image by a cluster of machines with multiple graphics cards and projectors, detection of the observer's head position (head tracking) and generation of the 3D image from that position, and support for additional peripherals such as gloves or other 3D manipulators, like the VirtuSphere locomotion platform used at the Gdańsk University of Technology; such support is usually obtained by writing and attaching one's own device driver. Not every CAVE system or application has to meet all these requirements, but advanced ones do. Some systems are dedicated to a single operating system, while others are multi-platform, which broadens their field of application. There are libraries offering full or partial coverage of the functionality described above, which can be used in newly created applications; alternatively, there are editors with a user interface that greatly help in creating advanced applications covering all aspects of CAVE development. Such editors offer a WYSIWYG interface and scripting languages that allow changes to be made in the running application without recompiling the script or the whole application, which significantly speeds up development. Finally, we can write a complete framework, editor or CAVE application from scratch. A key element of the final visual effect is the way the graphics are rendered. Low-level graphics can be generated on the CPU or the GPU. Currently most graphics cards have very powerful GPU computing units designed for efficient graphics generation and can display far more complex scenes in real time than a CPU. Virtual reality applications require real-time interaction, and the same requirement applies to displaying the image; for this reason the graphics are generated on the GPU rather than the CPU. Consequently, virtual reality applications for the CAVE are mostly created on top of APIs such as OpenGL or DirectX. These two APIs form the backbone of all existing libraries, frameworks and engines for applications using 3D graphics, including CAVE solutions [22].

5.2.1. Existing libraries and frameworks
CAVE installations are expensive investments, often costing hundreds of thousands or even millions of dollars. Because of this high cost there are not many such platforms in the world; they are found mostly at universities and in military facilities. There is open-source software, developed mainly by universities, and a few rather expensive commercial solutions on the market.

5.2.1.1. Graphics APIs
At the lowest level of rendering there are interfaces such as OpenGL and DirectX 2 [23]. An API at this level is a very thin layer, specialised purely in generating and processing computer graphics on the GPU, with a direct connection to the graphics card through its driver. This layer is sometimes described as a state machine: it never sees the whole scene, only base elements such as triangles, from which the scene is built up and displayed without any knowledge of their past or future. Shaders are also available here, providing the ability to perform operations on the GPU in streams across many cores simultaneously. Because of this limited knowledge of the scene, a higher-level layer is needed to create the scene, handle lighting, manage input and output devices and provide interaction in the virtual world.
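The "state machine" character of such an API can be caricatured in a few lines: the layer remembers only the current state and a flat stream of primitives, which is exactly why a scene layer on top of it is necessary. The class and method names below are illustrative, not actual OpenGL calls:

```python
class ImmediateAPI:
    """Caricature of a low-level 'state machine' graphics API: it knows
    only the current state (e.g. the bound colour) and the triangles
    streamed in -- nothing about objects, their past or their future."""
    def __init__(self):
        self.current_color = (1.0, 1.0, 1.0)
        self.submitted = []              # flat stream of (color, triangle)

    def set_color(self, rgb):            # a state change, like glColor
        self.current_color = rgb

    def draw_triangle(self, tri):        # like glBegin/glVertex/glEnd
        self.submitted.append((self.current_color, tri))

api = ImmediateAPI()
api.set_color((1.0, 0.0, 0.0))
api.draw_triangle(((0, 0, 0), (1, 0, 0), (0, 1, 0)))
api.draw_triangle(((0, 0, 0), (0, 1, 0), (0, 0, 1)))
# The stream records per-triangle state, but has no notion of *which
# object* the triangles belong to -- that bookkeeping is exactly the
# job of the higher-level scene layer.
assert len(api.submitted) == 2
assert api.submitted[1][0] == (1.0, 0.0, 0.0)
```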
Knowledge of the whole scene makes it possible to optimise application performance. We can choose solutions dedicated to a specific class of applications, e.g. games, or general-purpose solutions, e.g. scene-graph engines; for CAVE installations the general-purpose frameworks are better suited. At the next stage we can use or create an editor with a user interface, which shortens development time and makes creating the application easier. In an editor we can build and manage our scene graphically, often through a WYSIWYG interface. Editors also simplify the configuration of displays, network, tracking and devices needed to run the application in the CAVE.

5.2.1.1.1. DirectX
Microsoft DirectX is used mainly in games. It is a stable standard for which new versions appear rarely, which guarantees that applications keep working for a long time on many computers. 2

2 Mantle, an even lower-level graphics API by AMD, is currently under development; Microsoft is also working on adding low-level instructions to the new DirectX 12, and OpenGL plans similar capabilities. Since these APIs are not yet available, they are not described here.
The downside is that DirectX is not an open standard, and new functions or improvements take a long time to arrive. DirectX works only on Windows and Xbox. It is designed to work mainly with a single window, although it supports more than one. Its main advantage is that NVIDIA 3D Vision works on GeForce GPUs in a heuristic way, enabling stereo 3D at the cost of slowing the application down on low-end graphics cards. DirectX does not support hardware stereo 3D, and few scientific libraries exist for it. This is the main reason it is rarely used in professional 3D applications such as those run in a CAVE [24].

5.2.1.1.2. OpenGL
OpenGL, developed by the Khronos Group, is an open standard for 3D graphics. Thanks to its openness, its many additional libraries, hardware stereo 3D support, multi-display capability and availability on different systems (Windows, Linux, Mac and UNIX), it is the API most frequently chosen for advanced 3D applications. The drawback is that, unlike with DirectX, not every graphics card supports all extensions of the library, so applications using particular extensions may not work on every computer. These compatibility issues contributed to OpenGL often being replaced by DirectX in games. The situation differs on mobile devices, where OpenGL ES is the standard: only the newest Windows Phone supports DirectX, while most mobile devices, based on Android and iOS, support OpenGL ES. Almost all frameworks described further are based on OpenGL.

5.2.1.2. Scene graph engines
Scene graph engines provide the means to create and manage the whole scene displayed in a 3D virtual simulation. They are usually general-purpose and easy to integrate into any application. They allow us to manage the virtual world, add, remove and transform objects, generate the scene in many threads in a cluster environment, and display it on many devices such as monitors, HMDs or projectors.
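The parent-to-child propagation such engines provide can be sketched minimally as follows. This is a toy model, not the API of OpenSG or OpenSceneGraph, and real engines propagate full transformation matrices rather than plain offsets:

```python
class Node:
    """Minimal scene-graph node: a local offset plus children.
    Moving a parent implicitly moves every node beneath it."""
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name, self.offset, self.children = name, offset, []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, parent=(0.0, 0.0, 0.0)):
        # Traverse the graph, accumulating offsets root-to-leaf.
        pos = tuple(p + o for p, o in zip(parent, self.offset))
        result = {self.name: pos}
        for c in self.children:
            result.update(c.world_positions(pos))
        return result

root = Node("root")
car = root.add(Node("car", offset=(10.0, 0.0, 0.0)))
wheel = car.add(Node("wheel", offset=(1.0, -0.5, 0.0)))
assert root.world_positions()["wheel"] == (11.0, -0.5, 0.0)

# Translating the car drags the wheel along, as in a real scene graph:
car.offset = (20.0, 0.0, 0.0)
assert root.world_positions()["wheel"] == (21.0, -0.5, 0.0)
```

A DAG-capable engine additionally allows a node to appear under several parents, so one wheel geometry can be instanced four times.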
A scene graph represents the logical connections between elements of the scene and is used for performance management and rendering. Most often the scene is represented by a hierarchical graph with one root node and child nodes, where each node may contain further nodes. In advanced systems a node may have several parents, which turns the structure into a directed acyclic graph (DAG). By default, every operation performed on a parent is also performed on all of its children. Scene graph systems are often described as retained (deferred) rendering systems: they do not merely pass content on for rendering but keep it in a buffer, which enables additional transformations and optimisations, e.g. multi-threading, just before rendering. These systems are usually object-oriented, so their functionality can be extended by implementing modules and plug-ins, which makes them easy to scale. OpenSG and OpenSceneGraph are open-source solutions often used for building VR and CAVE systems. NVIDIA has its own scene-graph framework called SceniX, which is very powerful and provides a real-time raytracer. SceniX is optimised for NVIDIA graphics cards and its source code is not available. The problem with SceniX is that it is not prepared for CAVE use out of the box, and no libraries currently exist that integrate SceniX with CAVE installations; the only way to use SceniX in a CAVE is to write one's own module for such environments.

5.2.1.2.1. OpenGL Performer
OpenGL Performer, created by SGI, is one of the first scene-graph management systems. Initially it was available only for SGI graphics stations running the IRIX operating system. SGI's main business was hardware, not software, and OpenGL Performer's source code was never shared. In the meantime open-source alternatives such as OpenSG arose, where everyone could add their own modules. For these reasons OpenGL Performer disappeared from the market and is now outdated [25].

5.2.1.2.2. OpenSG
OpenSG is an open-source scene-graph management system for creating real-time 3D virtual reality applications. It is available for Windows, Linux, Solaris and MacOS [26] and builds on OpenGL. The system has been developed over many years: the first version was published in 2001, and work on the second version began in 2007. On sourceforge.net we can observe that the last release was published in March 2013 and has been downloaded only once since then, although changes are added to the git 3 repository almost every day. Its top advantages include cluster and multi-thread support in a rather easy way at the framework level; the ability to render graphics across several computers and graphics cards also undoubtedly belongs to the advantages of this solution. Thanks to its open and available code it is still being extended. OpenSG is not an application, just a library to use in our own applications. It can be combined with VRJuggler and OpenTracker, which makes it easier to prepare applications for CAVE installations. The biggest improvement of OpenSG 2 over 1.8 is the reworked architecture, which now relies on shaders.
Additionally, programming is simplified because some thread synchronisation now happens automatically. Pointer handling has been improved by introducing new pointer types, the properties of geometry have been changed, and many internal implementations have been improved, rebuilt or redesigned. The new version adds support for NVIDIA CUDA, Cg, EXR, NURBS, VTK and Collada. All these changes make the newer version of OpenSG worth using; most importantly, OpenSG 2 is faster than its predecessor. The documentation for version 1.8 comprises the roughly 200-page OpenSG Starter Guide, which covers all the important topics related to the library; in addition, the API of all classes and the division of the framework into modules are documented, and some books about OpenSG are available on the market. Unfortunately the documentation for version 2 is somewhat neglected, and much of it is simply copied from the first version.

3 Address: git://git.code.sf.net/p/opensg/code.
Most of the sample applications in OpenSG 2 were simply carried over from the previous version, and no more advanced examples are provided. Therefore I attached presentations of example programs for both OpenSG 1.8 and 2. OpenSG 1.8 originally ships its example applications as Visual Studio 2005 projects: 22 examples are available for download and seven more come with the OpenSG source code. I converted every example to Visual Studio 2012 and included them on the attached DVD. OpenSG 2, on the other hand, provides libraries compiled for Visual Studio 2010, both for the framework and for its supporting libraries. The first full compilation on my computer took about 6 hours. The OpenSG project is managed through CMake. For OpenSG 1.8 the compiled libraries take 25 MB (lib) and 15 MB (dll); for the second version, respectively 20 MB and 120 MB (plus a few extra megabytes for some extensions). The dependent libraries for 1.8 take about 30 MB (lib) and 5 MB (dll), whereas for the second version they weigh 600 MB (lib) and 30 MB (dll).

5.2.1.2.3. OpenSceneGraph
OpenSceneGraph (OSG) is one of the most frequently used scene management systems in the world. It is used, among others, by Boeing in a flight simulator, in NASA's Earth simulator, in the FlightGear flight simulator, and by companies such as Sony and ESA in their projects. In spite of its advanced features it is fairly simple to use. The first version of OpenSceneGraph was created in 1998 by Don Burns, who had previously worked at SGI on their scene graph, OpenGL Performer. In the meantime he created a scene-graph solution named SG, which became the prototype of OSG; in 1999 the project was officially named OpenSceneGraph [27]. The entire framework is based on several core and optional libraries, while dynamic plug-ins are loaded on demand as dll files, which makes writing applications simpler. The framework has a modular structure.
The basic modules include scene operation management and graph building; a math module with vector and matrix implementations; object-oriented multi-threading management; mechanisms for managing and streaming 2D and 3D files, together with components for dynamically loading parts of the graph to handle large scenes; and mechanisms to traverse the graph, modify its elements and issue OpenGL instructions. Additional modules provide: animation, including skeletal animation and morphing based on key frames and channels; a module for 3D special effects; a multi-platform GUI system with device support; manipulators for objects in space (rotation, scaling and translation); a particle system for rendering explosions, fire, smoke, etc.; libraries for adding shadows; a terrain generation system based on height maps; vector text rendering in 2D and 3D based on FreeType fonts; integration with the windowing systems Win32, X11, MacOS and others; volume rendering; and integration with the Qt library, which for example allows Qt components (such as a web browser) to be embedded in space. For my tests I used the latest version, OpenSceneGraph 3.3.1, a developer release published on 29 January 2014. A new version is released every few months; the previous stable version, 3.2.0, was released about half a year earlier. On this basis it is easy to conclude that the framework is still being actively developed. Supporting libraries are provided for Visual Studio 2005 through 2013, Linux, Mac OS X and Android; at the time of writing, compiled binaries of the framework itself were not available. Compared to OpenSG, this framework is managed better in terms of releases to users: updates are frequent, the website is much better designed, and all code is hosted on the project's own servers. Preparing and setting up the library with CMake, in contrast to OpenSG, went smoothly, and the compilation itself encountered no additional problems; overall this looks like a more solid release than OpenSG. All the documentation is based on several books: the OSG website offers too little information to learn the library from, so we are, in effect, forced to buy the books. There are a few titles to choose from, e.g. the "Beginner's Guide" first and the "Cookbook" later. They are designed to teach the framework from the beginning and are therefore written in a clear and well-arranged manner; they fully compensate for the lack of documentation on the website. The books also describe how to configure and build the library and how to prepare projects with CMake for Visual Studio. You should start by reading them, then analyse the accompanying examples, and only then start creating new solutions. For the purposes of this work I created all of the sample applications described in the books. A quite large number of sample applications is provided together with the library, showing a wide range of available functionality; these examples are much more advanced than the samples provided with OpenSG. By pressing the 's' key we can turn statistics on and off and switch between the various statistics modes.
The statistics show the number of frames per second, the load of the threads involved in rendering the scene, and information about the complexity of the scene, including the number of its elements, nodes, vertices and even object instances. Before compilation at least the following environment variables should be set:
 OSG_ROOT - pointing to the root directory of the OSG,
 OSG_NOTIFY_LEVEL=NOTICE - setting the level of debug messages for OSG,
 OSG_FILE_PATH - pointing to the attached files containing resources for the sample applications.

5.2.1.2.4. NVIDIA SceniX - NVSG
NVIDIA's scene management implementation is largely dedicated to their own products and is at its strongest when squeezing the last bit of power out of NVIDIA graphics cards, in particular the advanced capabilities of NVIDIA Quadro cards [28]. A strong element of the framework is its cooperation with a range of advanced NVIDIA libraries for scene rendering: a ray-tracing module, batch processing, and shader-level programming of the graphics card. The strength of this framework is indicated by its use in systems such as Autodesk Showcase [29], which provides photo-realistic visualisation of and interaction with scenes prepared in AutoCAD or Autodesk Inventor, and DeltaGen by Realtime Technology AG (RTT), which is used for visualisation of the highest quality, mainly of cars. Unlike competing solutions, this framework is enhanced at the shader layer, which is characterised by remarkable speed and quality of the generated image; shaders are built on the CgFX language [30]. It also uses an interactive ray tracer based on OptiX or RTFx (the Ray Tracing Effect interchange Format). The framework is available only for Windows and Linux, in 32- and 64-bit versions, without source code. Pre-compiled libraries are available for SceniX 7.3 from August 2012, for use with Visual Studio 2008 and 2010. The update history shows that the framework is updated about once every 1.5 years (and the last available version is two years old). About 2 GB of free disk space should be prepared. To use the library in VS 2010 you need to install the additional packages "Visual Studio 2010 redistributables" 4 and "Service Pack 1" 5 (otherwise CMake cannot be set up properly for a VS 2010 project). There are some known issues under Linux, where some operations may result in errors; on 64-bit Windows, textures in TIFF format cannot be loaded (which should not be a problem, because textures can be loaded in other formats). To compile the examples with CMake, the Qt and wxWidgets frameworks must be prepared. To compile wxWidgets 2.8.12 locally it is necessary to comment out an include in the file windows.cpp:

#if !defined __WXWINCE__ && !defined NEED_PBT_H
// #include <pbt.h>
#endif

and add the value "_ALLOW_KEYWORD_MACROS" to the preprocessor definitions.

Fig. 5.1. NVIDIA SceniX viewer

4 You can download it from: http://www.microsoft.com/download/en/details.aspx?id=5555.
5 You can download it from: http://www.microsoft.com/en-us/download/confirmation.aspx?id=23691.
The viewer is a complete application based on the Qt framework, with source code available. It allows you to view scenes, 3D graphic objects and components.

5.2.1.2.5. Summary
Scene graph engines help in developing CAVE applications. Over the years there have been significant changes in graphics card architectures, which forced serious changes in these frameworks; consequently the most important ones today are the modern frameworks that can exploit the full power of current graphics cards. At the moment the most valuable are OpenSG and OpenSceneGraph, which are open source, and NVIDIA SceniX. The comparison table below also includes the ViSTA framework, because of the scene graph engine it contains, described further in this work.

Table 5.1. Scene graph comparison

Feature | OpenSG 1.8 | OpenSG 2 | OpenSceneGraph | ViSTA | SceniX
Scene graph | x | x | x | x 6 | x
Real-time graphics | x | x | x | x | x
Open source | x | x | x | x | -
Licence | LGPL | LGPL | OSGPL | LGPL | own 7
Based on | OpenGL | OpenGL | OpenGL/OpenGL ES | OpenSG | OpenGL/DirectX
Supported platforms | Windows, Linux, MacOS X, Solaris | Windows, Linux, MacOS X, Solaris | Windows, Linux, Mac OSX, FreeBSD, Solaris, Android | Windows, Linux, MacOS X | Windows, Linux
Extensibility | x | x | x | x | x
Multithreading | x | x | x | x | x
Clustering | x | x | x | x | x
Creating simple geometry | x | x | x | x | x
Mouse and keyboard events | x | x | x | x | x
Sample applications and tutorials | x | x | x | x | x
Documentation and books | x | x | x | - | x
API documentation | x | x | x | x | x
Direct OpenGL drawing - glBegin() | x | x | x | x | x
Materials | x | x | x | x | x
Loadable scene files 8 | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | VRML97, OBJ, dxf, raw, stl, 3ds, dae, OFF, BIN, COLLADA | .3dc, .3ds, .obj, .ac3d, .bsp, .dae, .sw, .dxf, .fbx, .geo, Inventor, .ive, .logo, .lwo, .lws, .md2, .ogr, OpenFlight, .osg, .pfb, .shp, .stl, .dds, VRML, .x | VRML97, OBJ, dxf, raw, stl, 3ds, OFF, BIN | COLLADA, COLLADA FX, VRML2.0/WRL, OpenFlight, OBJ, 3DS, PLY
Picking objects | x | x | x | x | x
Lights | x | x | x | x | x
Cameras | x | x | x | x | x
GLSL shaders | x 9 | x | x | - 10 |
Stereo 3D | x | x | x | x | x
OpenGL extensions | x | x | x | x | x
Scene statistics | x | x | x | x | x
Shadows | x | x | x | x | x
NURBS | - | x 11 | x | - | x
OpenEXR 12 | - | x | x | - | x
Cg | - | x | x | - | x
CgFX | - | x | ? | - | x
NVIDIA CUDA | - | x | x | - | x
LOD | x | x | x | x | x
Viewports | x | x | x | x | x
Cube map | x | x | x | x | x
Graph traversal | x | x | x | x | x
VTK | - | x | x | x | -
Collada | - | x | x | - | x
CMake | x | x | x | x | -
VS libraries | to compile | to compile | to compile | to compile | 2008 or 2010
GUI toolkit | GLUT, Qt, wxWidgets, Win32 | GLUT, Qt, wxWidgets, Win32 | GLUT, Qt, wxWidgets, Win32 | GLUT | GLUT, Qt, wxWidgets, Win32
NVIDIA OptiX | - | - | - | - | x
RTFx | - | - | - | - | x
RT raytracer | - | - | - | - | x
Ambient occlusion | - | - | - | - | x
Mobile | - | - | Android/OpenGL ES | - | -
Lib size | 25 MB | 20 MB | 8 MB | 4.5 MB | 12 MB
Dll size | 15 MB | 120 MB | 44 MB + 780 MB | 26 MB + 5.5 MB | 16 MB
Support lib size | 30 MB | 600 MB | 1.6 GB | - |
Support dll size | 5 MB | 30 MB | 64 MB | 32 MB |

6 ViSTA is based on OpenSG 1.8 (work is in progress on an OpenSceneGraph implementation).
7 You can read the licence during installation.
8 Each framework supports further file formats through custom plug-ins.
9 GLSL is available through the ShaderChunk object, which is experimental.
10 Shaders are used as materials (an extension of OpenSG) or for particle system generation.
11 Through the OpenNurbs library.
12 OpenEXR is a high-dynamic-range (HDR) image file format.

As the comparison table shows, the functionality of the selected scene graph engines is very similar; the base functionality is almost the same for all of them. The main difference lies between NVIDIA SceniX and the rest: SceniX comes without source code, but it is very powerful, specialised for NVIDIA graphics cards, the only one that works with both DirectX and OpenGL, and the only one with a real-time ray-tracing engine, making it the most advanced scene graph engine. OpenSceneGraph (OSG) is the only one with mobile support; it also contains the largest number of additional modules and natively supports shaders, which makes it a good choice too. OpenSG now looks like a somewhat forgotten framework and is currently not as functional as OpenSceneGraph. Finally, the ViSTA framework rests on old foundations, namely OpenSG 1.8, which makes it somewhat dated at the moment.

5.2.1.3. Frameworks for CAVE solutions
This chapter describes frameworks that extend the possibilities of scene management engines. The extension concerns above all rendering the image in several instances across multiple computers and multiple GPUs. In addition, these systems synchronise the user's camera with the head tracking system, using mechanisms that detect the position of the head in order to render the image correctly. They also manage the various manipulators, so that each render server receives consistent information about their properties.
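The synchronisation these frameworks perform boils down to keeping some object state replicated across all render nodes while other state stays node-private (such as each wall's camera). A toy model of this split (names are hypothetical; real frameworks serialise state over the network rather than copying dictionaries):

```python
class ClusterNode:
    """Toy model of per-node state in a render cluster: 'shared' fields
    are replicated from the master every frame, while 'local' fields
    (e.g. this node's camera frustum) are never sent over the network."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.shared = {}                 # replicated each frame
        self.local = {"camera": None}    # private to this node

def broadcast(master, nodes):
    # Serialize the master's shared state once and apply it to every
    # render node (the actual network transport is omitted here).
    snapshot = dict(master.shared)
    for n in nodes:
        n.shared = dict(snapshot)

master = ClusterNode("master")
walls = [ClusterNode(f"wall-{i}") for i in range(6)]
for i, n in enumerate(walls):
    n.local["camera"] = f"frustum-for-wall-{i}"   # differs per node

master.shared["frame"] = 42
master.shared["door_angle"] = 31.5
broadcast(master, walls)

assert all(n.shared["door_angle"] == 31.5 for n in walls)     # consistent
assert walls[0].local["camera"] != walls[1].local["camera"]   # per-node
```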
Using the presented solutions we can write an application with distributed rendering across separate computers acting as cluster nodes, in which the output image, the input devices and the events are all synchronised. These frameworks provide advanced mechanisms for network connections and object serialisation. Sometimes a given object must be readable by all rendering units (e.g. one containing initialisation data), and sometimes each rendering unit must hold its own state, not shared with the other machines (e.g. for storing the configuration of that unit's camera).

5.2.1.3.1. ViSTA
The ViSTA framework (VIRTUAL REALITY for SCIENTIFIC TECHNICAL APPLICATIONS) was created by the Virtual Reality Group at RWTH Aachen University in Germany, which has deployed several applications in its CAVE. The framework is available as an open-source project and has been developed for about 15 years. During this time several generations of graphics card architectures have passed, and the framework itself has changed strongly. Initially it was available on supercomputers running SGI IRIX, HP-UX and Sun Solaris; now it is available for Windows, Linux and Mac. At the moment, at least as far as the latest version is concerned, nobody outside RWTH Aachen has used this solution [31]. The biggest advantages of the framework are its integration with various existing libraries, which broadens its area of application, and its full support for CAVE systems: stereoscopic 3D display combining images from multiple projectors (image blending), tracking and adjusting to the position of the user's head (head tracking), computation on multiple cluster nodes, and multiple input-output devices. All these features give us the basis for creating a dedicated CAVE application.

Main features of the ViSTA framework:
 scene management,
 support for input and output devices (e.g. manipulators, tracking cameras and haptic devices),
 based on OpenSG 1.8 (OpenSceneGraph is to be supported in the future),
 support for cluster computing (VistaDataFlow),
 support for multiple screens (including video monitors and stereo 3D),
 tools for managing threads, links, files, network, etc.,
 the ability to write and add one's own drivers for input and output devices,
 integration with many available open-source libraries,
 its own mechanisms for handling the keyboard (mainly via events),
 creation of basic 3D geometric solids,
 import of 3D objects and scenes created in other applications,
 colouring and texturing of objects,
 lighting support and management,
 text display both in 3D space and on the GUI layer,
 interactivity of created objects (e.g. an object can be selected and moved to another location),
 camera creation and management (setting its parameters, location, etc.),
 overlay layers containing other scenes, both 2D and 3D, rendered in real time,
 events tied to phenomena in the application (e.g. an event is generated when the position of a given object is obtained),
 communication with other applications in C/C++,
 debugging tools that display information both on the console and on the scene.

It integrates with the following systems:
 OpenSG - management and display of a 3D scene in real time,
 OpenSG Ext - OpenSG extensions (e.g. particle system or fog),
 VTK (The Visualization Toolkit) - adds many functions for working with graphics,
 OpenGL - native OpenGL command execution within a node,
 Python - dynamic scripting.

Initially the biggest obstacle to starting with the framework is the total lack of documentation. There are only comments in the source code, API documentation generated from the classes, and several very simple sample applications showing the framework's basic capabilities. Knowledge of OpenSG 1.8 helps a lot, because much of the framework's functionality extends and uses its mechanisms. Configuration is based on text files, and changes to them can be detected in the running application. This configuration allows the application to be moved easily between environments, e.g. between a developer workstation with two monitors and a CAVE-like system: you only need to specify how many walls the projection system is composed of. Here you can also configure the network addresses for communication between the computers of the cluster and the input-output devices. This separates the application itself from its configuration, which depends on where it is to be launched. Configuration files can include other files, so it is possible to prepare configuration modules and plug them in to streamline the configuration. A key element of the framework is its scene management system, based on OpenSG 1.8, which is described in the chapter devoted to it; this is the main mechanism responsible for displaying the scene in real time. OpenSG sends data directly to the GPU via OpenGL, which then renders the held data into an image. OpenSG 1.8 was completed in 2007, which greatly limits what the internals of ViSTA can achieve; hope lies in the ongoing work to replace the old OpenSG 1.8 with the competing OpenSceneGraph.
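To illustrate the idea of such file-based configuration, a hypothetical INI-style sketch follows; the section and key names are invented for illustration and do not reproduce ViSTA's actual configuration keys:

```ini
; Hypothetical sketch of a ViSTA-style text configuration, showing how
; the same application can be retargeted from a desktop to the CAVE by
; editing files only (all names below are invented, not ViSTA's API).
[SYSTEM]
DISPLAYSYSTEM = CAVE_PG           ; switch to DESKTOP_DEV on a workstation

[CAVE_PG]
WALLS          = FRONT, LEFT, RIGHT, BACK, FLOOR, CEILING
STEREO         = ACTIVE
CLUSTER_MASTER = 192.168.1.1
CLUSTER_SLAVES = 192.168.1.2, 192.168.1.3   ; one entry per render node
TRACKING       = IR_CAMERAS

[DESKTOP_DEV]
WALLS          = WINDOW_1, WINDOW_2
STEREO         = OFF
```

The application code stays untouched; only the configuration decides how many walls, which cluster nodes and which devices are in play.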
At this point I just want to point out that ViSTA is currently not able to fully exploit the potential of the latest computers. The official website mentions additional libraries (VistaAddonLibs) that add further functionality, among others physics and collision detection, soft-body simulation and sound support, but these libraries are not available for download. Without them, such functionality has to be implemented from scratch or provided by independently integrating other existing libraries. For the purposes of this document, I described how to build both the ViSTA framework and its supporting libraries, as well as the sample applications. I attached a workspace containing all the projects, the source code and compiled versions of the applications. I also created a mini-framework "FirstTry" on top of ViSTA to make it easier to create new applications with this technology (located on the accompanying CD in the directory "workspacemyvistaCAVE_PG_VS2012FirstTry"). The mini-framework consists of several modules: a communication module for interfacing with external applications; a keyboard controller providing movement and transformation of objects, allowing interaction with them; a scene stage manager, which allows adding further objects to the scene; and a text module, which allows adding text in both 2D and 3D. The main file of the framework is Application.cpp, which sets up and
initializes the initial state of the application. In this way I prepared a solution that divides the ViSTA framework into functional modules, so a new scene can be created with it quickly.

5.2.1.3.2. VR Juggler
VR Juggler is one of the first libraries specialised for implementing CAVE applications. It is a scalable system which supports complex multi-screen systems running on clusters. The flexibility of VR Juggler allows applications to run in many VR system configurations, including desktop VR, HMDs, and CAVE-like and powerwall-like devices. VR Juggler supports IRIX, Linux, Windows, FreeBSD, Solaris, and Mac OS X. The library contains Gadgeteer, a plug-in system supporting local or remote input devices. Configuration is based on .xml files. It can work standalone as a scene graph based on OpenGL, or it can cooperate with existing scene graph engines such as OpenGL Performer, OpenSG and OpenSceneGraph. This sounds good, but unfortunately it does not work with the newest versions of those engines and it cannot be compiled in 64-bit mode. This solution is simple to implement and to configure for work in a CAVE, but it is outdated [32].

5.2.1.3.3. Equalizer
Equalizer is a framework for parallelizing OpenGL-based applications [33]. Thanks to it, multiple graphics cards, processors and even computers can be used to improve the efficiency and quality of a running application. Applications based on this framework can run without modification both on a single computer and on virtual reality systems consisting of a number of computers. It is a proven solution, because many open-source applications and commercial products are based on it, including well-known applications such as RTT DeltaGen or the 3D player Bino. It is available for Windows, Linux and Mac. The solution is based on GLUT.
At the moment the creators are working on adding an administrative library, which will allow adding and configuring new windows and changing their templates from a separate application. There is also the Sequel project, which simplifies the process of creating applications with Equalizer by introducing a module mechanism. Sequel can reduce the amount of code written by a ratio of about 1 to 10. It is recommended to start with Sequel and then move to plain Equalizer for more advanced projects. The main capabilities of the framework include distributed rendering based on clusters, stereo 3D support, head tracking, support for virtual-reality HMD helmets, synchronized display on multiple screens, software edge blending, automatic configuration as well as configuration based on ASCII files, compression of images sent over the network, a load-balancing mechanism for rendering units and, importantly for the I3DVL project, support for InfiniBand networking and G-Sync hardware image synchronization (using the "NV group" and "NV barrier" barriers). The supported modes of parallel image rendering are:
 2D (SFR - Sort-First Compounds) - each rendering unit renders a portion of the target image, which is displayed in a single window. This mode is used, for example, when 4 computers each render a quarter of the screen and the parts are then joined side by side to give the full-screen display,
 DB (SLC - Sort-Last Compounds) - each unit renders part of the scene in parallel, and the parts are then assembled into the whole image. In this mode there may be problems with anti-aliasing, transparency and shadows,
 Stereo Compounds - the image for each eye is assigned to an independent rendering unit. The resulting images are copied into the stereo buffer. This mode supports virtually every available stereo 3D mode, among others active stereo (quad-buffer), anaglyph stereo and multi-view 3D displays,
 DPlex Compounds (AFR or Time-Multiplex) - different frames are assigned to different rendering units, and the displayed image stream is reassembled from them. This method increases the number of frames displayed per second,
 Tile Compounds - similar to the 2D mode described above, with the difference that each rendering unit renders a few tiles from which the complete picture is created. Tile rendering uses queuing, which provides load balancing,
 Pixel Compounds - the image is split so that each unit renders a different subset of the pixels,
 Subpixel Compounds - separate samples are assigned to the rendering units to create effects such as anti-aliasing or depth of field, in order to speed up rendering of the desired effect.
For the 2D and DB compound modes we can take advantage of the "Load Equalizer", which, based on the actual resource utilization of the rendering units, adjusts the distribution of the image between them to enhance the rendering performance of the whole image.
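The 2D (sort-first) mode above is declared in Equalizer's ASCII configuration files as a tree of compounds. The fragment below is a rough sketch modelled on the examples shipped with Equalizer; the hostname and channel names are made up, and the exact syntax should be verified against the "Equalizer - Programming and User Guide". Two channels each render half of the destination view, and the remote half is transported back as a frame:

```
# Sketch: sort-first (2D) compound over two machines (illustrative names)
server
{
    config
    {
        appNode
        {
            pipe { window { channel { name "left" } } }
        }
        node
        {
            connection { hostname "render2" }       # second render machine
            pipe { window { channel { name "right" } } }
        }
        compound
        {
            channel "left"                          # destination channel
            compound { viewport [ 0 0 .5 1 ] }      # left half, rendered locally
            compound                                # right half on "render2"
            {
                channel "right"
                viewport [ .5 0 .5 1 ]
                outputframe {}
            }
            inputframe { name "frame.right" }       # composite the result
        }
    }
}
```

A "load_equalizer" attached to such a compound would let Equalizer move the viewport split dynamically, as described above.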
In contrast, the "View Equalizer" uses "Cross-Segment Load-Balancing": it adjusts the division of rendering work at the GPU level to achieve high performance. This option is recommended for CAVE-like systems, in order to pass free GPU resources to the rendering of images where such resources are missing. An interesting option is the "DFR Equalizer" (Dynamic Frame Resolution), which, in the case of an overload and too low a frame rate, renders the image at a lower resolution and then rescales it to the actual display resolution, improving performance at the cost of image quality. When the system is idle, or when computing resources are sufficient, the image is generated at full resolution. The "Monitor Equalizer" allows us to scale the image of a multi-screen display system and show it on the monitor of a single computer. The architecture of the solution is based on a client-server model; the Collage project is used to build the distributed application. Each client is controlled by the server. Both the client and the server can be the same application (executable). The server can be responsible only for the application logic (the so-called "master"), or it can participate in rendering the 3D image just as the clients do.
For several years binary libraries have no longer been supplied for Windows, so the framework should be compiled from source code. To compile the Equalizer source code you must either use the Buildyard package, which contains the entire framework with all dependencies, or do it manually, project by project, starting with vmmlib (a set of mathematical operations on vectors and matrices), Lunchbox (an abstraction of operating-system-level functionality, among others processor clocks, etc.) and Collage (a library managing connections at the network level); only then can Equalizer itself be compiled. Additional modules include Hardware Service Discovery (hwsd), which allows automatic detection and configuration of both the network and the GPUs of the rendering machines. A convenient feature when working with the framework is that multiple clients (individual renderers) can be run on a single computer for developer tests. For performance reasons, however, it is recommended that each client run on a separate computer. Applications can be started centrally from the server using the ssh protocol (in which case the application should reside in exactly the same folder on each client and on the server), or they can be started manually on the clients and then invoked from the server. The library is cleanly divided into logical modules: "eq::Node" represents a physical computer, "eq::Pipe" represents a GPU, and "eq::Window" is the window in which the image from a single computer is displayed; a window can be divided into separate parts, the channels, and an "eq::Channel" can split and send one image to multiple projectors. The class "eq::Canvas" configures the displayed image on any surface, including a CAVE.
When displaying on flat surfaces such as a powerwall, one frustum must be configured for all screens, while in systems in which the screens do not lie in a single plane the frustum should be set up for each screen separately. A properly configured frustum should be the same as the one used by the application in the calculation of the transformation matrix for the head tracking system. Each canvas is composed of segments, each of which represents the image projected onto one screen; a segment should be assigned to each screen or projector. Segments can overlap, for use with projectors that blend their images at the edges (Edge-Blend), and may have gaps, for use in so-called display walls. To configure the frustum we use the segments' viewports. For a passive stereo 3D installation we must configure a segment "eq::Segment" for each eye; the two channels (left and right) should be assigned to the same viewport. For active stereo 3D display, a framelock mechanism is used, based on software or hardware barriers. Only hardware barriers (e.g. those provided by G-Sync) guarantee proper and correct synchronization of the image at the right time.
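The head-tracked frustum mentioned above is an off-axis frustum computed from the wall geometry and the tracked eye position. The following self-contained sketch shows the underlying computation using the standard generalized perspective projection construction; it is independent of Equalizer's own classes, and the function and type names are my own:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector helpers for the sketch.
struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

// Off-axis frustum parameters, as passed to e.g. glFrustum.
struct Frustum { double left, right, bottom, top, near_, far_; };

// pa = lower-left, pb = lower-right, pc = upper-left corner of one wall,
// all in the same tracking coordinate system as the eye position.
Frustum wallFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 eye,
                    double nearPlane, double farPlane) {
    Vec3 vr = normalize(sub(pb, pa));     // wall right axis
    Vec3 vu = normalize(sub(pc, pa));     // wall up axis
    Vec3 vn = normalize(cross(vr, vu));   // wall normal, towards the eye
    double d = -dot(vn, sub(pa, eye));    // eye-to-wall distance
    double s = nearPlane / d;             // scale wall extents to the near plane
    Frustum f;
    f.left   = dot(vr, sub(pa, eye)) * s;
    f.right  = dot(vr, sub(pb, eye)) * s;
    f.bottom = dot(vu, sub(pa, eye)) * s;
    f.top    = dot(vu, sub(pc, eye)) * s;
    f.near_  = nearPlane;
    f.far_   = farPlane;
    return f;
}
```

With the eye centered in front of the wall the frustum is symmetric; as the tracked head moves, left/right and bottom/top become asymmetric, which is exactly why each CAVE screen needs its own frustum per frame.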
Fig. 5.2. The osgScaleViewer application integrates Equalizer with OpenSceneGraph

From my point of view, a very important element is the integration with OpenSceneGraph. Until 2010 a sample application, osgScaleViewer, was prepared and provided with Equalizer to show this integration. The project renders an OSG node by loading a 3D object, such as the cow shown in Fig. 5.2. The example is an extension of the eqPly demo application, with advanced management of distributed graphics rendering at the level of multiple clusters. In addition, some of the OSG functions have been replaced by their Equalizer counterparts, so you need to learn the proper way of developing applications based on both OSG and Equalizer. From the source code it can be seen that the framework is still being developed: the latest version of the code at the time of writing this thesis was released in late 2013, and the latest version of the accompanying documentation, "Equalizer - Programming and User Guide", is dated July 2013.

5.2.1.3.4. Summary
As you can see, there are not many frameworks dedicated to CAVE development. In the past there was VR Juggler, which is well suited to it, but unfortunately it has not been developed for a long time and does not support modern scene graph engines. The situation is similar with the ViSTA framework, which was well developed in the past but is currently outdated. That leaves Equalizer, which is very advanced and difficult to use. It works with OpenSceneGraph but does not work with OpenSG. The integration with OSG was done by a group of students, so the module is overgrown and very difficult to use, making it easy to do something wrong. But we do not have many more possibilities to choose from. Alternatively, we can use just the cluster modules of OpenSG or OpenSceneGraph and implement our own CAVE support functionality.
Table 5.2. Comparison of CAVE framework capabilities

Description                   | ViSTA   | VRJuggler | Equalizer
------------------------------|---------|-----------|-----------
Distributed computing         | x       | x         | x
Static Distributed Object     | x       | x         | x
Versioned Distributed Object  | x       | x         | x
Head Tracking                 | x       | x         | x
CAVE support                  | x       | x         | x
File based configuration      | x       | x         | x
CAVE simulation mode          | -       | -         | x
OpenGL Performer support      | -       | x         | -
OpenSG support                | v. 1.8  | v. 1.8    | -
OpenSceneGraph support        | - 13    | v. 2      | v. 2 and 3
Advanced scalability          | -       | -         | x

Table 5.2 shows that the basic functionality needed for CAVE solutions is provided by each library. The main differences lie in the support for modern scene graph engines and in the advanced functionality. Based on the supported scene graph engines there is one winner: Equalizer. Equalizer also offers a CAVE simulation mode, which displays five windows on your desktop and gives you some idea of what the resulting application will look like. In addition, Equalizer provides advanced scalability, which makes it possible to scale an application across different nodes by splitting the image.

5.2.1.4. Support libraries
As support libraries you can use physics, animation, scientific, graphics and other libraries. Here I want to focus on only two of them, which are used in some of the scene graphs: Cg is used in OpenSceneGraph and NVIDIA SceniX, while NVIDIA OptiX is used only in NVIDIA SceniX. Cg is well known and currently marked as deprecated, so I will write only a few sentences about it. OptiX, on the other hand, looks great and is still being developed. It is a not yet famous library which provides photo-realistic results almost in real time; that is the reason why it is mentioned here.

5.2.1.4.1. Cg Toolkit
The Cg toolkit is an obsolete framework for writing applications that run on the GPU, for OpenGL and DirectX, on Windows, Mac OS X and Linux. It is no longer developed or supported by NVIDIA. The last version comes from April 2012, the toolkit having been developed since 2005. In its place NVIDIA recommends using GLSL shaders directly, HLSL, or the recently developed nvFX [34], lua [35] or glfx [36].

5.2.1.4.2.
NVIDIA OptiX

13 There is a planned future integration of ViSTA with OpenSceneGraph.