FAKULTÄT FÜR INFORMATIK
DER TECHNISCHEN UNIVERSITÄT MÜNCHEN
Bachelorarbeit in Informatik
Analyzing the Velocity Fields in
Cosmological Simulations using the
ParticleEngine
Dimitar Dimitrov
Analyse von Vektorfeldern kosmologischer
Simulationen mit Hilfe der ParticleEngine
Author: Dimitar Dimitrov
Supervisor: Prof. Dr. Rüdiger Westermann
Advisor: Kai Bürger
Date: August 16, 2010
I assure the single-handed composition of this bachelor's thesis, supported only by the
declared resources.
München, August 16, 2010 Dimitar Dimitrov
Abstract
The cosmology group led by Prof. Avishai Dekel at the Hebrew University of Jerusalem
(HU), Israel, in collaboration with German scientists at MPE, MPA, and LSU in Munich, is
running state-of-the-art cosmological, gravo-hydrodynamical simulations to study galaxy
formation within the new standard cosmological model ΛCDM.
Their goal is to understand the complex, three-dimensional flow patterns of their
simulations using visualization tools developed by the visualization group at TUM, led
by Prof. Westermann. In particular, they have been using the ParticleEngine to visualize
the velocity field in individual simulation snapshots.
This thesis presents upgrades implemented in the ParticleEngine tool that directly
address the wishes of the astrophysicists at MPA and in Jerusalem and aim to further
enhance the usability of the tool for their research.
Contents

Abstract

I. Introduction and Background

1. Introduction
1.1. Experimental Flow Visualization
1.2. Computational Fluid Dynamics and Velocity Fields
1.3. Particle Tracing for 3-D Flow Visualization
1.4. Volume Rendering

2. The ParticleEngine
2.1. Basic Principles behind the ParticleEngine
2.1.1. Vector Field Data
2.1.2. GPU Particle Tracing
2.1.3. Additional Features

3. Thesis Goals

II. Analyzing Velocity Fields in Cosmological Simulations

4. Multiple Sources of Particles
4.1. Initial Architecture
4.2. The Probe
4.2.1. Source of Particles as a Stand-Alone Entity
4.2.2. Particles' Type
4.2.3. Adjustable Options and 'Change Detectors'
4.2.4. 4th Component Aware Operations
4.2.5. Shared Shader Variables and Effect Pools
4.3. Probe Management
4.3.1. The ParticleProbeContainer Class
4.3.2. The ParticleProbeController Class
4.3.3. Multi-probe Management Architecture
4.4. User Interface
4.4.1. Tracer Parameters UI Reference
4.4.2. Probe Parameters UI Reference
4.4.3. User Input Modes UI Reference
4.5. The Lense
4.6. Lense Management
4.7. Lense UI Reference

5. Transfer Function Editor
5.1. The Raycaster
5.2. The Editor
5.2.1. The Transfer Function Control
5.2.2. The TFEditor Class
5.3. The RaycastController Class
5.4. User Interface
5.4.1. Raycast Controller UI Reference
5.4.2. Raycaster UI Reference
5.4.3. Transfer Function Editor UI Reference

6. Fourth-component Recalculation
6.1. ParticleTracer3D class upgraded
6.2. Updating the Volume Texture
6.3. Recalculation Fragment Shaders
6.4. User Interface Reference
6.4.1. Supported Functions

7. Physical Units Display
7.1. Introduction
7.2. Implementation
7.3. Projecting the Physical Units on the Screen
7.4. User Interface Reference

8. Exporting Particles
8.1. Implementation
8.2. ParticleProbe's ExportParticles Method
8.2.1. Geometry Shader for Export
8.3. Multi-probe Export
8.3.1. Example Export

III. Results and Conclusion

9. Visualizations
9.1. Multi-probe Configurations
9.2. Direct Volume Rendering and Fourth-component Recalculation

10. Conclusion

Bibliography
List of Figures

1.1. Different methods for flow visualization
1.2. Images of CFD simulation visualizations [?].
1.3. A snapshot of a two-dimensional fluid with some of the velocity vectors shown [?].
1.4. On recent GPUs, textures can be accessed in the vertex units, and rendering can be directed into textures and vertex arrays. This and other features enable these GPUs to advect and render large amounts of particles [?].
1.5. Hurricane Isabel is visualized with transparent point sprites [?].
1.6. Different particle-based strategies are used to visualize 3D flow fields by the ParticleEngine. (Left) Focus+context visualization using an importance measure based on helicity density and a user-defined region of interest. (Middle) Particles seeded in the vicinity of anchor lines show the extent and speed at which particles separate over time. (Right) Cluster arrows are used to show regions of coherent motion [?].
1.7. Volume ray casting is a direct volume rendering technique to visualize volume data. A 2D image is produced by shooting a ray from the eye position into the volume and accumulating the sampled values along this ray onto the 2D image plane [?].
1.8. Images of isosurfaces rendered from a volume data set.
2.1. The ParticleEngine displaying a rough approximation of its underlying vector field.
2.2. The advection-rendering cycle of the ParticleEngine
2.3. User-defined probe injecting particles into the field. Partially transparent point primitives are used for rendering.
2.4. The ParticleEngine in Clearview mode. Two isosurfaces blended together using a user-defined 'lense'.
4.1. Example of an experiment utilizing a multi-probe configuration
4.2. Initial application architecture. The ParticleTracerBase class hosts all variables holding particle parameters. The advection and rendering steps are performed by methods of the class. The ParticleTracer3D class is responsible for managing the vector field data.
4.3. ParticleProbe class and its ParticleProbeOptions member
4.4. Some change detectors of the ParticleProbeOptions class
4.5. Adjustable options controlling the 4th component aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data
4.6. Shared shader variables and effect in the ParticleTracerBase, and the child advection effect in ParticleProbe
4.7. The container class responsible for probe management.
4.8. The ParticleProbeController class, methods for registering and unregistering a probe.
4.9. Multi-probe management architecture.
4.10. Tracer Parameters UI, found in the lower right corner of the application window.
4.11. Probe Parameters UI
4.12. User Input Mode UI
4.13. The new lense is a special kind of probe. The lense's parameters are also contained by the ParticleProbeOptions class.
4.14. Experiment demonstrating the clip-plane functionality of the new lense. The particles from each probe get projected onto the lense's plane.
4.15. The ParticleProbeContainer is also used to manage lenses.
4.16. Probe Parameters UI - Lense selected
5.1. Direct volume rendering by the ParticleEngine, visualizing the vector length as a 4th component.
5.2. The Raycaster class, represented in the ParticleTracerBase.
5.3. The transfer function control element, displaying a transfer function
5.4. The TFEditor class
5.5. RaycastController organizes all volume rendering options in a new UI
5.6. Raycast Controller UI, and the Transfer Function Editor UI displayed below it.
5.7. Raycast Controller UI
5.8. The Raycaster UI
5.9. The Transfer Function Editor UI
6.1. ParticleTracer3D is upgraded to support the 4th component recalculation.
6.2. The 4th Component Recalculation UI.
7.1. Vector field domain, rendered with its physical coordinates and dimensions, projected onto its bounding box.
7.2. Upgrade for physical units
7.3. Physical Units display UI
8.1. Multi-probe configuration classes extended to support particle exporting.
8.2. Export file, created by the ParticleEngine, opened in Microsoft® Excel®
9.1. Multiple small probes displaying streamlines.
9.2. Two-probe configurations, using 4th component aware injection. Below, the lifetime of the particles is set to one to show the modulated injection density.
9.3. Multi-probe configurations. Above, using 4th component aware injection. Below, using 4th component aware modulation.
9.4. Three-probe configuration, using 4th component aware modulation. The two big probes are displaying sprites (above) and points (below). The third probe is displaying streamlines.
9.5. Direct volume renderings of the temperature values, encoded in the 4th component, using different transfer functions.
9.6. Direct volume rendering of the curl (above) and divergence (below), calculated with the 4th component recalculation feature.
9.7. Direct volume rendering of the vector length (above) and divergence (below), focused in the selected probe's boundaries.
Part I.
Introduction and Background
1. Introduction
1.1. Experimental Flow Visualization
Fluid mechanics is the branch of the physical sciences concerned with how fluids behave
at rest or in motion. Its uses are very broad - fluid mechanics examines the behavior of
everything that is not solid, including liquids, gases, and plasma. This makes it one of the
most important physical sciences in engineering [?].
As fluid mechanics is an active field of research with many unsolved or partly solved
problems [?], methods of gaining insight into flow patterns play a major role in the
pursuit of a deeper understanding of the subject.
Flow visualization is the study of methods to display dynamic behavior in liquids and
gases. Most fluids (air, water, etc.) are transparent, making their flow patterns unrecogniz-
able to the naked eye. Thus, techniques for flow visualization must be applied to enable
observation [?, ?].
In experimental fluid dynamics, three approaches are most commonly used for this task
[?, ?]:
• Surface flow visualization: This method reveals the flow streamlines in the limit as
the flow approaches a solid surface. For example, applying colored oil to the surface
of a wind tunnel model forms patterns as the oil responds to the surface shear stress
(fig. 1.1(b)).
• Particle tracing: Particles, such as smoke or water bubbles, can be added to a flow to
trace its motion. The particles can then be illuminated with a sheet of laser light
in order to visualize a slice of a complicated fluid flow pattern. Assuming that the
particles faithfully follow the streamlines of the flow, the flow velocity can also be
measured, using the particle image velocimetry or particle tracking velocimetry
methods (fig. 1.1(a)).
• Optical methods: Some flows reveal their patterns by way of changes in their opti-
cal refractive index. These are visualized by optical methods known as the shadow-
graph, schlieren photography, and interferometry. More directly, dyes can be added
to (usually liquid) flows to measure concentrations; typically employing the light
attenuation or laser-induced fluorescence techniques (fig. 1.1(c)).
(a) Using air bubbles, generated by electrolysis of water, to trace water flows [?].
(b) Surface oil flow visualization [?].
(c) Shadowgram of the turbulent plume of hot air rising from a home-barbecue gas grill [?].
Figure 1.1.: Different methods for flow visualization
1.2. Computational Fluid Dynamics and Velocity Fields
Fluid mechanics problems can be mathematically complex. Most of the time, they are
best solved by numerical methods, typically using computers. A modern discipline, called
computational fluid dynamics (CFD), is devoted to this approach [?]. It extends the abil-
ities of scientists to study flow by creating simulations of fluids under a wide range of
conditions [?, ?].
(a) A computer simulation of high velocity air
flow around the Space Shuttle during re-entry.
(b) A simulation of the Hyper-X scramjet vehicle
in operation at Mach-7.
Figure 1.2.: Images of CFD simulation visualizations [?].
The fundamental basis of almost all CFD problems is the Navier-Stokes equations,
which define any single-phase fluid flow. Removing terms describing viscosity yields the
Euler equations. Further simplification, by removing terms describing vorticity, yields the
full potential equations. Finally, these equations can be linearized to yield the linearized
potential equations. Historically, methods were first developed to solve the linearized
potential equations. Then, solvers for the Euler equations were implemented. Ultimately,
the Navier-Stokes equations were incorporated in a number of commercial packages [?]. In many
cases, however, the complexity of the problems is larger than even the most powerful
cases, however, the complexity of the problems is larger than even the most powerful
computer systems of today can model.
The most fundamental consideration in CFD is how one treats a continuous fluid in a
discretized fashion on a computer. One method is to discretize the spatial domain into
small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the
equations of motion. Such a mesh can be either irregular (for instance consisting of trian-
gles in 2D, or pyramidal solids in 3D) or regular. There are also a number of alternatives
that are not mesh-based. Some of them are Smoothed particle hydrodynamics, Spectral
methods, and Lattice Boltzmann methods [?].
Several variations of field data can be generated by CFD experiments, depending on their
time dependency. A static field is one in which there is only a single, unchanging velocity field.
Time-varying fields may either have fixed positions with changing vector values or both
changing positions and changing vectors. These latter types are referred to as unsteady
[?].
Throughout this thesis, only steady flow fields are considered. A regular 3-D grid of
velocity vectors is assumed as the format in which vector field data is present. The primary
reasons for choosing such a format are its simplicity and the ability to process the data
stored in it in parallel. This velocity vector field describes the motion of a fluid
mathematically. The length of the flow velocity vector at a particular position in the field
corresponds to the flow speed at that position.
Figure 1.3.: A snap-shot of a two-dimensional fluid with some of the velocity vectors
shown [?].
1.3. Particle Tracing for 3-D Flow Visualization
Advances in experimental and CFD flow analysis are generating unprecedented amounts
of fluid flow data from physical phenomena. The ever increasing computational power
and the dedicated graphics hardware solutions now available are enabling new advanced
ways for visualizing this data in digital 3-D environments.
In experimental flow analysis, particle tracing has been established as a powerful tech-
nique to show the dynamics of fluid flows. Its main principles can be easily adapted for
simulation in a digital environment, making it a potent technique for computer-aided fluid
flow visualization.
Presented with experimental data, discretized as a finite grid of vector quantities
describing the flow speed and direction at given coordinates, a particle system can be
numerically advected to approximate a real-world experiment. Then, graphical primitives,
such as arrows, motion particles, particle lines, stream ribbons, and stream tubes, can be
produced to emphasize flow properties and to act as depth cues to assist in the exploration
of complex spatial fields.
Such a system is able to deal with large amounts of vector-valued information at inter-
active rates. When implemented to exploit the functionality of recent graphics hardware,
millions of particles can be traced through the flow at interactive frame rates. This makes
the exploration of complex fluid flows on consumer hardware possible and greatly ex-
tends its applicability [?].
Figure 1.4.: On recent GPUs, textures can be accessed in the vertex units, and rendering
can be directed into textures and vertex arrays. This and other features enable
these GPUs to advect and render large amounts of particles [?].
Figure 1.5.: Hurricane Isabel is visualized with transparent point sprites [?].
Importance-driven particle visualization
The capability to handle large systems of particles, however, quickly overwhelms the
viewer due to the massive amount of visual information produced by this technique. With
the help of importance-driven strategies, interesting structures in the flow can be revealed
by reducing the visual information and allowing the viewer to concentrate on important
regions.
In [?], a number of importance-driven visualization techniques are proposed. They make
experiment exploration less prone to perceptual artifacts and minimize the visual clutter
produced by frequent positional changes of large numbers of particles. Relevant structures
in the flow are emphasized by integrating user-controlled and feature-based importance
measures. These measures are used to control the shape, the appearance, and the density
of particles in such a way that the user can focus on the dynamics in important regions
and at the same time preserve context information.
Improvements for particle-based 3D flow visualization proposed by [?]:
• Automatically adapt the shape, the appearance, and the density of particle primitives
with respect to user-defined and feature-based regions of interest.
• Using vorticity, helicity density, and the finite-time Lyapunov exponent as importance
measures. The finite-time Lyapunov exponent is particularly useful for the selection
of characteristic trajectories in the flow, called anchor lines, and for visualizing only
those particles that leave an anchor.
• A clustering approach is applied to determine regions of coherent motion in the flow.
A sparse set of static cluster arrows emphasizes these regions. Cluster arrows are geo-
metric primitives that represent regions of constant motion in the flow.
• Focus+context visualization. This means that, within the focus region, the flow field
is visualized at the highest resolution level, and contextual information is preserved
by visualizing a sparse set of primitives outside this region.
Figure 1.6.: Different particle-based strategies are used to visualize 3D flow fields by the
ParticleEngine. (Left) Focus+context visualization using an importance mea-
sure based on helicity density and a user-defined region of interest. (Middle)
Particles seeded in the vicinity of anchor lines show the extent and speed at
which particles separate over time. (Right) Cluster arrows are used to show
regions of coherent motion [?].
Streamlines, streaklines, and pathlines
Streamlines, streaklines and pathlines are field lines resulting from a given vector field
description of a flow. They can serve as additional visual cues for flow patterns [?].
The different types of lines differ only when the flow changes with time: that is, when
the flow is not steady.
• Streamlines are a family of curves that are instantaneously tangent to the velocity
vector of the flow. These show the direction a fluid element will travel in at any
point in time.
• Streaklines are the locus of points of all the fluid particles that have passed contin-
uously through a particular spatial point in the past. Dye steadily injected into the
fluid at a fixed point extends along a streakline.
• Pathlines are the trajectories that individual fluid particles follow. These can be
thought of as a "recording" of the path a fluid element in the flow takes over a cer-
tain period. The direction the path takes is determined by the streamlines of the
fluid at each moment in time.
• Timelines are the lines formed by a set of fluid particles that were marked at a previ-
ous instant in time, creating a line or a curve that is displaced in time as the particles
move.
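For a steady field, as considered in this thesis, streamlines and pathlines coincide, and a streamline can be traced numerically by repeatedly stepping along the local velocity. A minimal sketch (forward Euler integration, with an analytic rotational field standing in for sampled grid data; names and step sizes are illustrative):

```python
import math

def velocity(x, y):
    # Steady rotational field: fluid elements orbit the origin.
    return -y, x

def trace_streamline(x0, y0, dt=0.01, steps=100):
    """Trace a streamline from a seed point using forward Euler steps."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + dt * vx, y + dt * vy
        pts.append((x, y))
    return pts

line = trace_streamline(1.0, 0.0)
radius = math.hypot(*line[-1])
```

In this field the trajectory stays near the unit circle; the slight outward drift of the Euler scheme is the reason higher-order integrators are preferred in practice.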
1.4. Volume Rendering
Here, a short introduction to volume rendering is given. The ParticleEngine tool, de-
scribed in chapter 2, applies this powerful technique to visualize additional spatial
properties of a flow field, and this is one of the aspects addressed throughout this thesis.
Volume rendering is a technique used to display a 2D projection of a 3D discretely sam-
pled data set [?]. A typical 3D data set is a group of 2D slice images acquired by a CT, MRI,
or MicroCT scanner. These are usually acquired in a regular pattern (e.g., one slice every
millimeter) and usually have a regular number of image pixels in a regular pattern. This
makes the technique very suitable for the case of 3-D regular vector field grids, used by
the ParticleEngine.
To render a 2D projection of the 3D data set, the opacity and color of every voxel (a
volumetric pixel in a 3-D texture) must be defined. This is usually done using an RGBA
(red, green, blue, alpha) transfer function that maps every possible voxel value to an
RGBA value. This transfer function is then applied with, for example, the volume ray
casting technique to obtain the final 2D image. This way of visualizing volume data is
called direct volume rendering.
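The two steps described above can be sketched as follows (an illustrative Python sketch, not the engine's shader code; the ramp-shaped transfer function and the standard front-to-back alpha compositing formula are assumptions):

```python
def transfer_function(v):
    """Map a scalar voxel value in [0, 1] to (r, g, b, a) - an illustrative ramp."""
    return (v, 0.0, 1.0 - v, v * 0.1)

def composite_ray(samples):
    """Front-to-back alpha compositing of scalar samples along one ray."""
    r = g = b = a = 0.0
    for v in samples:
        sr, sg, sb, sa = transfer_function(v)
        # Pre-multiply and accumulate; (1 - a) is the remaining transparency.
        r += (1.0 - a) * sa * sr
        g += (1.0 - a) * sa * sg
        b += (1.0 - a) * sa * sb
        a += (1.0 - a) * sa
        if a > 0.99:          # early ray termination
            break
    return (r, g, b, a)

pixel = composite_ray([0.2, 0.5, 0.9, 0.9])
```

One such ray is shot per pixel of the output image; editing the transfer function changes which value ranges appear opaque, colored, or invisible.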
Figure 1.7.: Volume ray casting is a direct volume rendering technique to visualize vol-
ume data. A 2D image is produced by shooting a ray from the eye position into
the volume and accumulating the sampled values along this ray onto the 2D
image plane [?].
A volume may also be viewed by extracting surfaces of equal values from the volume
and rendering them as polygonal meshes. Such a surface is called an isosurface. Isosurfaces
are used as data visualization methods in CFD, allowing engineers to study features of a
fluid flow (gas or liquid) around objects, such as aircraft wings [?].
(a) A (smoothed) rendering of a data set of
voxels for a macromolecule [?].
(b) An isosurface, rendered by the ParticleEngine.
Figure 1.8.: Images of isosurfaces rendered from a volume data set.
2. The ParticleEngine
The ParticleEngine is a particle system for the interactive visualization of 3D flow fields on
uniform grids. It exploits features of recent graphics hardware to advect particles on the
graphics processing unit (GPU), save the new positions in graphics memory, and send
them back through the GPU to obtain images in the frame buffer. This approach allows for
interactive streaming and rendering of millions of particles and enables virtual exploration
of high resolution fields in a way similar to real-world experiments. To provide additional
visual cues, the GPU constructs and displays visualization geometry like particle lines
and stream ribbons.
2.1. Basic Principles behind the ParticleEngine
2.1.1. Vector Field Data
The ParticleEngine operates on a 3-D uniform Cartesian grid, with each cell containing
three to four floating point components. The discretized velocity field data for a particular
experiment is loaded from a file into this grid. The first three components of every grid
cell contain the speed (magnitude) and the direction of the flow at this cell's position. The
4th component of the grid can be utilized to store a scalar physical characteristic of the
flow field, such as density or temperature.
Figure 2.1.: The ParticleEngine displaying a rough approximation of its underlying vector
field.
2.1.2. GPU Particle Tracing
The ParticleEngine traces massless particles in a flow field over time, computing their
trajectories by solving the ordinary differential equation of the field

    \frac{\partial \tilde{x}}{\partial t} = \tilde{v}(\tilde{x}(t), t)    (2.1)

with the initial condition \tilde{x}(t_0) = x_0. Here, \tilde{x}(t) is the time-varying
particle position, \partial \tilde{x} / \partial t is the tangent to the particle trajectory,
and \tilde{v} is an approximation to the real vector field v. As v is sampled on a discrete
lattice, interpolation must be performed to reconstruct
particle velocities along their characteristic lines.
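On a regular grid this interpolation is typically trilinear. A minimal CPU-side sketch (NumPy, illustrative only; the engine performs the equivalent lookup in the GPU's texture units, and the function name is hypothetical):

```python
import numpy as np

def sample_trilinear(grid, p):
    """Trilinearly interpolate a (nx, ny, nz, 3) velocity grid at point p,
    with p given in grid coordinates (0 <= p[i] <= n_i - 1)."""
    p = np.asarray(p, dtype=np.float64)
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, np.array(grid.shape[:3]) - 1)
    frac = p - i0  # fractional position inside the cell
    v = np.zeros(3)
    # Weighted sum over the 8 corners of the enclosing cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((frac[0] if dx else 1 - frac[0])
                     * (frac[1] if dy else 1 - frac[1])
                     * (frac[2] if dz else 1 - frac[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                v += w * grid[idx]
    return v
```

Because trilinear interpolation is exact for fields that vary linearly per axis, it reconstructs smooth velocities between samples at very low cost.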
Modern GPUs expose capabilities such as access to texture maps in the
vertex units (see figure 1.4), programmable geometry and fragment shaders, and the ability
to stream vertex data from the geometry-shader stage (or the vertex-shader stage if the
geometry-shader stage is inactive) to one or more buffers in memory.
The ParticleEngine computes intermediate results on the GPU, saves these results in
graphics memory, and uses them again as input to the geometry units to render images
in the frame buffer. This process requires application control over the allocation and use of
graphics memory; intermediate results are ’drawn’ into invisible buffers, and these buffers
are subsequently used to present vertex data or textures to the GPU.
Figure 2.2.: The advection-rendering cycle of the ParticleEngine
Initial particle positions are stored in the RGB color components of a floating point tex-
ture of size M×N. These positions are distributed regularly or randomly in the unit cube.
In the alpha component, each particle carries a random floating point value that is uni-
formly distributed over a predefined range. This value is multiplied by a user defined
global lifetime to give each particle an individual lifetime. By letting particles die - and
thus reincarnate - after different numbers of time steps, particle distributions very similar
to those generated in real-world experiments can be simulated.
Injection
Particles are initially uploaded to the GPU in a particle buffer. The elements of this buffer
are structures containing all the attributes needed to define a particle in the flow. Some
of these parameters are: initial position, current position, direction for the next advection
step, and lifetime. Another, empty particle buffer is also created. These two buffers form the
ping-pong buffer system used for the advection step.
The user can interactively position and resize a 3-D probe that injects particles into the
flow. All particles are initially born, and subsequently reincarnated, within this region. The
birth of a particle consists of reading its starting position from the M×N texture described
above and initializing its lifetime timer.
Advection
The advection step is performed by a geometry shader, using the RK3(2) integration
scheme. The geometry shader updates the positions and the timer of each particle in the
source ping-pong buffer, and streams them out to the receiving ping-pong buffer, using
the stream-output pipeline stage. This effectively moves all particles one time step further
along the field. To complete the advection step, the two buffers are exchanged - the target
becomes the source and vice versa.
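The thesis does not spell out the RK3(2) scheme; a widely used embedded RK3(2) pair is Bogacki-Shampine, assumed here purely for illustration. For a steady field v, one step with its built-in error estimate looks like this (a CPU sketch, not the geometry shader):

```python
def rk32_step(v, x, h):
    """One Bogacki-Shampine RK3(2) step for dx/dt = v(x) over a steady field.
    Returns the 3rd-order position and a local error estimate (for step control)."""
    k1 = v(x)
    k2 = v(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = v(tuple(xi + 0.75 * h * ki for xi, ki in zip(x, k2)))
    # 3rd-order solution.
    x3 = tuple(xi + h * (2 * a + 3 * b + 4 * c) / 9
               for xi, a, b, c in zip(x, k1, k2, k3))
    k4 = v(x3)
    # Embedded 2nd-order solution; the difference estimates the local error.
    x2 = tuple(xi + h * (7 * a / 24 + b / 4 + c / 3 + d / 8)
               for xi, a, b, c, d in zip(x, k1, k2, k3, k4))
    err = max(abs(p - q) for p, q in zip(x3, x2))
    return x3, err
```

In the engine this update runs once per particle per frame, with the stream-output stage writing the new positions into the receiving ping-pong buffer.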
Additionally, the advection step is responsible for performing a 'death test' for each par-
ticle. This test checks a particle's lifetime and whether it is still within the boundaries
of the vector field domain. Depending on the test results, the particle may be reinjected,
following the steps specified in the 'Injection' section above.
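A CPU-side sketch of that test (hypothetical names and data layout; in the engine the test runs inside the advection geometry shader):

```python
import random

def death_test_and_reinject(particle, domain_min, domain_max, probe):
    """Reinject a particle whose lifetime expired or that left the domain.

    `particle` is a dict with 'pos' (3-tuple) and 'lifetime' (float);
    `probe` is an axis-aligned box (min, max) in which new particles are born.
    """
    pos, lifetime = particle["pos"], particle["lifetime"]
    out_of_domain = any(p < lo or p > hi
                        for p, lo, hi in zip(pos, domain_min, domain_max))
    if lifetime <= 0.0 or out_of_domain:
        lo, hi = probe
        particle["pos"] = tuple(random.uniform(a, b) for a, b in zip(lo, hi))
        particle["lifetime"] = random.uniform(0.5, 1.0)  # individual lifetime factor
    return particle
```

Randomizing the new lifetime mirrors the per-particle lifetime factor stored in the alpha component of the seeding texture, so reincarnations stay desynchronized.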
Rendering
The particle buffer containing the new positions of the particles is then used to render
them. This buffer is bound to the pipeline as a vertex buffer containing a list of points.
Then, each particle's current position is transformed with the momentary model-view-
projection matrix and rendered onto the frame buffer.
Additionally, a number of user-adjustable options determine which particles get displayed and how. Different display modes are available to aid the understanding of the
field. Rendering one pixel for every particle can be used to simulate smoke-like matter
distribution. To add more geometry to the scene, sprites can be rendered for each par-
ticle, using the geometry shader unit. Oriented sprites, for example, are very useful for
visualizing the directional properties of the field.
2. The ParticleEngine
Figure 2.3.: User defined probe injecting particles into the field. Partially transparent point primitives are used for rendering.
2.1.3. Additional Features
Importance-driven Particle Visualization
The ParticleEngine also incorporates all the importance-driven particle visualization tech-
niques, described in section 1.3. This includes the importance measures, the user de-
fined focus+context regions, the anchor lines, the cluster arrows and the different kinds
of streamlines.
Volume Rendering
As already mentioned in the Vector Field Data section (2.1.1), the grid on which the ParticleEngine operates has an additional 4th component in each of its cells. Ignoring the three other components, this can be interpreted as a volume data set, which makes it amenable to volume rendering techniques.
This makes it possible to visualize any flow characteristic loaded into this 4th component. Both direct volume rendering and isosurface rendering are supported (see section 1.4). Additionally, the so-called Clearview mode is also available within the ParticleEngine. The Clearview mode enables the user to compare two isosurfaces by rendering them and then blending them according to user-specified options.
Figure 2.4.: The ParticleEngine in Clearview mode. Two isosurfaces blended together using a user defined ’lense’.
3. Thesis Goals
After using the ParticleEngine tool on individual snapshots of velocity field data of interest, the research group members acknowledged its potential, but also noticed that, in order for the tool to be seriously considered for the project, several upgrades had to be realized.
The primary goal of this project is to address the wishes of the astrophysicists and to upgrade the tool to best suit their needs. In the process, the application’s existing functionality should be optimized and partly redesigned to work seamlessly with the new features.
The following outlines the stated upgrade wishes and gives more details about each one:
Implement a way to start particles from more than one box and color them separately.
Currently, the user can inject particles into the flow field by positioning a cuboid-shaped source of particles, called a probe, within the field’s spatial domain. The particles’ start positions are distributed inside the probe’s boundaries according to a predefined scheme, either randomly or uniformly. These positions are then used to initially inject the particles, or to reincarnate those that left the domain or reached the end of their lifetime.
The user can move and resize the probe, and observe how the particles move through
the flow. Different parameters can be changed, such as number of particles, their color or
size.
Having just one probe places several limitations on the user:
• Only one region for particle injection can be defined. The user is not able to simultaneously observe two or more features at different locations within the field, for example, two particular streams within the flow. Making the probe large enough is not a solution, because the particles are distributed over the whole probe and will eventually have too much space in between to adequately represent the movement of the mass. Increasing the number of particles will distract the user from the important regions.
The ParticleEngine’s feature injection capability can be useful in this case. It populates the probe according to the field’s 4th component, given upper and lower thresholds. It is, however, computationally expensive, making movement and resizing of the probe problematic. Also, useful values for the 4th component must be preloaded, making interactive experimentation impossible.
• There cannot be more than one type of particle in the flow at a time. A ’type’ of particle is defined as a combination of the different parameters controlling how the particles are injected, advected and displayed. These include injection mode, display mode, color, and lifetime, to name a few.
• All particles start their advection simultaneously.
These limitations should be addressed and alleviated. Chapter 4 is devoted to this topic.
Populate the starting positions of particles weighted by the local 4th-component value
of the field
In addition to enabling multiple probes, the feature injection mode of the probe, already mentioned above, should be adapted and optimized. The present implementation depends on upper and lower thresholds to decide where particles must be born. This introduces sharp borders between regions meeting the threshold condition and the rest.
Being able to modulate the density of the particles according to the 4th component will allow regions of interest to have more particles than others, and will make for a smooth transition between them.
Furthermore, the particle density should update in response to probe movement and resizing. To avoid hampering the exploration, this update should happen at interactive rates. This is addressed in chapter 4, in particular in section 4.2.4.
Improve/generalize ways in which to define the transfer function of the 4th-component
inside the ParticleEngine
The ray casting volume rendering technique integrated in the ParticleEngine provides high quality images of the field’s 4th component. This makes it a powerful tool for gaining insight into the field’s characteristics, thus aiding the understanding of flow patterns.
In particular, the direct volume rendering capability should be addressed here. For an introduction to volume rendering, see 1.4.
At the moment, the ParticleEngine has no general way of defining a transfer function for direct volume rendering. The main interface allows the user to load a shader code fragment from a file, which must include particular functions. The integrated ray caster then uses these functions to map values to colors.
This has several shortcomings:
• It does not support interactive exploration. The user must guess which transfer function will yield usable results. For every different transfer function, a code fragment must be saved as a file.
• It requires programming skills, as well as knowledge of the names and arguments of the functions called by the ray caster.
• Loading files with an inappropriate format may produce unexpected results. This requires extra care when creating the code fragments.
To unleash the full potential of the built-in volume renderer, a new way of defining a transfer function should be introduced. It should improve the exploration of the 4th component by supplying an intuitive user interface.
Chapter 5 is dedicated to this goal.
Allow snapshots of particles to be exported to file. For each particle store original and
final location, flag particles that have left the domain.
The use of multiple specialized tools for different purposes, to produce results not obtainable by any of the tools alone, is always a very important consideration and sometimes even unavoidable in large projects.
To make the ParticleEngine more applicable in such a synergetic environment, a way of exporting a snapshot of the particles currently in the flow should be implemented. For further analysis or more advanced visualization, the particles’ start positions and current positions should be written out to a file on disk.
To widen the interoperability, a standardized and simple format for the file should be
chosen.
This is discussed in chapter 8.
Add physical coordinates to the display on screen. Do the same for the location and
size of the box that acts as source of particles.
Without some way of specifying real physical coordinates and scale, the usefulness of the visualized data for scientific research quickly reaches its limit.
The user should be able to set the coordinates and the dimensions of the field’s domain in physical units of their choosing. A user interface should be provided to facilitate this task. Additionally, visual cues should be displayed on-screen, hinting at the physical dimensions and scale of the visualized data.
Furthermore, the user should be presented with a way to define the exact position and dimensions of each probe currently in the scene, in the specified physical coordinates. The functionality for exporting particles, described above, should also be made aware of these units.
The solution to this requirement is discussed in chapter 7.
Allow calculation of properties of velocity fields and use it as ”fourth component”
As the power of the built-in volume renderer becomes more accessible with the introduction of a user-definable transfer function, the 4th component visualization gains even more relevance. As of now, if the user wants to visualize some field characteristic, such as temperature or density, it must be encoded as the 4th component and loaded with the vector field.
A flow field can have many characteristics delivered by the experimental data. This requires managing many data sets which differ only in their 4th components. As a consequence, frequent reloads of vector field data must be taken into consideration. Furthermore, some important characteristics are derivable from the vector field itself, or through some function of the present ones.
Realizing a way to recalculate the data stored in the 4th component dynamically at run time will make the ParticleEngine much more versatile and reduce the burden of managing many data sets. As a first step, the calculation of local properties of the field, such as divergence and curl, should be implemented. Then, additional features can be considered, such as applying a function to the present 4th component values, or loading a 4th component from an external field and using it to modify the present values.
This requirement is addressed in chapter 6.
Part II.
Analyzing Velocity Fields in Cosmological Simulations
4. Multiple Sources of Particles
This chapter addresses the first two upgrade requirements described in the ’Thesis Goals’ chapter (3). In order to alleviate the drawbacks pointed out there, changes to the application’s architecture were realized. User-friendly interaction with multi-probe configurations has been ensured by a new user interface. Also, already present functionality considered useful for the project has been seamlessly integrated into the new environment.
First, an introduction to the current architecture is given. This is intended to make clear what a source of particles is from the application’s standpoint. Then, a new definition of the probe is given, and technical details about the architecture changes enabling support for multi-probe configurations are presented. Finally, the new user interface is discussed in detail.
Figure 4.1.: Example of an experiment utilizing multi-probe configuration
4.1. Initial Architecture
Here, the application’s architecture, as it was at the beginning of this project, is introduced. As it served as the base for the development, insight into it will help to fully understand the reasons behind the application changes described in the next sections.
The ParticleTracer Object
The ParticleTracer is the main internal object of the ParticleEngine. It consists of the
ParticleTracerBase class, and an extending class. In this project, only the ParticleTracer3D
is considered. Figure 4.2 shows its structure as a class diagram. Only some members and
methods are displayed for readability.
Figure 4.2.: Initial application architecture. The ParticleTracerBase class hosts all variables
holding particle parameters. The advection and rendering steps are performed
by methods of the class. The ParticleTracer3D class is responsible for manag-
ing the vector field data.
The ParticleTracerBase class hosts all the variables holding particle parameters, such as
start positions (m pParticleStartposTex), injection mode (m StartPosMode), initial count
(m iStartParticles), lifetime (m iMaxLifetime) and color (m vPartColor). The methods
performing the advection (AdvanceParticles()) and rendering (RenderParticles()), and the
ping-pong buffers (m pParticleDrawFrom and m pParticleStreamTo) used by them, are
also integral to the class.
The ParticleTracer3D class extends the ParticleTracerBase class by adding functional-
ity for loading and manipulating the 3D vector field data. The data is loaded into the
m pTexture3D variable, by the CreateVolumeTexture() method.
Next, an extension of this architecture is presented. The driving idea behind the new design is to make it more flexible by allowing easier extensibility. To achieve this, the architecture must be modularized. As a first step, the particle source is defined as a stand-alone entity. Then, additional structures are introduced to simplify the management of multiple particle sources.
4.2. The Probe
A source of particles within the new architecture is referred to as a probe. This name is adopted from the current implementation, which also calls the source of particles a probe. However, the term is extended and more clearly separated from other ParticleTracer functionality.
4.2.1. Source of Particles as a Stand-Alone Entity
The basic features and visual appearance of the ’old’ probe are to be kept. These are as
follows:
• a cuboid-shaped region which defines possible start positions for particle injection.
• it can change its position and dimensions along the main axes. No rotation is supported.
• all particles injected from a probe are of the same ’type’. The type of particles is the collection of all adjustable particle parameters and is defined in the next section.
A new class called ParticleProbe (see figure 4.3) is created to represent the probe entity.
It encapsulates all variables and defines all the functionality of a particle source. This
includes most notably the advection and rendering, and shader management functions.
Figure 4.3.: ParticleProbe class and its ParticleProbeOptions member
The ParticleProbe class has all its parameters encapsulated in the m ppOptions member of the ParticleProbeOptions class (this class is explained in detail in the next section). The probe contains the particles’ start positions (m v4aParticlesStartParams) and the ping-pong buffers (m pParticlesContainer and m pParticlesContainerNew), and it is responsible for the advection and rendering of its own particles through the methods AdvectParticles() and RenderParticles() 1.
4.2.2. Particles’ Type
As already mentioned above, a probe can inject only particles of the same type. This
section defines what a particle type is.
A particle type is the collection of all parameters which can be adjusted to achieve different particle behavior or visual appearance. To separate the variables representing such parameters from those controlling internal processes, such as advection and rendering, the class ParticleProbeOptions is introduced (see figure 4.3). Encapsulating all adjustable options in their own class enables the probe to expose only these for outside manipulation, ensuring its internal integrity.
Adjustable particles’ options The following list summarizes all the adjustable options 1:
• Injection Mode (InjectionMode);
• Particles color / opacity (vProbeColor and fParticlesAlpha);
• Particles count (iParticlesCount);
• Particles lifetime (iParticlesLifetime);
• Particles size (for sprites) (fSpriteSize);
• Particles Display Mode (PVisMode);
• 4th component aware operations options (see section 4.2.4)
Along with the other adjustable particle options, the ParticleProbeOptions class also exposes the matrix used to save the position and dimensions of the probe (mProbeWorldMatrix), thus allowing outside manipulation of it.
4.2.3. Adjustable Options and ’Change Detectors’
The ParticleProbeOptions class also introduces the concept of change detectors. The change detectors are functions which indicate whether an option has changed since it was previously checked. This allows the probe to keep track of the adjustable options and react to changes which require adjusting internal structures.
1 Additionally, there are variables and methods for trajectories, which are the new version of streamlines; these are no longer understood by the application as a different display mode, but can coexist with the particles within a probe. For simplicity, however, they are not considered further.
Figure 4.4.: Some change detectors of the ParticleProbeOptions class
Figure 4.4 shows some of the available detectors. These are triggered if the respective property’s set accessor was used. After reading the detectors, the probe is responsible for resetting them by invoking the ResetDetectors() method.
4.2.4. 4th Component Aware Operations
This section focuses on the second upgrade requirement from the ’Thesis Goals’ chapter (3), enabling the modulation of particle density according to a particular field characteristic. Moreover, it describes the integration of some of the existing importance-driven visualization techniques into the new probe concept. This is referred to as 4th component aware parameter modulation.
The use of the 4th component of the vector field data to procedurally adjust a particle’s parameters is referred to as a 4th component aware operation. These operations are applied on a per-particle basis, but their threshold parameters are defined per probe. There are currently two types of 4th component aware operations: the 4th component aware injection and the 4th component aware modulation.
4th component aware particle injection.
When this mode is enabled, only particles whose positions satisfy the injection requirement are born and subsequently advected. This injection requirement is defined by three adjustable options, scale (f4thCompAwareInjectionScale), min (f4thCompAwareInjectionMin) and max (f4thCompAwareInjectionMax), together with the 4th component of the field at the particle’s position.
The scale, min and max options are used to adjust the interval of 4th component values which is of interest for a particular experiment. This is done according to the following formula:

min ∗ scale < 4th component < max ∗ scale (4.1)
For all particles whose 4th component values are bigger than max ∗ scale, the chance of birth is 100%. If the value is lower than min ∗ scale, the chance is zero. The chance for particles lying within the specified range is calculated with HLSL’s smoothstep function, which uses Hermite interpolation to return a number between the specified lower and upper bounds.
Figure 4.5.: Adjustable options controlling the 4th component aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data.
The effect of this injection method is that it allows increasing the particle density in regions of interest.
The 4th component aware injection options act in the advection phase. The geometry shader responsible for particle birth ignores all particles which failed the described test. Thus, only born particles are considered by subsequent advection steps.
This method of particle injection density modulation is very fast, because the decisions are made on the GPU. This allows for real-time dynamic density adjustment as the probe changes its position and dimensions. However, in most cases many particles must be transferred to the GPU while only a handful are used in the advection process.
For an example, see 9.2.
4th component aware parameter modulation.
In this mode, some of the particles’ parameters are changed according to the 4th component value at each particle’s momentary position. Which parameters get modulated depends on the particle’s display type: for points, the 4th component modulates the particle’s opacity; for sprites, the size and opacity.
The modulation exposes the same adjustable options and conforms to the same formula as described for the 4th component aware injection. If the 4th component value lies below scale ∗ min, the opacity / size is set to zero. If it is higher than scale ∗ max, the opacity / size is set to the maximal value given in the ParticleProbeOptions of the respective probe.
The 4th component aware parameter modulation happens during the rendering phase. This means that the advection does not need to be active for its effects to be visible. Also, its options cause no internal changes, making it very fast and interactive.
For an example, see 9.4.
4.2.5. Shared Shader Variables and Effect Pools
Every probe advects and renders its own particles. This requires it to manage the effects and shader variables needed for these tasks itself.
Figure 4.6.: Shared shader variables and effect in the ParticleTracerBase, and the child ad-
vection effect in ParticleProbe
The advection phase depends on many probe-specific parameters, and also needs access to the vector field data. This data, and other resources, must be shared between probes; otherwise the inappropriate resource usage would defeat the multiple-probe concept.
This problem is solved with the help of effect pools and shared shader variables. Figure 4.6 shows the employed structure. The ParticleTracer manages an effect pool (m pProbeParticlesAdvectionEffectPool), and the vector field data (m pAEP VolumeTex shared), along with other shared resources, is a shared variable managed by this pool. Every probe then extends this pool with its own effect file (m pParticlesAdvectionChildEffect), which manages the probe-specific variables.
The rendering effect (m pProbeParticlesRenderingEffect), on the other hand, is the same for all probes and is hosted by the ParticleTracer. Each probe gets a pointer to the respective effect to create internal pointers to the technique and the shader variables it needs to set.
4.3. Probe Management
Multiple probes in the ParticleEngine are now possible by creating and maintaining an array of ParticleProbe instances. For this purpose, the vector type from C++’s Standard Template Library can be used. Utilizing a vector to store the probe instances has the advantage of being flexible and extensible. This vector is then a member of the ParticleTracer.
However, as the probe concept grows more complex, handling this vector becomes increasingly difficult. For this reason, the vector was incorporated into a container class devoted to the task of managing the probes.
Another aspect of probe management is exposing the available adjustable options of all currently instantiated probes for manipulation by the user. Also, supplying a user-friendly and intuitive interface to these options is vital to making the concept practical. This is done by a controller class, which is responsible for ensuring the accessibility and easy manipulation of multi-probe configurations.
4.3.1. The ParticleProbeContainer Class
Figure 4.7 depicts a simplified diagram of the ParticleProbeContainer class.
Figure 4.7.: The container class, responsible for probe management.
The ParticleProbeContainer is a member of the ParticleTracerBase. It hosts the vector of probe instances as a private member (m vParticleProbes) and defines an interface for accessing it. Adding (AddProbe()) and removing (RemoveProbe()) probes are the basic methods of the class. Additionally, methods for saving and loading probe configurations, and for exporting probe particles, are supported (ResetProbesParticles()).
Saving and Loading Probe Configurations
The methods ExportProbesLayout() and ImportProbesLayout() implement saving and
loading of probe configurations. Every probe exposes two methods allowing the import
(ImportProbeOptions()) and export (ExportProbeOptions()) of its options. The container
then wraps these methods to facilitate the process for multiple probes.
The SaveProbeLayout() and LoadProbeLayout() methods of the ParticleTracerBase are triggered by the main user interface (see section 4.4.1). They display a dialog for file selection, and subsequently call the container’s methods to perform the operations needed to save or recreate a probe configuration.
4.3.2. The ParticleProbeController Class
The ParticleProbeController class concentrates on presenting an intuitive user interface that gives access to the adjustable options of all currently instantiated probes. Drawing bounding boxes around the probes is also the job of the controller.
Figure 4.8.: The ParticleProbeController class, methods for registering and unregistering
a probe.
Internally, the controller maintains a vector of ParticleProbeOptions instances (m vPPOptions). This implies that every option which should be changeable by the user must be present in the ParticleProbeOptions class.
To use the controller to manage a probe, the probe must first be registered. The registration process adds a pointer to the probe’s ParticleProbeOptions instance to the controller’s vector, thus allowing the controller to display the interface for adjusting its options.
The probe’s method RegisterToController() is used to register a probe. This method takes a pointer to the controller class as an argument. It then uses this pointer to call the RegisterParticleProbe() method of the controller class, supplying it with a pointer to its ParticleProbeOptions instance. This ensures that only a controller class instance can get a pointer to the adjustable options of the probe.
Unregistering a probe is the reverse process. It identifies the probe which has requested to unregister and removes it from the options vector.
4.3.3. Multi-probe Management Architecture
The processes of registering and unregistering a probe are abstracted by the container class. When it is initialized, it gets a pointer to the controller class (m pPPC). Afterwards, the methods AddProbe() and RemoveProbe() are responsible for creating a new probe and registering it to the controller, respectively unregistering and destroying it (figure 4.9).
Figure 4.9.: Multi-probe management architecture.
4.4. User Interface
The user interface is a very important part of every software solution; the usability of a tool is characterized by the power of its user interface. To make the process of manipulating multiple probes and creating probe configurations a pleasant experience, a new, redesigned interface is proposed. It improves on the existing solution by making it more compact and intuitive.
This section is intended to explain the new user interface’s control elements in more
detail.
4.4.1. Tracer Parameters UI Reference
The main UI, or the Tracer Parameters UI, is found in the bottom-right corner of the
main application window. It is created and maintained within the ParticleTracer object.
Figure 4.10.: Tracer Parameters UI, found in the lower right corner of the application win-
dow.
Reference
Bounding Box (checkbox) Turns on/off the bounding box of the domain containing the vector field data.
Advect (checkbox) Turns on/off the advection of particles.
If the advection is off, the advection phase is omitted, thus pausing the particles at their current positions. In this case, the adjustable probe options which act in the advection phase, such as injection mode and lifetime, will not have any effect; some will reset the particle buffer, causing all of the probe’s particles to disappear. All options acting in the rendering phase, such as probe color, display type and sprite size, will have their usual effect. For a thorough explanation of the different particle options, refer to section 4.4.2.
Step Scale (slider) Increase / decrease the step length for each advection step.
The advection phase is repeated permanently. Each time, every particle is moved a small amount in the direction pointed to by the field’s vector at the particle’s position. This slider controls how small this movement is. Increasing its value causes faster advection, but lower precision for the next particle position.
Probes ’+’ (button) Adds a new probe to the current probe configuration.
Probes ’-’ (button) Removes a probe from the current probe configuration.
The probe must be selected in the probe interface first; otherwise nothing happens. The probe interface is described in section 4.4.2.
Probes ’Sv’ (button) Saves the current probe configuration to a file.
This saves the positions and all adjustable options of all probes currently in the scene. The current particle positions are not saved.
Probes ’Ld’ (button) Loads a probe configuration from a file.
Previously saved probe configurations can be loaded from file. Each probe’s position and dimensions, as well as all its adjustable options, are loaded. The particle buffers are recreated from scratch for each probe, and advection starts anew from the probe.
Lense ’+’ (button) Adds a new lense to the current probe configuration. For detailed
information about lenses, see section 4.5.
Lense ’-’ (button) Removes a lense from the current probe configuration. For detailed
information about lenses, see section 4.5.
Particles Reset (button) Forces the particles of all probes to be reborn and start advection from their initial positions within their probes.
Particles Export (button) Exports to a file the positions of the selected probe’s particles currently in the field. To export all particles at once, deselect all probes (for detailed information about the export functionality, see chapter 8).
Sprites ’Load 1’ (button) Loads a sprite from an image file or a geometry definition file. This sprite is then used to display particles in the ’Sprites’ and ’Oriented sprites’ display modes.
Sprites ’Load 2’ (button) Loads a second sprite from image file or geometry definition
file.
depth info (checkbox) Indicates whether depth information is to be generated when loading sprites from a geometry definition file.
Render Volume (checkbox) Turns on volume rendering.
Volume rendering is discussed in more detail in section 2.1.3.
Show UI (checkbox) Turns on the volume rendering user interface. It gives access to all of the ray caster’s settings.
The new volume rendering UI of the ParticleEngine is discussed in section 5.4.
4.4.2. Probe Parameters UI Reference
The ParticleProbeController’s UI, or the Probe Parameters UI, is displayed in the upper-
left corner of the application window. It is maintained by the ParticleProbeController
object.
The probe interface presents the user with control elements to modify all available adjustable probe options. Additionally, it allows the user to select a probe, turn the bounding boxes on and off, and displays information about the selected probe’s current position and dimensions. The interface is divided into a static and a dynamic part. The dynamic part changes with the selection of a probe. Thus, to be able to see all the elements described below, a probe must first be selected.
Reference
Figure 4.11.: Probe Parameters UI
Bounding Box (checkbox) Turns on/off the probes’ bounding boxes. Apart from presenting the user with a visual cue of each probe’s position and dimensions, the bounding boxes allow mouse manipulation. For detailed information about mouse control and user input modes, refer to section 4.4.3.
Only selected probe (checkbox) When active,
only the bounding box of the selected probe is dis-
played. This allows easier probe manipulation in
complex multi-probe configurations.
Select probe (drop-down menu) This menu con-
tains all the probes in the current probe configura-
tion. It is used to select a probe for editing its op-
tions. Selecting a probe makes all probe specific
control elements visible, and changes its bounding
box color to yellow. Selecting ’None’ will deselect
all probes.
Probe position and dimensions (label) Displays
the current probe position and dimensions within
the domain’s [0,1] range and in the given physical
units (see chapter 7). Three sliders represent the
three axes along which the probe can be moved and
resized. The radio buttons at the top choose what
these sliders control - position or dimensions.
Injection Mode (drop-down menu) Defines how
the particles are initially placed within the probe.
Random will distribute the particles pseudo-
randomly inside the probe, while uniform placement
will arrange them at a uniform distance from each other.
R G B (Particles color) (sliders) Sets the color of
the probe, and of the injected particles.
4th comp. aware inj. (checkbox and sliders) The checkbox enables the 4th component
aware injection. The sliders control the chance of a particle being born according to the 4th
component of the field. In particular, scale scales the field down (dividing its values by the
scale value), and the min/max sliders define the values at which the chance of birth is 0%
and 100%, respectively. For the exact formula, see 4.1.
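The kind of mapping described above can be sketched on the CPU as follows. This is only an illustration with assumed names; the exact formula used by the ParticleEngine is the one given in section 4.1.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch of 4th-component aware injection (see section 4.1
// for the exact formula). The field value is first scaled down, then
// mapped linearly from [minV, maxV] to a birth probability in [0, 1].
float birthChance(float fourthComp, float scale, float minV, float maxV) {
    float v = fourthComp / scale;             // scale the field down
    float t = (v - minV) / (maxV - minV);     // linear ramp between min and max
    return std::clamp(t, 0.0f, 1.0f);         // 0% below min, 100% above max
}
```

A particle would then be injected only if a uniform random number falls below the returned chance, which is why activating this option reduces the effective particle count.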
Display particles (checkbox) Turns on/off the particles. This allows detailed control over
which probe inserts particles into the field. When turned off, the advection phase for the
probe is omitted, improving the performance of the ParticleEngine.
Particles type (no name on the UI) (drop-down menu) Selects which particle type to be
displayed.
There are currently three particle types supported - points, sprites and oriented sprites.
When choosing one of the two sprite types, a performance hit should be expected,
because additional geometry is introduced.
Count (slider) How many particles will be initially injected into the flow.
Activating 4th component aware injection will reduce this number upon injection, as
some particles will not meet the birth condition.
Lifetime (slider) The number of advection steps before the particle gets reborn. Exiting
the domain restarts the particle regardless of this setting.
Opacity (slider) Controls the transparency of the particles. Different particle types are
rendered with different blending modes, so the behavior of this setting depends on the
chosen particle type.
Sprite size (slider) Meaningful only when displaying sprites. Gives the size of each
sprite.
4th comp. parameter mod. (checkbox and sliders) Activates the 4th component parame-
ter modulation. The sliders have the same function as described for 4th component aware
injection above.
Here, however, it is not the chance of birth that is calculated, but the value of the size and
opacity parameters. As size is meaningless for the Points particle type, only opacity is
modulated for this type. The minimum value of a parameter is 0, and the maximum is the
value currently set by the respective slider.
Display trajectories (checkbox) Turns on/off the display of trajectories.
Trajectories are particles preemptively advected for a given number of steps. Lines are
then used to connect the particle positions from one step to the next, creating a trajectory
in the vector field.
Trajectory type (no name on the UI) (drop-down menu) Currently, only streamlines are
supported, as only steady flows are considered in this project.
Trajectory Opacity (slider) The transparency of the trajectory lines.
Trajectories Count (slider) How many particles are traced simultaneously.
Trajectory Lifetime (slider) How many preemptive advection steps are used to construct
the trajectories.
4.4.3. User Input Modes UI Reference
The different user input modes control the camera and mouse behavior. There are cur-
rently three different modes, which assign different functionality to the mouse and the
keyboard.
Figure 4.12.: User Input Mode UI
View This mode uses the model-view camera. The right and left mouse buttons are assigned
to rotate the camera around the field domain. The scroll wheel zooms in or out. This mode
can be quickly selected by pressing the F5 key.
Probe Edit The same as the view mode, except that the left mouse button is
used for probe selection and manipulation rather than camera rotation.
In this mode the left mouse button events are handled by the probe controller. The
bounding boxes of the probes are used to catch the cursor. Thus, to enable this functional-
ity, the bounding boxes must be turned on.
Upon hovering the mouse over a bounding box, its sides turn yellow to indicate which
side is currently catching the cursor. A single click selects a probe and allows further
manipulation. To change the position of the probe, hover the mouse over a side and then
drag. The probe will be repositioned in the plane containing the picked side.
To resize the probe, hover over a side and Ctrl+drag. The probe’s dimensions will be
changed along the plane containing the picked side.
The two operations can be seamlessly combined. By pressing Ctrl, the dragging operation
changes over to Ctrl+dragging without the need to release the mouse button, and
vice versa.
First Person This mode uses the first-person camera. The keys W, A, S, D are used to
move the camera around. By dragging the mouse, the user can look around.
4.5. The Lense
The lense concept is taken from the current version of the ParticleEngine and is a part of
the importance-driven visualization techniques. A lense allows the user to concentrate on
important regions within the vector field domain in a complex probe configuration. This
is achieved by manipulating the opacity/size of the particles based on their distance to a
user-defined lense center.
The old lense definition consists of a center point and a radius. The particles fade away
with increasing distance to the lense’s center. The position of the lense center is set
by the user with the mouse, using the scroll wheel to adjust the depth in view space. The
radius is adjusted through the user interface.
In the new architecture, the lense has been extended and abstracted into a stand-alone entity.
It is now defined as a special kind of probe (fig. 4.13). As such, all the controls used to
manipulate the probes, including the mouse, can be directly used on the lense, too. This
allows for a consistent and simpler user interface, and for more precise placement and
resizing.
Figure 4.13.: The new lense is a special kind of probe. The lense’s parameters are also
contained by the ParticleProbeOptions class.
A new set of adjustable options is defined specially for lenses. These options can be used
to simulate the old lense’s spherical shape behavior and the clip-plane functionality.
Lense Redefined
• As a special kind of probe, it encloses a cuboid-shaped region within the vector
field domain.
• Fading of the particles is controlled for each axis separately. It depends on the parti-
cle’s distance from the lense’s edge along a particular axis.
• Clip-plane functionality is simulated by the lense’s projection capabilities. The pro-
jection can be turned on for one of the three main planes. Then, the particles are
projected onto the respective plane, going through the lense’s center.
(a) Multi-probe configuration with a lense turned off.
(b) Multi-probe configuration with a lense turned on.
Figure 4.14.: Experiment, demonstrating the clip-plane functionality of the new lense. The
particles from each probe get projected onto the lense’s plane.
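The projection mentioned in the last bullet point amounts to collapsing one coordinate of each particle onto the chosen plane through the lense’s center. The following C++ sketch illustrates this; the names and the axis encoding are assumptions made for this example, not the ParticleEngine’s actual code.

```cpp
#include <cassert>

// Illustrative sketch: projecting a particle onto one of the three main
// planes through the lense's center replaces the coordinate along the
// plane's normal axis with the corresponding coordinate of the center.
struct Vec3 { float x, y, z; };

Vec3 projectOntoLensePlane(Vec3 p, Vec3 lenseCenter, int normalAxis) {
    switch (normalAxis) {
        case 0:  p.x = lenseCenter.x; break;  // project onto the YZ plane
        case 1:  p.y = lenseCenter.y; break;  // project onto the XZ plane
        default: p.z = lenseCenter.z; break;  // project onto the XY plane
    }
    return p;
}
```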
4.6. Lense Management
The same method used for managing probes is also applied to lenses. There is no limit
placed on how many lenses can be created in a probe configuration.
Figure 4.15.: The ParticleProbeContainer is used also to manage lenses.
The probe container maintains a separate vector only for lenses (m vLenses). The reason
for this is that the rendering phase in the presence of lenses requires special care. The con-
tainer also has special functions for adding (AddLense()) and removing (RemoveLense())
lenses.
The probe controller, on the other hand, does not differentiate between lenses and probes
when registering them. Upon selection, it checks dynamically whether the selected object
is a lense or a probe, by calling the IsLense() method of the ParticleProbe class. It then
presents the user with the appropriate interface.
Rendering phase in the presence of lenses
The rendering phase is managed by the particle container. It calls each probe’s OnRender()
method sequentially. If a lense is introduced into the configuration, its OnRender()
method must be called first, because the lense uses it to set the shader variables
which then manipulate the appearance of the particles. For every additional lense, all the
probes must be rendered once more, because each lense can set different rendering
parameters.
4.7. Lense UI Reference
Figure 4.16.: Probe Parameters UI -
Lense selected
The lense UI is also displayed by the Parti-
cleProbeController. Only the dynamic part below
the position/dimensions sliders is changed when a
lense is selected. Thus, only this part will be dis-
cussed here.
Reference
Turn on/off (checkbox) Allows the lense to be
turned on or off.
Fading ranges (sliders) For every axis there are
two sliders, controlling the minimal and maximal
distance from the lense’s edge along this axis. If a
particle is at a distance bigger than the maximum,
its opacity/size is set to 0. If this distance is smaller
than the minimum, its opacity/size is taken from
the probe parameters. In between, an interpolated
value is calculated using Hermite interpolation.
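The fading behavior just described can be sketched as follows. This is a CPU-side illustration with assumed function names; the ParticleEngine performs the equivalent computation in its shaders, where Hermite interpolation is commonly realized as the smoothstep polynomial used below.

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the per-axis fading: below minDist the particle keeps full
// opacity/size, above maxDist it vanishes, and in between a Hermite
// polynomial (smoothstep) interpolates.
float fadeFactor(float distFromEdge, float minDist, float maxDist) {
    float t = std::clamp((distFromEdge - minDist) / (maxDist - minDist),
                         0.0f, 1.0f);
    float s = t * t * (3.0f - 2.0f * t);  // Hermite interpolation
    return 1.0f - s;                      // 1 = fully visible, 0 = invisible
}
```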
Projection (radio buttons) Enables projection onto
the specified plane, going through the lense’s center.
The projection enables the lense to act as a
clip-plane (a feature included in the old version
of the ParticleEngine). The fading ranges also
apply in this mode. Thus, modifying them affects
how many particles get projected onto the specified plane.
Clip plane presets To further facilitate the setup of a clip-plane lense, these three buttons
automatically set the projection, and the position and dimensions of the lense. The dimensions
along the two selected axes are maximized, while along the third the plane is made
very thin.
5. Transfer Function Editor
This chapter describes the improvements made to the ParticleEngine’s means of defining
a transfer function for its direct volume rendering capability. The transfer function dictates
how the integrated ray caster maps values sampled from the field to colors
for the frame buffer. This corresponds to the third project goal in chapter 3 ’Thesis Goals’.
Currently, the only way to assign colors to the sampled values is to load a shader code
fragment from a file, which then gets injected directly into the ray caster effect. In the next
sections, a new user interface component is introduced, with the primary purpose of
allowing a precise, yet interactive setup of a transfer function. This greatly enhances the usability
and the user experience, as it enables the user to try out many combinations and keep
track of the results, since the feedback is seen immediately.
Figure 5.1.: Direct volume rendering by the ParticleEngine, visualizing the vector length as
a 4th component.
5.1. The Raycaster
The volume rendering capabilities of the ParticleEngine are provided by the Raycaster
object. It is responsible for generating the images from the data loaded in the 4th component.
The Raycaster supports three different rendering modes - direct volume rendering
(DVR), isosurfaces and Clearview (see section 2.1.3). Only DVR is discussed in this
chapter, as it is the only mode that needs a transfer function.
Figure 5.2.: The Raycaster class, represented in the ParticleTracerBase.
The Raycaster is a member of the ParticleTracerBase. The m bDoRaycasting flag controls
whether the Raycaster is on or off, that is, whether it renders an image or not. The default is off.
When the vector field data is initially loaded, it is transferred to the Raycaster by means
of the m Volume variable. This variable is responsible for interpreting the vector field data
as a volume texture, to be used for volume rendering.
When it is activated, the Raycaster creates images by casting rays through the field do-
main and sampling the volume texture (m Volume) along these rays at discrete steps. The
sampled values are the 4th component values of the vector field. In DVR mode, these sam-
pled values are then mapped to colors by the transfer function, and blended together to
produce the final color for the frame buffer.
The actual mapping of values to color happens in the fragment shader of the Raycaster.
The transfer function is internally represented by a 1-D texture. To get the particular color,
the sampled value is first adjusted to the range from 0 to 1, and then used to sample the tex-
ture. Each element of this texture has the DirectX DXGI FORMAT R32G32B32A32 FLOAT
format. Thus, the returned value corresponds directly to the RGBA color components. The
transfer function is maintained by the TFEditor class (see section 5.2.2), and is fed to the
Raycaster as a ID3D10ShaderResourceView pointer by the SetTransferFunctionSRV()
method.
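Conceptually, the shader-side lookup described above can be pictured with the following CPU-side sketch. The names, the assumed [-1, 1] input range, and the nearest-neighbor lookup are simplifications for illustration; the real lookup happens in the fragment shader against the 1-D texture.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

// CPU-side stand-in for the Raycaster's fragment-shader lookup: the
// sampled 4th component is adjusted to [0,1] and used to index a 1-D
// RGBA table, which plays the role of the
// DXGI_FORMAT_R32G32B32A32_FLOAT texture.
using RGBA = std::array<float, 4>;

RGBA sampleTransferFunction(const std::vector<RGBA>& table, float value) {
    float t = std::clamp((value + 1.0f) * 0.5f, 0.0f, 1.0f);  // [-1,1] -> [0,1]
    std::size_t i = static_cast<std::size_t>(t * (table.size() - 1) + 0.5f);
    return table[i];  // nearest-neighbor lookup for simplicity
}
```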
To save time and resources, the cast rays are only sampled within the vector field
domain. To determine the entry and exit positions, the Raycaster runs invisible rendering
passes that render the bounding box of the domain (m Box) and save the entry
and exit depth values of each pixel covering the domain.
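The engine obtains these bounds on the GPU by rasterizing the domain’s bounding box, but conceptually the same entry and exit distances can be computed with a standard ray-box slab test. The following sketch uses assumed names and a unit-cube domain.

```cpp
#include <algorithm>
#include <cassert>

// Conceptual CPU-side equivalent of the Raycaster's entry/exit passes:
// a slab test against the axis-aligned [0,1]^3 field domain. For brevity
// it assumes all ray direction components are nonzero.
struct Ray { float ox, oy, oz, dx, dy, dz; };

// Returns true and fills tEntry/tExit if the ray hits the domain.
bool intersectDomain(const Ray& r, float& tEntry, float& tExit) {
    float t0 = 0.0f, t1 = 1e30f;
    const float o[3] = { r.ox, r.oy, r.oz };
    const float d[3] = { r.dx, r.dy, r.dz };
    for (int a = 0; a < 3; ++a) {
        float tA = (0.0f - o[a]) / d[a];  // hit with the slab's near plane
        float tB = (1.0f - o[a]) / d[a];  // hit with the slab's far plane
        t0 = std::max(t0, std::min(tA, tB));
        t1 = std::min(t1, std::max(tA, tB));
    }
    tEntry = t0; tExit = t1;
    return t0 < t1;
}
```

Sampling along the ray is then restricted to the parameter interval [tEntry, tExit], which is exactly the saving the rendering passes achieve.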
Probe Volume Rendering
An additional feature of the Raycaster, introduced with the new probe concept, is the
probe volume rendering. The probe volume rendering casts rays only through the region
of the field domain covered by the selected probe. This greatly speeds the process up,
and is very useful for large data sets or slower machines. This feature can also be used
to concentrate on particular regions within the field, and possibly to define different transfer
functions for different regions. Examples can be seen in section ’Visualizations’ (Figure
9.7).
5.2. The Editor
The Transfer Function Editor is the user interface component, which is responsible for
displaying the currently defined transfer function and providing a way to adjust it. It is a
stand-alone class, hosted within the ParticleTracerBase class. After handling a user action,
the Transfer Function Editor updates the texture containing the transfer function. The
updated texture is then set to the Raycaster.
5.2.1. The Transfer Function Control
The transfer function control is a user interface control. It is maintained by the Transfer
Function Editor, and it is incorporated within its user interface (UI). It displays a piecewise
linear function for each color channel. The Y coordinate of a point represents the color
channel’s value at this position in the texture. The value between points is a linear interpolation
of the two neighboring points’ Y coordinates. Each of the four color channels can
be divided by the user into as many linear segments as needed for a sufficiently good
approximation of the mapping.
Figure 5.3.: The transfer function control element, displaying a transfer function
This control element handles all mouse messages when the mouse is within its area.
Different user actions are supported. For example, the user can drag a control point to
change its position. A double click adds or removes a point, depending on the cursor’s
position. For extensive reference, see 5.4.3.
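The piecewise linear evaluation performed between control points can be sketched as follows. The flat vector-of-pairs representation and the function name are assumptions for this illustration; in the ParticleEngine the control points are stored by the LineStripe class (section 5.2.2).

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Evaluates one color channel of the transfer function at position x.
// pts holds (X, Y) control points, sorted by X and spanning [0, 1].
float evalChannel(const std::vector<std::pair<float, float>>& pts, float x) {
    for (std::size_t i = 1; i < pts.size(); ++i) {
        if (x <= pts[i].first) {
            // Linear interpolation of the two neighboring Y coordinates.
            float t = (x - pts[i - 1].first) / (pts[i].first - pts[i - 1].first);
            return pts[i - 1].second + t * (pts[i].second - pts[i - 1].second);
        }
    }
    return pts.back().second;
}
```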
5.2.2. The TFEditor Class
The TFEditor is responsible for wrapping the transfer function control inside a UI, and
providing extended transfer function editing capabilities. Also, it defines a method for
making the resulting transfer function available as a texture. This texture is then supplied
to the Raycaster.
Figure 5.4.: The TFEditor class
Figure 5.4 depicts a simplified diagram of the TFEditor class. First, there are the four
LineStripe members. They store the user-defined control points for each color channel of
the transfer function. When the user changes the control point configuration, the method
updateTexture() is called to update the m texTransFunc texture. It uses the LineStripe
class’s methods to reconstruct the values of the transfer function between two points
by linear interpolation of their Y coordinates.
The drawTransferFunction() method handles the actual drawing of the transfer function
control. The TFEditor uses this method to insert it into its UI (m transferFuncEditorUI).
This UI also contains additional interface elements that facilitate the editing of the transfer
function.
The method getTransferFunctionResource() returns the produced texture, which is then
set to the Raycaster’s m pTransferFuncSRV variable.
5.3. The RaycastController Class
The RaycastController class has similar functionality to the ParticleProbeController,
described in section 4.3.2. It generalizes and simplifies the management of the Raycaster
options, and the Transfer Function Editor.
Currently, the ParticleEngine exposes the Raycaster options directly on the main UI.
However, adding all the new options there would have caused too much clutter.
To prevent that, the RaycastController is introduced. It organizes all the UIs responsible
for the control over the volume rendering in one place. This requires not only the DVR
options, but also all other options, to be moved to the new UI.
Figure 5.5.: RaycastController organizes all volume rendering options in a new UI
Figure 5.5 depicts the new architecture involving the Raycaster and the Transfer Func-
tion Editor. The RaycastController exists parallel to the Raycaster as a member of the
ParticleTracerBase. Unlike the probe management structure, where the probes expose
their options by means of another class, the Raycaster instance is controlled through
its get/set accessor methods, which are available for all the Raycaster parameter variables
(m eRendermode, m fBorder, etc.).
To enable control over it, the Raycaster instance must be first registered to the controller.
This is done by the RegisterRaycaster() method. The registration supplies a pointer to the
Raycaster (m pRaycaster), which is then used by the UI to control its parameters.
The Transfer function Editor (m pTFEditor) is hosted by the RaycastController as a pri-
vate member. This allows the controller to integrate its UI with the other volume rendering
options, building a system of UIs. There are three UIs maintained by the controller - the
Raycast Controller UI m RaycastControllerUI, the Raycaster UI m RaycasterUI and the
Transfer Function Editor UI. The RaycastControllerUI is the main UI in this system. It
provides means to select between the two other sub-UIs. The RaycasterUI contains all the
controls for the Raycaster parameters.
Save and load functionality is also exposed by the RaycastController. Both methods,
ExportRaycastControllerSettings() and ImportRaycastControllerSettings(), are linked to
the Raycast Controller UI. Currently, these functions only support export and import of
the transfer function.
5.4. User Interface
As mentioned in the previous section, the RaycastController builds a system of three
UIs. The Raycast Controller UI is the main one, providing the means to select which of the
other two sub-UIs is displayed. The main UI can’t be displayed alone - one of the sub-UIs
is always visible just below it (see figure 5.6).
Figure 5.6.: Raycast Controller UI, and the Transfer Function Editor UI displayed below it.
The Raycast Controller UIs are hidden by default. The checkbox ’Show UI’ on the main
ParticleEngine interface is used to turn them on or off (see section 4.4.1).
Each of the three UIs will be discussed in detail in the next sections.
5.4.1. Raycast Controller UI Reference
The Raycast Controller UI is rendered in the bottom middle of the application window,
just above the currently visible sub-UI.
Figure 5.7.: Raycast Controller UI
Reference
Raycaster (radio button) Selects the Raycaster sub-UI.
Transfer function (radio button) Selects the Transfer Function Editor UI.
Save (button) Allows the user to save the current transfer function to a file.
Load (button) Loads transfer function from file into the Editor and updates the display.
5.4.2. Raycaster UI Reference
The Raycaster UI is displayed just below the Raycast Controller UI, if the ’Raycaster’ radio
button is selected.
Figure 5.8.: The Raycaster UI
Reference
Mode (drop-down) Selects the render mode for the volume renderer. There are three ren-
der modes available - ISO (Isosurfaces), DVR (Direct Volume Rendering) and Clearview.
For more information refer to section 2.1.3.
Step size (slider) Controls the quality of the image produced by the volume renderer.
A smaller step corresponds to higher quality, and a lower display update rate (the performance of
the ParticleEngine when volume rendering is activated).
Internally, this controls the length of the sampling step along a ray shot by the Raycaster.
A longer step means fewer samples, and consequently higher performance and
lower image quality.
Load Fragment (button) Allows the user to load custom shader code fragment. This is
the old way to set up a transfer function, and it is deprecated.
Custom (slider) The custom slider controls a shader variable, which is normally unused.
It is meant to be incorporated in a custom fragment code, to control some arbitrary option.
Controls for ISO render mode
ISO Value 1 (slider) Sets the ISO value for the isosurface. The volume renderer builds
an isosurface from all values higher than this setting.
Controls for DVR render mode
TF Scale (slider) Scales the transfer function range. Labels on the Transfer Function Edi-
tor UI show the actual range covered by the transfer function.
TF Offset (slider) Offsets the transfer function range. Labels on the Transfer Function
Editor UI show the actual range covered by the transfer function.
Controls for Clearview render mode
ISO Value 2 (slider) Sets the ISO value for the second isosurface.
Context scale (slider) Controls the blending of the two isosurfaces in the Clearview lense.
Increasing this value makes the second isosurface more visible. Decreasing it makes the first
isosurface more visible.
Size (slider) Controls the size of the Clearview lense.
Border (slider) Controls the border width of the Clearview lense.
Edge (slider) Sharpens the contours of the isosurfaces.
5.4.3. Transfer Function Editor UI Reference
The Transfer Function Editor UI is displayed just below the Raycast Controller UI, if the
’Transfer function’ radio button is selected. The UI wraps the transfer function control,
whose editing functions are discussed below.
Figure 5.9.: The Transfer Function Editor UI
Reference
R (button) Selects the red channel in the transfer function control, and brings it on top of
the others.
G (button) Selects the green channel in the transfer function control, and brings it on top
of the others.
B (button) Selects the blue channel in the transfer function control, and brings it on top
of the others.
Alpha (button) Selects the alpha channel in the transfer function control, and brings it on
top of the others.
These four buttons are made for easier access to the channels. Selecting channels with
the mouse is also possible (see ’transfer function control’ below).
Alpha Scale (slider) Scales the alpha channel Y axis down.
This is needed to be able to fine-tune the alpha component. As the alpha components of
the sampled values along a ray are accumulated, using many samples requires a very
small alpha value to be assigned to each sample.
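This effect can be made concrete with a small calculation. With standard compositing, the opacity accumulated along a ray after n samples of per-sample alpha a is 1 - (1 - a)^n, so a higher sample count (a smaller step size) demands a proportionally smaller per-sample alpha for a comparable result. The function name below is illustrative.

```cpp
#include <cassert>

// Opacity accumulated along a ray after nSamples samples, each
// contributing perSampleAlpha: 1 - (1 - a)^n.
float accumulatedAlpha(float perSampleAlpha, int nSamples) {
    float remaining = 1.0f;                    // transparency still left
    for (int i = 0; i < nSamples; ++i)
        remaining *= (1.0f - perSampleAlpha);  // each sample absorbs a fraction
    return 1.0f - remaining;                   // total accumulated opacity
}
```

For example, 100 samples with a per-sample alpha of only 0.1 already accumulate to a nearly fully opaque result, which is what the Alpha Scale slider compensates for.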
Reset (button) Resets the channels in the transfer function control to their default config-
uration. In this state, every channel has two points in the bottom left and top right corners
of the transfer function control area. These two points cannot be removed.
Axis labels just below the transfer function control area (labels) Show the range of
4th component values covered by the transfer function. This range is set up by the ’TF
Scale’ and ’TF Offset’ sliders in the Raycaster UI.
Cursor (labels) Show the current position of the cursor within the transfer function control
area. The X coordinate respects the range of the transfer function shown by the
labels described above. The Y coordinate respects the alpha slider value for the alpha
channel. The Y coordinates of the other channels lie in the [0, 1] range and are assumed
to be known.
Selected point coordinates (no name on the UI) (textboxes) These are the two text boxes
next to the ’Cursor’ labels. They show the exact position of the currently selected control
point in the transfer function control (the white point on figure 5.9). Like the cursor labels,
they respect the range of the transfer function. Additionally, they allow the user to input
exact coordinates of the selected point manually.
Transfer function control
The area, which encloses the transfer function, is referred to as the transfer function
control. It provides the user with additional editing capabilities.
Select a channel for editing Click anywhere on the channel’s line or control points. This
will bring the channel on top of the others and allow further manipulations.
Select and move a control point Click on a control point to select it, and then drag the
mouse to move it around. The exact coordinates are displayed below in the respective
Transfer Function Editor UI controls.
Add and remove control points Double-clicking in any empty space within the transfer
function control’s area adds a new control point to the currently selected channel.
Double-clicking on an existing point removes it.
6. Fourth-component Recalculation
As the power of the built-in volume renderer becomes more accessible with the introduction
of a user-definable transfer function, the 4th component visualization gains even more
relevance. The fourth-component recalculation feature creates the possibility of experimenting
with the 4th component at run-time. This, in combination with the Transfer
Function Editor, presents new ways to explore flow field characteristics.
This chapter presents the upgrades made to the ParticleEngine which enable the 4th
component to be recalculated at run-time. Three distinct recalculation techniques are
introduced:
• Calculating entirely new values for the 4th component, using the underlying vector
information. All field characteristics which can be derived for every cell by some
function of the vector at the cell’s position and its neighbors can be calculated using
this technique.
• Updating every 4th component value by means of a mathematical function accepting
one argument. This technique replaces each value with the result of the specified
function, given this value as its argument.
• Updating every 4th component value by means of a mathematical function accepting
two arguments. The second argument of the function for each grid position is
taken from the 4th component of another vector field, additionally loaded from a file.
Furthermore, arbitrary concatenations of these three techniques are possible. This can be
very useful when a function of two already present field characteristics yields
a new, interesting characteristic.
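The second and third techniques can be sketched on the CPU as simple per-cell updates of a scalar grid. In the ParticleEngine this work is done by pixel shaders from the m pCalc4thComponentEffect (section 6.3); the function names and the flat-vector grid representation here are assumptions for illustration only.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Unary update: replace each 4th component with f(value).
void applyUnary(std::vector<float>& w, const std::function<float(float)>& f) {
    for (float& v : w) v = f(v);
}

// Binary update: combine each 4th component with the 4th component of a
// second field at the same grid position.
void applyBinary(std::vector<float>& w, const std::vector<float>& other,
                 const std::function<float(float, float)>& f) {
    for (std::size_t i = 0; i < w.size(); ++i) w[i] = f(w[i], other[i]);
}
```

Concatenating the techniques simply means running such updates one after another on the same grid.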
6.1. ParticleTracer3D class upgraded
The new feature is directly integrated into the ParticleTracer3D class. The recalculation
system consists of two parts - the method which actually updates the 3-D texture
used to store the vector field, and the effect file defining all possible functions on the 4th
component.
Figure 6.1.: ParticleTracer3D is upgraded to support the 4th component recalculation.
The ParticleTracer3D class manages the 3-D texture which stores the vector field
(m pTexture3D). For updating the 4th component of this texture, the method
Calc4thComp VolumeTexture() was developed.
This method starts by creating a render target (m p4thCompRT) and binding it
to the GPU pipeline. The 3-D texture is bound by the variable m pC4CE VolumeTex.
The method processes the volume a single slice at a time. The slice currently being processed
is given by m pC4CE iSliceDepth. A pixel shader for the previously chosen function
from the m pCalc4thComponentEffect is applied to the current slice, rendering the new
values for the slice onto the render target.
After the slice has been processed, the render target contains all the cells from this slice
with the updated 4th component values. Then, a staging texture with the usage flag set to
D3D10 USAGE STAGING and CPU access set to D3D10 CPU ACCESS READ is used
to access these values, and subsequently update the respective slice of the 3-D texture.
Before executing the Calc4thComp VolumeTexture() method, the user must choose which
function should be applied. This is done through a UI, hosted in the ParticleTracerBase,
but managed by the ParticleTracer3D to display the 4th component recalculation functionality
(m TracerUI).
Normalizing the 4th component
Normalization of the 4th component is necessary to ensure that all the values will stay
in the range of the user interface controls, presented in previous chapters. The controls for
the 4th component aware operations, and for volume rendering are all only able to handle
values in the range -1 to 1.
The normalization is done by the Norm4thComp VolumeTexture() method. It is real-
ized as a special case of the Calc4thComp VolumeTexture() method, so it functions the
same way.
This method also saves the factor used for normalization. This is needed to denormalize
the field before running other functions on it, as some functions, like the logarithm,
would otherwise produce incorrect results.
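The normalize/denormalize round trip described above can be sketched as follows. This is a CPU-side illustration with assumed names; the actual normalization runs as a special case of Calc4thComp VolumeTexture() on the GPU.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Scales all 4th component values into [-1, 1] by the largest absolute
// value and returns that factor, so the field can be restored later.
float normalize4thComp(std::vector<float>& w) {
    float factor = 0.0f;
    for (float v : w) factor = std::max(factor, std::fabs(v));
    if (factor > 0.0f)
        for (float& v : w) v /= factor;
    return factor;  // saved for later denormalization
}

// Restores the original values before applying functions (such as the
// logarithm) that would give incorrect results on normalized data.
void denormalize4thComp(std::vector<float>& w, float factor) {
    for (float& v : w) v *= factor;
}
```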
Combining 4th components
The combining of the 4th components of the current field and an external field, loaded
on demand, is performed by the Combine4thCompWithExternal VolumeTexture() method.
This method is realized in the same way as Calc4thComp VolumeTexture(), differing
only in minor aspects: the method loads a new field from a file and binds it to the pipeline as
the variable m pC4CE VolumeTex compose.
6.2. Updating the Volume Texture
This section goes into more detail on how Calc4thComp VolumeTexture() functions.
It shows the most important code snippets to outline its algorithm.
First, the render target is created with the X and Y dimensions of the vector field. A
staging texture to read the data from the render target is also set up.
Listing 6.1: Calc4thComp VolumeTexture() method
m_p4thCompRT->Bind(false, false);
pRenderTargetTex = m_p4thCompRT->GetTex();
D3D10_TEXTURE2D_DESC desc;
pRenderTargetTex->GetDesc(&desc);
desc.Usage = D3D10_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
m_pd3dDevice->CreateTexture2D( &desc, NULL, &pStagingTexture );
For each mip level and depth level (corresponding to a slice), the chosen pass is executed.
Listing 6.2: Calc4thComp VolumeTexture() method
float ClearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_p4thCompRT->Clear(m_pd3dDevice, ClearColor);
m_pC4CE_iMipLevel->SetInt( iMIPLevel );
m_pC4CE_iSliceDepth->SetInt( iDepthLevel );
m_pRenderSliceTq->GetPassByName( sPassName.c_str() )->Apply(0);
m_pd3dDevice->Draw( 3, 0 );
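The iteration around this pass ("for each mip level and depth level") can be sketched as follows (a hypothetical outline; the function name and parameters are assumptions, and the slice count is assumed to halve per mip level as is usual for mipmapped volumes):

```cpp
#include <utility>
#include <vector>

// Hypothetical sketch of the loop structure around Listing 6.2: every
// (mip level, slice) pair of the volume gets one fullscreen pass.
// The number of slices halves with each mip level, clamped to >= 1.
std::vector<std::pair<int, int>> enumeratePasses(int depth, int numMipLevels) {
    std::vector<std::pair<int, int>> passes;   // (mip level, slice index)
    for (int mip = 0; mip < numMipLevels; ++mip) {
        int slices = depth >> mip;
        if (slices < 1) slices = 1;
        for (int slice = 0; slice < slices; ++slice)
            passes.emplace_back(mip, slice);
    }
    return passes;
}
```

For a volume of depth 4 with 3 mip levels this yields 4 + 2 + 1 = 7 passes in total.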
Then, the render target is copied to the staging texture, and subsequently mapped.
Listing 6.3: Calc4thComp_VolumeTexture() method
m_pd3dDevice->CopyResource(pStagingTexture, pRenderTargetTex);
D3D10_MAPPED_TEXTURE2D mappedTex;
V_RETURN( pStagingTexture->Map( D3D10CalcSubresource(0,0,1), D3D10_MAP_READ,
NULL, &mappedTex ) );
The mapped data is then uploaded to the 3-D texture.
Listing 6.4: Calc4thComp_VolumeTexture() method
D3D10_BOX box;
ZeroMemory(&box, sizeof(D3D10_BOX));
box.left = 0;
box.right = iSize.x;
box.top = 0;
box.bottom = iSize.y;
box.front = iDepthLevel;
box.back = iDepthLevel+1;
m_pd3dDevice->UpdateSubresource(m_pTexture3D, D3D10CalcSubresource(iMIPLevel
,0,1), &box, mappedTex.pData, mappedTex.RowPitch, 0);
6.3. Recalculation Fragment Shaders
Here, the internal structure of the m_pCalc4thComponentEffect is described. This effect
contains all the functions from which the user can choose. The chosen function is
translated into an effect technique pass name and supplied to the Calc4thComp_VolumeTexture()
method, which then runs the appropriate pixel shader on the 3-D texture values.
The vertex shader generates a single triangle covering the whole screen, so that the
fragment shader is invoked for every pixel of the render target.
Listing 6.5: The vertex shader used by the calculation of the new 4th components
float4 VS_FullScreenTri( uint id : SV_VertexID ) : SV_POSITION
{
return float4(
        ((id << 1) & 2) *  2.0f - 1.0f, // x (-1, 3,-1)
( id & 2) * -2.0f + 1.0f, // y ( 1, 1,-3)
0.0f,
1.0f );
}
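The index arithmetic can be verified on the CPU (a C++ re-implementation of the shader above, for illustration; the struct and function names are not part of the original code):

```cpp
// CPU re-implementation of the index arithmetic in VS_FullScreenTri:
// the three vertices (-1,1), (3,1) and (-1,-3) form one oversized
// triangle that fully covers the [-1,1] clip-space square, so no
// two-triangle quad is needed.
struct Pos { float x, y; };

Pos fullScreenTriVertex(unsigned id) {
    Pos p;
    p.x = float((id << 1) & 2) *  2.0f - 1.0f;   // -1, 3, -1
    p.y = float( id       & 2) * -2.0f + 1.0f;   //  1, 1, -3
    return p;
}
```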
Then, the chosen fragment shader is run on each pixel, producing the new 4th compo-
nents.
Listing 6.6: The fragment shader which stores the vector length at a particular position in
the 4th component
float4 PS_Calc4thComp_Length( float4 pos : SV_POSITION ) : SV_Target
{
float4 val = g_txVolume.Load( int4(pos.xy, g_iSliceDepth, g_iMipLevel) );
return float4(val.xyz, length(val.xyz) );
}
The shader first loads the current value from the 3-D texture for the particular slice and mip
level. It then returns the first three components unchanged, with the fourth set to the
length of the vector they represent.
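The same computation can be written on the CPU (a minimal sketch mirroring PS_Calc4thComp_Length; the struct and function names are assumptions):

```cpp
#include <cmath>

// CPU sketch of PS_Calc4thComp_Length: the xyz components pass
// through unchanged, and w is replaced by the Euclidean length of
// the vector (x, y, z).
struct Float4 { float x, y, z, w; };

Float4 calc4thCompLength(Float4 v) {
    v.w = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return v;
}
```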
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)

More Related Content

What's hot

Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAPArjun Aravind
 
The C Preprocessor
The C PreprocessorThe C Preprocessor
The C Preprocessoriuui
 
Micazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportMicazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportAnkit Singh
 
Francois fleuret -_c++_lecture_notes
Francois fleuret -_c++_lecture_notesFrancois fleuret -_c++_lecture_notes
Francois fleuret -_c++_lecture_noteshamza239523
 
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...Trevor Parsons
 
Augmented Reality Video Playlist - Computer Vision Project
Augmented Reality Video Playlist - Computer Vision ProjectAugmented Reality Video Playlist - Computer Vision Project
Augmented Reality Video Playlist - Computer Vision ProjectSurya Chandra
 
An Introduction to Computational Networks and the Computational Network Toolk...
An Introduction to Computational Networks and the Computational Network Toolk...An Introduction to Computational Networks and the Computational Network Toolk...
An Introduction to Computational Networks and the Computational Network Toolk...Willy Marroquin (WillyDevNET)
 
Team Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedTeam Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedAndrew Daws
 
Modlica an introduction by Arun Umrao
Modlica an introduction by Arun UmraoModlica an introduction by Arun Umrao
Modlica an introduction by Arun Umraossuserd6b1fd
 
Lower Bound methods for the Shakedown problem of WC-Co composites
Lower Bound methods for the Shakedown problem of WC-Co compositesLower Bound methods for the Shakedown problem of WC-Co composites
Lower Bound methods for the Shakedown problem of WC-Co compositesBasavaRaju Akula
 

What's hot (18)

Software guide 3.20.0
Software guide 3.20.0Software guide 3.20.0
Software guide 3.20.0
 
Rapidminer 4.4-tutorial
Rapidminer 4.4-tutorialRapidminer 4.4-tutorial
Rapidminer 4.4-tutorial
 
Project Dissertation
Project DissertationProject Dissertation
Project Dissertation
 
Scikit learn 0.16.0 user guide
Scikit learn 0.16.0 user guideScikit learn 0.16.0 user guide
Scikit learn 0.16.0 user guide
 
Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAP
 
MSc_Thesis
MSc_ThesisMSc_Thesis
MSc_Thesis
 
The C Preprocessor
The C PreprocessorThe C Preprocessor
The C Preprocessor
 
Intro photo
Intro photoIntro photo
Intro photo
 
Micazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project reportMicazxpl - Intelligent Sensors Network project report
Micazxpl - Intelligent Sensors Network project report
 
Francois fleuret -_c++_lecture_notes
Francois fleuret -_c++_lecture_notesFrancois fleuret -_c++_lecture_notes
Francois fleuret -_c++_lecture_notes
 
thesis
thesisthesis
thesis
 
Jung.Rapport
Jung.RapportJung.Rapport
Jung.Rapport
 
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
 
Augmented Reality Video Playlist - Computer Vision Project
Augmented Reality Video Playlist - Computer Vision ProjectAugmented Reality Video Playlist - Computer Vision Project
Augmented Reality Video Playlist - Computer Vision Project
 
An Introduction to Computational Networks and the Computational Network Toolk...
An Introduction to Computational Networks and the Computational Network Toolk...An Introduction to Computational Networks and the Computational Network Toolk...
An Introduction to Computational Networks and the Computational Network Toolk...
 
Team Omni L2 Requirements Revised
Team Omni L2 Requirements RevisedTeam Omni L2 Requirements Revised
Team Omni L2 Requirements Revised
 
Modlica an introduction by Arun Umrao
Modlica an introduction by Arun UmraoModlica an introduction by Arun Umrao
Modlica an introduction by Arun Umrao
 
Lower Bound methods for the Shakedown problem of WC-Co composites
Lower Bound methods for the Shakedown problem of WC-Co compositesLower Bound methods for the Shakedown problem of WC-Co composites
Lower Bound methods for the Shakedown problem of WC-Co composites
 

Viewers also liked

Alvará de olhão 15 de novembro de 1808
Alvará de olhão 15 de novembro de 1808Alvará de olhão 15 de novembro de 1808
Alvará de olhão 15 de novembro de 1808Ana Tendinha
 
Protección jurídica del software y la controversia doctrinal
Protección jurídica del software y la controversia doctrinalProtección jurídica del software y la controversia doctrinal
Protección jurídica del software y la controversia doctrinalDaniella Bedoya Ortega
 
махамбет макулов+тренажерный зал+идея
махамбет макулов+тренажерный зал+идеямахамбет макулов+тренажерный зал+идея
махамбет макулов+тренажерный зал+идеяЕлена Вайгандт
 
Classic Resume-Lisa, LPN-1
Classic Resume-Lisa, LPN-1Classic Resume-Lisa, LPN-1
Classic Resume-Lisa, LPN-1Lisa Bradford
 
финальная призентация Gold team
финальная призентация Gold teamфинальная призентация Gold team
финальная призентация Gold teamЕлена Вайгандт
 
Img 0946
Img 0946Img 0946
Img 0946cmckoy
 
Trust-Pro-Contracting-company-profile
Trust-Pro-Contracting-company-profileTrust-Pro-Contracting-company-profile
Trust-Pro-Contracting-company-profileManar Farid Koutrach
 
Turn Your Credit Card Processing Into An Investment
Turn Your Credit Card Processing Into An InvestmentTurn Your Credit Card Processing Into An Investment
Turn Your Credit Card Processing Into An InvestmentUpserve
 

Viewers also liked (12)

Alvará de olhão 15 de novembro de 1808
Alvará de olhão 15 de novembro de 1808Alvará de olhão 15 de novembro de 1808
Alvará de olhão 15 de novembro de 1808
 
CV
CVCV
CV
 
Protección jurídica del software y la controversia doctrinal
Protección jurídica del software y la controversia doctrinalProtección jurídica del software y la controversia doctrinal
Protección jurídica del software y la controversia doctrinal
 
M.Hedgepeth Resume 2016 1
M.Hedgepeth Resume 2016 1M.Hedgepeth Resume 2016 1
M.Hedgepeth Resume 2016 1
 
махамбет макулов+тренажерный зал+идея
махамбет макулов+тренажерный зал+идеямахамбет макулов+тренажерный зал+идея
махамбет макулов+тренажерный зал+идея
 
Classic Resume-Lisa, LPN-1
Classic Resume-Lisa, LPN-1Classic Resume-Lisa, LPN-1
Classic Resume-Lisa, LPN-1
 
финальная призентация Gold team
финальная призентация Gold teamфинальная призентация Gold team
финальная призентация Gold team
 
Img 0946
Img 0946Img 0946
Img 0946
 
Trust-Pro-Contracting-company-profile
Trust-Pro-Contracting-company-profileTrust-Pro-Contracting-company-profile
Trust-Pro-Contracting-company-profile
 
Turn Your Credit Card Processing Into An Investment
Turn Your Credit Card Processing Into An InvestmentTurn Your Credit Card Processing Into An Investment
Turn Your Credit Card Processing Into An Investment
 
Ciencia tecnologia, visión general
Ciencia tecnologia, visión generalCiencia tecnologia, visión general
Ciencia tecnologia, visión general
 
Slideshare
SlideshareSlideshare
Slideshare
 

Similar to Bachelor Thesis .pdf (2010)

Bast digital Marketing angency in shivagghan soraon prayagraj 212502
Bast digital Marketing angency in shivagghan soraon prayagraj 212502Bast digital Marketing angency in shivagghan soraon prayagraj 212502
Bast digital Marketing angency in shivagghan soraon prayagraj 212502digigreatidea2024
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisevegod
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile GraphicsJiri Danihelka
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign DetectionCraig Ferguson
 
eclipse.pdf
eclipse.pdfeclipse.pdf
eclipse.pdfPerPerso
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalGustavo Pabon
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalGustavo Pabon
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Artur Filipowicz
 
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...Ed Kelley
 
Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuJiaqi Liu
 
AUGUMENTED REALITY FOR SPACE.pdf
AUGUMENTED REALITY FOR SPACE.pdfAUGUMENTED REALITY FOR SPACE.pdf
AUGUMENTED REALITY FOR SPACE.pdfjeevanbasnyat1
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmKavita Pillai
 

Similar to Bachelor Thesis .pdf (2010) (20)

Bast digital Marketing angency in shivagghan soraon prayagraj 212502
Bast digital Marketing angency in shivagghan soraon prayagraj 212502Bast digital Marketing angency in shivagghan soraon prayagraj 212502
Bast digital Marketing angency in shivagghan soraon prayagraj 212502
 
Thesis
ThesisThesis
Thesis
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesis
 
Technical report
Technical reportTechnical report
Technical report
 
Mak ms
Mak msMak ms
Mak ms
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile Graphics
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign Detection
 
eclipse.pdf
eclipse.pdfeclipse.pdf
eclipse.pdf
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
Thesis_Report
Thesis_ReportThesis_Report
Thesis_Report
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
Milan_thesis.pdf
Milan_thesis.pdfMilan_thesis.pdf
Milan_thesis.pdf
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
 
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
 
BA1_Breitenfellner_RC4
BA1_Breitenfellner_RC4BA1_Breitenfellner_RC4
BA1_Breitenfellner_RC4
 
Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_Liu
 
Thesis_Prakash
Thesis_PrakashThesis_Prakash
Thesis_Prakash
 
AUGUMENTED REALITY FOR SPACE.pdf
AUGUMENTED REALITY FOR SPACE.pdfAUGUMENTED REALITY FOR SPACE.pdf
AUGUMENTED REALITY FOR SPACE.pdf
 
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based ParadigmIntegrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
Integrating IoT Sensory Inputs For Cloud Manufacturing Based Paradigm
 

Recently uploaded

Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Klinik kandungan
 
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...Bertram Ludäscher
 
Harnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxHarnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxParas Gupta
 
PLE-statistics document for primary schs
PLE-statistics document for primary schsPLE-statistics document for primary schs
PLE-statistics document for primary schscnajjemba
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteedamy56318795
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样wsppdmt
 
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangePredicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangeThinkInnovation
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...nirzagarg
 
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制vexqp
 
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...gajnagarg
 
Digital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareDigital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareGraham Ware
 
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptxThe-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptxVivek487417
 
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATIONCapstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATIONLakpaYanziSherpa
 
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...nirzagarg
 
Data Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdfData Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdftheeltifs
 
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling ManjurJual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjurptikerjasaptiker
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Researchmichael115558
 
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book nowVadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book nowgargpaaro
 
Aspirational Block Program Block Syaldey District - Almora
Aspirational Block Program Block Syaldey District - AlmoraAspirational Block Program Block Syaldey District - Almora
Aspirational Block Program Block Syaldey District - AlmoraGovindSinghDasila
 

Recently uploaded (20)

Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
 
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
 
Harnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxHarnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptx
 
PLE-statistics document for primary schs
PLE-statistics document for primary schsPLE-statistics document for primary schs
PLE-statistics document for primary schs
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
 
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangePredicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
 
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
 
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
 
Digital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareDigital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham Ware
 
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptxThe-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
 
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATIONCapstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
 
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Purnia [ 7014168258 ] Call Me For Genuine Models We...
 
Data Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdfData Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdf
 
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling ManjurJual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book nowVadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
 
Sequential and reinforcement learning for demand side management by Margaux B...
Sequential and reinforcement learning for demand side management by Margaux B...Sequential and reinforcement learning for demand side management by Margaux B...
Sequential and reinforcement learning for demand side management by Margaux B...
 
Aspirational Block Program Block Syaldey District - Almora
Aspirational Block Program Block Syaldey District - AlmoraAspirational Block Program Block Syaldey District - Almora
Aspirational Block Program Block Syaldey District - Almora
 

Bachelor Thesis .pdf (2010)

  • 1. FAKULT ¨AT F ¨UR INFORMATIK DER TECHNISCHEN UNIVERSIT ¨AT M ¨UNCHEN Bachelorarbeit in Informatik Analyzing the Velocity Fields in Cosmological Simulations using the ParticleEngine Dimitar Dimitrov
  • 2.
  • 3. FAKULT ¨AT F ¨UR INFORMATIK DER TECHNISCHEN UNIVERSIT ¨AT M ¨UNCHEN Bachelorarbeit in Informatik Analyzing the Velocity Fields in Cosmological Simulations using the ParticleEngine Analyse von Vektorfeldern kosmologischer Simulationen mit Hilfe der ParticleEngine Author: Dimitar Dimitrov Supervisor: Prof. Dr. R¨udiger Westermann Advisor: Kai B¨urger Date: August 16, 2010
  • 4.
  • 5. I assure the single handed composition of this bachelor thesis only supported by declared resources. M¨unchen, den 16. August 2010 Dimitar Dimitrov
  • 6.
  • 7. Abstract The cosmology group led by Prof. Avishai Dekel at the Hebrew University of Jerusalem (HU), Israel, in collaboration with German scientists in MPE, MPA and LSU in Munich, are running state-of-the-art cosmological, gravo-hydrodynamical simulations to study galaxy formation within the new standard cosmological model LCDM. Their goal is to try to understand the complex, three-dimensional flow pattern of their simulations, using visualization tools, developed by the visualization group at TUM led by Prof. Westermann. In particular, they have been using the ParticleEngine to visualize the velocity field in individual simulation snapshots. This thesis presents implemented upgrades to the ParticleEngine tool, which directly address the wishes of the astrophysicists at MPA and Jerusalem, and try to further enhance the usability of the tool for their research. vii
  • 9. Contents Abstract vii I. Introduction and Background 1 1. Introduction 3 1.1. Experimental Flow Visualization . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2. Computational Fluid Dynamics and Velocity Fields . . . . . . . . . . . . . . 4 1.3. Particle Tracing for 3-D Flow Visualization . . . . . . . . . . . . . . . . . . . 6 1.4. Volume Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2. The ParticleEngine 11 2.1. Basic Principles behind the ParticleEngine . . . . . . . . . . . . . . . . . . . . 11 2.1.1. Vector Field Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.1.2. GPU Particle Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.1.3. Additional Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3. Thesis Goals 17 II. Analyzing Velocity Fields in Cosmological Simulations 21 4. Multiple Sources of Particles 23 4.1. Initial Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.2. The Probe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 4.2.1. Source of Particles as a Stand-Alone Entity. . . . . . . . . . . . . . . . 25 4.2.2. Particles’ Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 4.2.3. Adjustable Options and ’Change Detectors’ . . . . . . . . . . . . . . . 26 4.2.4. 4th Component Aware Operations . . . . . . . . . . . . . . . . . . . . 27 4.2.5. Shared Shader Variables and Effect Pools . . . . . . . . . . . . . . . . 29 4.3. Probe Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.3.1. The ParticleProbeContainer Class . . . . . . . . . . . . . . . . . . . . 30 4.3.2. The ParticleProbeController Class . . . . . . . . . . . . . . . . . . . . 31 4.3.3. Multi-probe Management Architecture . . . . . . . . . . . . . . . . . 32 4.4. User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.4.1. 
Tracer Parameters UI Reference . . . . . . . . . . . . . . . . . . . . . . 33 4.4.2. Probe Parameters UI Reference . . . . . . . . . . . . . . . . . . . . . . 35 4.4.3. User Input Modes UI Reference . . . . . . . . . . . . . . . . . . . . . . 38 4.5. The Lense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 ix
4.6. Lense Management
4.7. Lense UI Reference
5. Transfer Function Editor
5.1. The Raycaster
5.2. The Editor
5.2.1. The Transfer Function Control
5.2.2. The TFEditor Class
5.3. The RaycastController Class
5.4. User Interface
5.4.1. Raycast Controller UI Reference
5.4.2. Raycaster UI Reference
5.4.3. Transfer Function Editor UI Reference
6. Fourth-component Recalculation
6.1. ParticleTracer3D Class Upgraded
6.2. Updating the Volume Texture
6.3. Recalculation Fragment Shaders
6.4. User Interface Reference
6.4.1. Supported Functions
7. Physical Units Display
7.1. Introduction
7.2. Implementation
7.3. Projecting the Physical Units on the Screen
7.4. User Interface Reference
8. Exporting Particles
8.1. Implementation
8.2. ParticleProbe's ExportParticles Method
8.2.1. Geometry Shader for Export
8.3. Multi-probe Export
8.3.1. Example Export
III. Results and Conclusion
9. Visualizations
9.1. Multi-probe Configurations
9.2. Direct Volume Rendering and Fourth-component Recalculation
10. Conclusion
Bibliography
List of Figures

1.1. Different methods for flow visualization
1.2. Images of CFD simulation visualizations [?]
1.3. A snapshot of a two-dimensional fluid with some of the velocity vectors shown [?]
1.4. On recent GPUs, textures can be accessed in the vertex units, and rendering can be directed into textures and vertex arrays. This and other features enable these GPUs to advect and render large amounts of particles [?]
1.5. Hurricane Isabelle visualized with transparent point sprites [?]
1.6. Different particle-based strategies used by the ParticleEngine to visualize 3D flow fields. (Left) Focus+context visualization using an importance measure based on helicity density and a user-defined region of interest. (Middle) Particles seeded in the vicinity of anchor lines show the extent and speed at which particles separate over time. (Right) Cluster arrows show regions of coherent motion [?]
1.7. Volume ray casting, a direct volume rendering technique for visualizing volume data. A 2D image is produced by shooting a ray from the eye position into the volume and accumulating the sampled values along this ray onto the 2D image plane [?]
1.8. Images of isosurfaces rendered from a volume data set
2.1. The ParticleEngine displaying a rough approximation of its underlying vector field
2.2. The advection-rendering cycle of the ParticleEngine
2.3. A user-defined probe injecting particles into the field; partially transparent point primitives are used for rendering
2.4. The ParticleEngine in Clearview mode; two isosurfaces blended together using a user-defined 'lense'
4.1. Example of an experiment utilizing a multi-probe configuration
4.2. Initial application architecture. The ParticleTracerBase class hosts all variables holding particle parameters; the advection and rendering steps are performed by methods of the class. The ParticleTracer3D class is responsible for managing the vector field data
4.3. ParticleProbe class and its ParticleProbeOptions member
4.4. Some change detectors of the ParticleProbeOptions class
4.5. Adjustable options controlling the 4th-component-aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data
4.6. Shared shader variables and effect in ParticleTracerBase, and the child advection effect in ParticleProbe
4.7. The container class responsible for probe management
4.8. The ParticleProbeController class, with methods for registering and unregistering a probe
4.9. Multi-probe management architecture
4.10. Tracer Parameters UI, found in the lower right corner of the application window
4.11. Probe Parameters UI
4.12. User Input Mode UI
4.13. The new lense is a special kind of probe; its parameters are also contained in the ParticleProbeOptions class
4.14. Experiment demonstrating the clip-plane functionality of the new lense. The particles from each probe get projected onto the lense's plane
4.15. The ParticleProbeContainer is also used to manage lenses
4.16. Probe Parameters UI - Lense selected
5.1. Direct volume rendering by the ParticleEngine, visualizing the vector length as a 4th component
5.2. The Raycaster class, represented in ParticleTracerBase
5.3. The transfer function control element, displaying a transfer function
5.4. The TFEditor class
5.5. RaycastController organizes all volume rendering options in a new UI
5.6. Raycast Controller UI, with the Transfer Function Editor UI displayed below it
5.7. Raycast Controller UI
5.8. The Raycaster UI
5.9. The Transfer Function Editor UI
6.1. ParticleTracer3D upgraded to support the 4th-component recalculation
6.2. The 4th Component Recalculation UI
7.1. Vector field domain, rendered with its physical coordinates and dimensions projected onto its bounding box
7.2. Upgrade for physical units
7.3. Physical Units Display UI
8.1. Multi-probe configuration classes extended to support particle export
8.2. Export file created by the ParticleEngine, opened in Microsoft Excel
9.1. Multiple small probes displaying streamlines
9.2. Two-probe configurations using 4th-component-aware injection. Below, the lifetime of the particles is set to one to show the modulated injection density
9.3. Multi-probe configurations. Above, using 4th-component-aware injection; below, using 4th-component-aware modulation
9.4. Three-probe configuration using 4th-component-aware modulation. The two big probes display sprites (above) and points (below); the third probe displays streamlines
9.5. Direct volume renderings of the temperature values encoded in the 4th component, using different transfer functions
9.6. Direct volume rendering of the curl (above) and divergence (below), calculated with the 4th-component recalculation feature
9.7. Direct volume rendering of the vector length (above) and divergence (below), focused in the selected probe's boundaries
Part I. Introduction and Background
1. Introduction

1.1. Experimental Flow Visualization

Fluid mechanics is the branch of the physical sciences concerned with how fluids behave at rest or in motion. Its uses are very broad: fluid mechanics examines the behavior of everything that is not solid, including liquids, gases, and plasma. This makes it one of the most important physical sciences in engineering [?]. As fluid mechanics is an active field of research with many unsolved or partly solved problems [?], methods of gaining insight into flow patterns play a major role in the pursuit of a deeper understanding of the topic.

Flow visualization is the study of methods to display dynamic behavior in liquids and gases. Most fluids (air, water, etc.) are transparent, making their flow patterns unrecognizable to the naked eye. Thus, techniques for flow visualization must be applied to enable observation [?, ?]. In experimental fluid dynamics, three approaches are most commonly used for this task [?, ?]:

• Surface flow visualization: This method reveals the flow streamlines in the limit as the flow approaches a solid surface. For example, applying colored oil to the surface of a wind tunnel model forms patterns as the oil responds to the surface shear stress (fig. 1.1(b)).

• Particle tracing: Particles, such as smoke or water bubbles, can be added to a flow to trace its motion. The particles can then be illuminated with a sheet of laser light in order to visualize a slice of a complicated fluid flow pattern. Assuming that the particles faithfully follow the streamlines of the flow, it is also possible to measure the flow velocity using the particle image velocimetry or particle tracking velocimetry methods (fig. 1.1(a)).

• Optical methods: Some flows reveal their patterns by way of changes in their optical refractive index. These are visualized by optical methods known as the shadowgraph, schlieren photography, and interferometry. More directly, dyes can be added to (usually liquid) flows to measure concentrations, typically employing the light attenuation or laser-induced fluorescence techniques (fig. 1.1(c)).
Figure 1.1.: Different methods for flow visualization. (a) Using air bubbles, generated by electrolysis of water, to trace water flows [?]. (b) Surface oil flow visualization [?]. (c) Shadowgram of the turbulent plume of hot air rising from a home-barbecue gas grill [?].

1.2. Computational Fluid Dynamics and Velocity Fields

Fluid mechanics problems can be mathematically complex. Most of the time, they are best solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach [?]. It extends the abilities of scientists to study flow by creating simulations of fluids under a wide range of conditions [?, ?].

Figure 1.2.: Images of CFD simulation visualizations [?]. (a) A computer simulation of high-velocity air flow around the Space Shuttle during re-entry. (b) A simulation of the Hyper-X scramjet vehicle in operation at Mach 7.

The fundamental basis of almost all CFD problems is the Navier-Stokes equations, which define any single-phase fluid flow. Removing the terms describing viscosity yields the Euler equations; further simplification, by removing the terms describing vorticity, yields the full potential equations. Finally, these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations; then the Euler equations were also implemented. Ultimately, the Navier-Stokes equations were incorporated in a number of commercial packages [?]. In many cases, however, the complexity of the problems is larger than even the most powerful computer systems of today can model.

The most fundamental consideration in CFD is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion. Such a mesh can be either irregular (for instance consisting of triangles in 2D, or pyramidal solids in 3D) or regular. There are also a number of alternatives that are not mesh-based, such as smoothed particle hydrodynamics, spectral methods, and lattice Boltzmann methods [?].

Several variations of field data can be generated by CFD experiments, based on their time-dependency. A static field is one in which there is only a single, unchanging velocity field. Time-varying fields may either have fixed positions with changing vector values, or both changing positions and changing vectors. These latter types are referred to as unsteady [?]. Throughout this thesis, only steady flow fields are considered. A regular 3-D grid of velocity vectors is assumed as the format in which vector field data is present. The primary reasons to choose such a format are its simplicity and the ability to process data stored in it in parallel. This velocity vector field describes mathematically the motion of a fluid. The length of the flow velocity vector at a particular position in the field corresponds to the flow speed at that position.

Figure 1.3.: A snapshot of a two-dimensional fluid with some of the velocity vectors shown [?].
1.3. Particle Tracing for 3-D Flow Visualization

Advances in experimental and CFD flow analysis are generating unprecedented amounts of fluid flow data from physical phenomena. The ever increasing computational power and the dedicated graphics hardware solutions now available enable new, advanced ways of visualizing this data in digital 3-D environments.

In experimental flow analysis, particle tracing has been established as a powerful technique to show the dynamics of fluid flows. Its main principles can easily be adapted for simulation in a digital environment, making it a potent technique for computer-aided fluid flow visualization.

Presented with experimental data, discretized as a finite grid of vector quantities describing the flow speed and direction at given coordinates, a particle system can be numerically advected to approximate a real-world experiment. Then, graphical primitives such as arrows, motion particles, particle lines, stream ribbons, and stream tubes can be produced to emphasize flow properties and to act as depth cues, assisting in the exploration of complex spatial fields.

Such a system is able to deal with large amounts of vector-valued information at interactive rates. When implemented to exploit the functionality of recent graphics hardware, millions of particles can be traced through the flow at interactive frame rates. This makes the exploration of complex fluid flows on consumer hardware possible and greatly extends its applicability [?].

Figure 1.4.: On recent GPUs, textures can be accessed in the vertex units, and rendering can be directed into textures and vertex arrays. This and other features enable these GPUs to advect and render large amounts of particles [?].
Figure 1.5.: Hurricane Isabelle visualized with transparent point sprites [?].

Importance-driven particle visualization

The capability to handle large systems of particles, however, quickly overextends the viewer due to the massive amount of visual information produced by this technique. With the help of importance-driven strategies, interesting structures in the flow can be revealed by reducing the visual information and allowing the viewer to concentrate on important regions.

In [?], a number of importance-driven visualization techniques are proposed. They make experiment exploration less prone to perceptual artifacts and minimize the visual clutter produced by frequent positional changes of large amounts of particles. Relevant structures in the flow are emphasized by integrating user-controlled and feature-based importance measures. These measures are used to control the shape, the appearance, and the density of particles in such a way that the user can focus on the dynamics in important regions while preserving context information.

The improvements for particle-based 3D flow visualization proposed by [?] are:

• Automatically adapting the shape, the appearance, and the density of particle primitives with respect to user-defined and feature-based regions of interest.

• Using vorticity, helicity density, and the finite-time Lyapunov exponent as importance measures. The finite-time Lyapunov exponent is particularly useful for the selection of characteristic trajectories in the flow, called anchor lines, and for visualizing only those particles that leave an anchor.

• Applying a clustering approach to determine regions of coherent motion in the flow. A sparse set of static cluster arrows emphasizes these regions. Cluster arrows are geometric primitives that represent regions of constant motion in the flow.

• Focus+context visualization. Within the focus region, the flow field is visualized at the highest resolution level, and contextual information is preserved by visualizing a sparse set of primitives outside this region.
Figure 1.6.: Different particle-based strategies used by the ParticleEngine to visualize 3D flow fields. (Left) Focus+context visualization using an importance measure based on helicity density and a user-defined region of interest. (Middle) Particles seeded in the vicinity of anchor lines show the extent and speed at which particles separate over time. (Right) Cluster arrows show regions of coherent motion [?].

Streamlines, streaklines, and pathlines

Streamlines, streaklines, and pathlines are field lines resulting from a given vector field description of a flow. They can serve as additional visual cues for flow patterns [?]. The different types of lines differ only when the flow changes with time, that is, when the flow is not steady.

• Streamlines are a family of curves that are instantaneously tangent to the velocity vector of the flow. They show the direction in which a fluid element will travel at any point in time.

• Streaklines are the locus of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point extends along a streakline.

• Pathlines are the trajectories that individual fluid particles follow. They can be thought of as a "recording" of the path a fluid element in the flow takes over a certain period. The direction the path takes is determined by the streamlines of the fluid at each moment in time.

• Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move.

1.4. Volume Rendering

Here, a short introduction to volume rendering is given. The ParticleEngine tool, described in chapter 2, applies this powerful technique for visualizing additional spatial properties of a flow field, and this is one of the aspects addressed throughout this thesis.
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set [?]. A typical 3D data set is a group of 2D slice images acquired by a CT, MRI, or MicroCT scanner. These are usually acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This makes the technique very suitable for the 3-D regular vector field grids used by the ParticleEngine.

To render a 2D projection of the 3D data set, the opacity and color of every voxel (a volumetric pixel in a 3-D texture) must be defined. This is usually done using an RGBA (red, green, blue, alpha) transfer function that maps an RGBA value to every possible voxel value. This transfer function is then applied with, for example, the volume ray casting technique to obtain the final 2D image. This way of visualizing volume data is called direct volume rendering.

Figure 1.7.: Volume ray casting is a direct volume rendering technique to visualize volume data. A 2D image is produced by shooting a ray from the eye position into the volume and accumulating the sampled values along this ray onto the 2D image plane [?].

A volume may also be viewed by extracting surfaces of equal values from the volume and rendering them as polygonal meshes. Such a surface is called an isosurface. Isosurfaces are used as a data visualization method in CFD, allowing engineers to study features of a fluid flow (gas or liquid) around objects, such as aircraft wings [?].
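The two building blocks described above, a transfer function and the accumulation of mapped samples along a ray, can be sketched on the CPU as follows. This is a minimal illustration of front-to-back compositing, not the ParticleEngine's shader code; the blue-to-red ramp transfer function is an arbitrary example.

```python
import numpy as np

def transfer_function(v):
    """Map a normalized scalar v in [0, 1] to an RGBA tuple.
    Hypothetical ramp: blue for low values, red for high, opacity rising with v."""
    v = float(np.clip(v, 0.0, 1.0))
    return np.array([v, 0.0, 1.0 - v, 0.3 * v])  # (R, G, B, A)

def composite_ray(samples):
    """Front-to-back alpha compositing of the scalar samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        r, g, b, a = transfer_function(v)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination: the ray is already opaque
            break
    return color, alpha

# Three samples along a hypothetical ray, front to back.
color, alpha = composite_ray([0.2, 0.5, 0.9])
```

A real ray caster evaluates this loop per pixel on the GPU, sampling the voxel values along the ray at a fixed step size.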
Figure 1.8.: Images of isosurfaces rendered from a volume data set. (a) A (smoothed) rendering of a data set of voxels for a macromolecule [?]. (b) An isosurface rendered by the ParticleEngine.
2. The ParticleEngine

The ParticleEngine is a particle system for interactive visualization of 3D flow fields on uniform grids. It exploits features of recent graphics hardware to advect particles on the graphics processing unit (GPU), save the new positions in graphics memory, and send them back through the GPU to obtain images in the frame buffer. This approach allows for interactive streaming and rendering of millions of particles and enables virtual exploration of high resolution fields in a way similar to real-world experiments. To provide additional visual cues, the GPU constructs and displays visualization geometry like particle lines and stream ribbons.

2.1. Basic Principles behind the ParticleEngine

2.1.1. Vector Field Data

The ParticleEngine operates on a 3-D uniform Cartesian grid, with each cell containing three to four floating point components. The discretized velocity field data for a particular experiment is loaded from a file into this grid. The first three components of every grid cell contain the speed (magnitude) and the direction of the flow at this cell's position. The 4th component of the grid can be utilized to store some scalar physical characteristic of the flow field, such as density or temperature.

Figure 2.1.: The ParticleEngine displaying a rough approximation of its underlying vector field.
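Conceptually, such a grid is a four-channel float volume: channels 0-2 hold the velocity vector, channel 3 the optional scalar. A minimal sketch of this layout (the grid size, the analytic shear flow, and the helper name are illustrative, not the ParticleEngine's file format):

```python
import numpy as np

# Hypothetical 16x16x16 grid with four float32 components per cell.
N = 16
field = np.zeros((N, N, N, 4), dtype=np.float32)

# Fill with a simple analytic shear flow and a random scalar for illustration.
z = np.arange(N, dtype=np.float32) / (N - 1)
field[..., 0] = z[None, None, :]  # v_x grows with z; v_y = v_z = 0
field[..., 3] = np.random.default_rng(0).random((N, N, N))  # scalar channel

def sample_cell(field, i, j, k):
    """Split one cell into its velocity part (3 floats) and its scalar part."""
    cell = field[i, j, k]
    return cell[:3], cell[3]

vel, scalar = sample_cell(field, 2, 3, 4)
```

On the GPU, the same data would live in a single RGBA 3-D texture, so one texture fetch delivers both the velocity and the scalar.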
2.1.2. GPU Particle Tracing

The ParticleEngine traces massless particles in a flow field over time, computing their trajectories by solving the ordinary differential equation of the field

∂x̃/∂t = ṽ(x̃(t), t)    (2.1)

with the initial condition x̃(t₀) = x₀. Here, x̃(t) is the time-varying particle position, ∂x̃/∂t is the tangent to the particle trajectory, and ṽ is an approximation to the real vector field v. As v is sampled on a discrete lattice, interpolation must be performed to reconstruct particle velocities along their characteristic lines.

Modern GPUs expose capabilities such as access to texture maps in the vertex units (see figure 1.4), programmable geometry and fragment shaders, and the ability to stream vertex data from the geometry-shader stage (or the vertex-shader stage if the geometry-shader stage is inactive) to one or more buffers in memory. The ParticleEngine computes intermediate results on the GPU, saves these results in graphics memory, and uses them again as input to the geometry units to render images in the frame buffer. This process requires application control over the allocation and use of graphics memory; intermediate results are 'drawn' into invisible buffers, and these buffers are subsequently used to present vertex data or textures to the GPU.

Figure 2.2.: The advection-rendering cycle of the ParticleEngine.

Initial particle positions are stored in the RGB color components of a floating point texture of size M×N. These positions are distributed regularly or randomly in the unit cube. In the alpha component, each particle carries a random floating point value that is uniformly distributed over a predefined range. This value is multiplied by a user-defined global lifetime to give each particle an individual lifetime. By letting particles die - and thus reincarnate - after different numbers of time steps, particle distributions very similar to those generated in real-world experiments can be simulated.
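The interpolation mentioned above is typically trilinear: the velocity at an arbitrary position is blended from the eight surrounding grid samples. A CPU sketch (grid spacing of one cell assumed; on the GPU this is what a linear texture fetch computes in hardware):

```python
import numpy as np

def trilerp(field, p):
    """Trilinearly interpolate a vector field sampled on a unit-spaced grid.
    field: (X, Y, Z, 3) array of velocity samples; p: position in grid coordinates."""
    i0 = np.floor(p).astype(int)
    i0 = np.clip(i0, 0, np.array(field.shape[:3]) - 2)  # stay inside the grid
    f = p - i0  # fractional offset inside the cell
    v = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                v += w * field[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return v

# A linear field is reproduced exactly by trilinear interpolation.
g = np.zeros((4, 4, 4, 3))
for i in range(4):
    g[i, :, :, 0] = i  # v_x = x
v = trilerp(g, np.array([1.5, 0.25, 2.0]))
```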
Injection

Particles are initially uploaded to the GPU in a particle buffer. The elements of this buffer are structures containing all the attributes needed to define a particle in the flow. Some of these parameters are: initial position, current position, direction for the next advection step, and lifetime. Another, empty particle buffer is also created. These two buffers form the ping-pong buffer system used for the advection step.

The user can interactively position and resize a 3-D probe that injects particles into the flow. All particles are initially born, and subsequently reincarnated, within this region. The birth of a particle consists of reading its starting position from the M×N texture described above and initializing its lifetime timer.

Advection

The advection step is performed by a geometry shader, using the RK3(2) integration scheme. The geometry shader updates the positions and the timer of each particle in the source ping-pong buffer and streams them out to the receiving ping-pong buffer, using the stream-output pipeline stage. This effectively moves all particles one time step further along the field. To complete the advection step, the two buffers are exchanged: the target becomes the source and vice versa.

Additionally, the advection step is responsible for performing a 'death test' for each particle. This test checks the lifetime of a particle and whether it is still within the vector field domain boundaries. According to the test results, the particle can be reinjected, following the steps specified in the 'Injection' section above.

Rendering

The particle buffer containing the new positions of the particles is then used to render them. This buffer is bound to the pipeline as a vertex buffer containing a list of points. Then, each particle's current position gets transformed with the momentary model-view-projection matrix and rendered onto the frame buffer.

Additionally, a number of user-adjustable options control which particles get displayed and how. Different display modes are available to aid the understanding of the field. Rendering one pixel for every particle can be used to simulate smoke-like matter distribution. To add more geometry to the scene, sprites can be rendered for each particle, using the geometry shader unit. Oriented sprites, for example, are very useful for visualizing the directional properties of the field.
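The injection-advection cycle above can be sketched on the CPU as follows. This is a hedged illustration, not the shader code: a midpoint (RK2) step stands in for the RK3(2) scheme, a constant drift stands in for the field lookup, and the buffer names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity(p):
    # Hypothetical constant drift along +x, standing in for the field lookup.
    return np.array([1.0, 0.0, 0.0]) * np.ones_like(p)

def advect(src, dt, probe_min, probe_max):
    """One advection step over a whole particle buffer, plus the 'death test'.
    src: dict with 'pos' (N, 3) and 'age' (N,) arrays; returns the target buffer."""
    mid = src['pos'] + 0.5 * dt * velocity(src['pos'])   # midpoint (RK2) step
    pos = src['pos'] + dt * velocity(mid)
    age = src['age'] + dt
    # Death test: outside the unit cube, or past the lifetime -> reinject in probe.
    dead = (age > 1.0) | np.any((pos < 0.0) | (pos > 1.0), axis=1)
    n_dead = int(dead.sum())
    pos[dead] = probe_min + rng.random((n_dead, 3)) * (probe_max - probe_min)
    age[dead] = 0.0
    return {'pos': pos, 'age': age}

buf = {'pos': rng.random((1000, 3)), 'age': rng.random(1000)}
for _ in range(50):  # each call plays the role of one ping-pong swap
    buf = advect(buf, 0.05, np.array([0.1, 0.1, 0.1]), np.array([0.2, 0.2, 0.2]))
```

On the GPU, the returned buffer would be the stream-output target, and the swap is just an exchange of buffer bindings rather than a copy.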
Figure 2.3.: A user-defined probe injecting particles into the field. Partially transparent point primitives are used for rendering.

2.1.3. Additional Features

Importance-driven Particle Visualization

The ParticleEngine also incorporates all the importance-driven particle visualization techniques described in section 1.3. This includes the importance measures, the user-defined focus+context regions, the anchor lines, the cluster arrows, and the different kinds of streamlines.

Volume Rendering

As already mentioned in the Vector Field Data section (2.1.1), the grid on which the ParticleEngine operates has an additional 4th component in each of its cells. Ignoring the three other components, this can be interpreted as a volume data set, subjecting it to volume rendering techniques. This makes it possible to visualize any flow characteristic loaded into this 4th component. Direct volume rendering and isosurfaces are both supported (see section 1.4).

Additionally, the so-called Clearview mode is available within the ParticleEngine. The Clearview mode enables the user to compare two isosurfaces by rendering them and then blending them according to user-specified options.
Figure 2.4.: The ParticleEngine in Clearview mode. Two isosurfaces blended together using a user-defined 'lense'.
3. Thesis Goals

After using the ParticleEngine tool on individual snapshots of velocity field data of interest, the research group members acknowledged its potential, but also noticed that, in order for it to be seriously considered for the project, upgrades had to be realized. The primary goal of this project is to address the wishes of the astrophysicists and upgrade the tool to best suit their needs. In the process, the application's existing functionality should be optimized and partly redesigned to work seamlessly with the new features. The following outlines the stated upgrade wishes and gives more details about each one:

Implement a way to start particles from more than one box and color them separately.

Currently, the user can inject particles into the flow field by positioning a cuboid-shaped source of particles, called a probe, within the field's spatial domain. The particles' start positions are distributed inside the probe's boundaries according to a predefined scheme, either randomly or uniformly. These positions are then used to initially inject the particles, or to reincarnate those that have left the domain or reached the end of their lifetime. The user can move and resize the probe and observe how the particles move through the flow. Different parameters can be changed, such as the number of particles, their color, or their size. Having just one probe places several limitations on the user:

• Only one region for particle injection can be defined. The user is not able to simultaneously observe two or more features at different locations within the field, for example, two particular streams within the flow. Making the probe large enough is not a solution, because the particles are distributed over the whole probe and will eventually have too much space in between to adequately represent the movement of the mass, while increasing the number of particles would distract the user from the important regions. The ParticleEngine's feature injection capability can be useful in this case. It populates the probe according to the field's 4th component, given upper and lower thresholds. It is, however, computationally expensive, making movement and resizing of the probe problematic. Also, useful values for the 4th component must be preloaded, making interactive experimentation impossible.

• There cannot be more than one type of particles at a time in the flow. A 'type' of particle is defined as a combination of the different parameters controlling how the particles are injected, advected, and displayed. These include injection mode, display mode, color, and lifetime, to name a few.

• All particles start their advection simultaneously.

These limitations should be addressed and alleviated. Chapter 4 is devoted to this topic.
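The feature injection just described, and the density-modulated seeding it generalizes to, can both be pictured as rejection sampling against the 4th component. A hedged CPU sketch (the Gaussian scalar field and all names are illustrative; the real lookup samples the grid's scalar channel):

```python
import numpy as np

rng = np.random.default_rng(2)

def scalar(p):
    """Hypothetical 4th-component lookup: high values near the cube centre."""
    return np.exp(-8.0 * np.sum((p - 0.5) ** 2, axis=-1))

def seed(n, accept):
    """Draw n start positions in the unit cube whose density follows the
    acceptance function (rejection sampling)."""
    out = np.empty((0, 3))
    while len(out) < n:
        cand = rng.random((4 * n, 3))
        out = np.vstack([out, cand[accept(cand)]])
    return out[:n]

# Threshold-based feature injection: hard borders at the threshold values.
thresholded = seed(1000, lambda p: (scalar(p) > 0.3) & (scalar(p) <= 1.0))

# Density-modulated injection: seed density proportional to the local scalar,
# giving a smooth transition instead of a hard border.
weighted = seed(1000, lambda p: rng.random(len(p)) < scalar(p))

near = np.sum(np.linalg.norm(weighted - 0.5, axis=1) < 0.25)
```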
Populate the starting positions of particles weighted by the local 4th-component value of the field.

In addition to enabling multiple probes, the feature injection mode of the probe, already mentioned above, should be adapted and optimized. The present implementation depends on upper and lower thresholds to decide where the particles must be born. This introduces sharp borders between the regions meeting the threshold condition and the others. Being able to modulate the density of the particles according to the 4th component will allow regions of interest to have more particles than others, and will make for smooth transitions between them.

Furthermore, the particle density should update with respect to probe movement and resizing. To avoid hampering the exploration, this update should happen at interactive rates. This is addressed in chapter 4, in particular in section 4.2.4.

Improve/generalize ways in which to define the transfer function of the 4th component inside the ParticleEngine.

The ray casting volume rendering technique integrated in the ParticleEngine provides high quality images of the field's 4th component. This makes it a powerful tool for gaining insights into the field's characteristics, thus aiding the understanding of flow patterns. In particular, the direct volume rendering capability should be addressed here. For an introduction to volume rendering, see section 1.4.

At the moment, the ParticleEngine has no general way of defining a transfer function for direct volume rendering. The main interface allows the user to load a shader code fragment from a file, which must include particular functions. The integrated ray caster then uses these functions to map values to colors. This has several shortcomings:

• It does not support interactive exploration. The user must guess which transfer function will yield usable results. For every different transfer function, there must be a code fragment saved as a file.
• It requires programming skills, and also knowledge of the names and arguments of the functions called by the ray caster.

• Loading files with an inappropriate format may produce unexpected results. This requires extra care when creating the code fragments.

To unleash the full potential of the built-in volume renderer, a new way of defining a transfer function should be introduced. It should improve the exploration of the 4th component by supplying an intuitive user interface. Chapter 5 is dedicated to this goal.
Allow snapshots of particles to be exported to file. For each particle store original and final location, flag particles that have left the domain.

The use of multiple specialized tools for different purposes, to produce results not obtainable by any of the tools alone, is always a very important consideration and sometimes even unavoidable in large projects. To make the ParticleEngine more applicable in such a synergetic environment, a way of exporting a snapshot of the particles currently in the flow should be implemented. For further analysis or more advanced visualization, the particle start positions and current positions should be written out to a file on disk. To widen the interoperability, a standardized and simple format for the file should be chosen. This is discussed in chapter 8.

Add physical coordinates to the display on screen. Do the same for the location and size of the box that acts as source of particles.

Without some way of specifying real physical coordinates and scale, the usefulness of the visualized data for scientific research quickly reaches its limit. The user should be able to set the coordinates and the dimensions of the field’s domain in physical units specified by the user. A user interface should be provided to facilitate this task. Additionally, visual cues should be displayed on-screen, hinting at the physical dimensions and scale of the data visualized.

Furthermore, the user should be presented with a way to define the exact position and dimensions of each probe currently in the scene, in the physical coordinates specified. The functionality for exporting particles, described above, should also be made aware of these units. The solution to this requirement is discussed in chapter 7.
Allow calculation of properties of velocity fields and use them as the ”fourth component”

As the power of the built-in volume renderer becomes more accessible with the introduction of a user-definable transfer function, the 4th component visualization gains even more relevance. As of now, if the user wants to visualize some field characteristic, such as temperature or density, it must be encoded as the 4th component and loaded with the vector field.

A flow field can have many characteristics delivered by the experimental data. This requires many data sets, which only differ in their 4th components, to be managed. As a consequence, frequent reloads of vector field data must be taken into consideration. Furthermore, some important characteristics are derivable from the vector field itself, or through some function of the present ones.

Realizing a way to recalculate the data stored in the 4th component dynamically at run-time will make the ParticleEngine much more versatile and reduce the burden of managing many data sets. As a first step, calculation of local properties of the field, such as
divergence and curl, should be implemented. Then, additional functions, like applying some function to the present 4th component values, or loading a 4th component from an external field and using it to change the present values in some way, can be considered. This requirement is addressed in chapter 6.
Part II.

Analyzing Velocity Fields in Cosmological Simulations
4. Multiple Sources of Particles

This chapter addresses the first two upgrade requirements described in the section ’Thesis Goals’ (3). In order to alleviate the drawbacks pointed out there, changes to the application’s architecture were realized. User-friendly interaction with multi-probe configurations has been ensured by a new user interface. Also, already present functionality which is considered useful for the project has been seamlessly integrated into the new environment.

First, an introduction to the current architecture is made. This is intended to make clear what a source of particles is from the application’s standpoint. Then, a new definition of the probe is given, and technical details about the architecture changes enabling the support for multi-probe configurations are presented. Finally, the new user interface is discussed in detail.

Figure 4.1.: Example of an experiment utilizing a multi-probe configuration
4.1. Initial Architecture

Here, the application’s architecture, as it was at the beginning of this project, is introduced. As it served as the base for the development, insight into it will help to fully understand the reasons behind the performed application changes, described in the next sections.

The ParticleTracer Object

The ParticleTracer is the main internal object of the ParticleEngine. It consists of the ParticleTracerBase class and an extending class. In this project, only the ParticleTracer3D is considered. Figure 4.2 shows its structure as a class diagram. Only some members and methods are displayed for readability.

Figure 4.2.: Initial application architecture. The ParticleTracerBase class hosts all variables holding particle parameters. The advection and rendering steps are performed by methods of the class. The ParticleTracer3D class is responsible for managing the vector field data.

The ParticleTracerBase class hosts all the variables holding particle parameters, such as start positions (m_pParticleStartposTex), injection mode (m_StartPosMode), initial count (m_iStartParticles), lifetime (m_iMaxLifetime) and color (m_vPartColor). The methods performing the advection (AdvanceParticles()) and rendering (RenderParticles()), and the ping-pong buffers (m_pParticleDrawFrom and m_pParticleStreamTo) used by them, are also integral to the class.

The ParticleTracer3D class extends the ParticleTracerBase class by adding functionality for loading and manipulating the 3D vector field data. The data is loaded into the m_pTexture3D variable by the CreateVolumeTexture() method.
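The ping-pong buffer scheme used by the advection step can be illustrated with a minimal CPU-side sketch. The buffer names mirror the members described above, but the velocity lookup is replaced by a trivial stand-in; this is an illustration of the pattern, not the actual GPU stream-out implementation:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Each advection step reads particle data from one buffer and streams
// the updated data into the other; the buffers are then swapped so the
// freshly written data becomes the next step's input.
struct PingPongBuffers {
    std::vector<float> drawFrom;   // stands in for m_pParticleDrawFrom
    std::vector<float> streamTo;   // stands in for m_pParticleStreamTo

    void advance(float step) {
        streamTo.resize(drawFrom.size());
        for (std::size_t i = 0; i < drawFrom.size(); ++i)
            streamTo[i] = drawFrom[i] + step;  // stand-in for the velocity lookup
        std::swap(drawFrom, streamTo);         // ping-pong: output becomes next input
    }
};
```

On the GPU, the swap is free (only the bindings of the two buffers are exchanged), which is why this pattern is the standard way to advect particles with stream output.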
Next, an extension of this architecture is presented. The driving idea behind the new design is to make it more flexible by allowing easier extensibility. To achieve this, the architecture must be modularized. As a first step, the particle source is defined as a stand-alone entity. Then, additional structures are introduced to simplify the management of multiple particle sources.

4.2. The Probe

A source of particles within the new architecture is referred to as a probe. This name is adopted from the current implementation, which also calls the source of particles a probe. However, this term is to be extended and more clearly separated from other ParticleTracer functionality.

4.2.1. Source of Particles as a Stand-Alone Entity

The basic features and visual appearance of the ’old’ probe are to be kept. These are as follows:

• a cuboid-shaped region which defines possible start positions for particle injection;

• it can change its position and dimensions along the main axes. No rotation is supported;

• all the particles injected from a probe are of the same ’type’. The type of particles is the collection of all adjustable particle parameters and is defined in the next section.

A new class called ParticleProbe (see figure 4.3) is created to represent the probe entity. It encapsulates all variables and defines all the functionality of a particle source. This includes, most notably, the advection, rendering and shader management functions.

Figure 4.3.: ParticleProbe class and its ParticleProbeOptions member
The ParticleProbe class has all its parameters encapsulated by the m_ppOptions member of the ParticleProbeOptions class (this class is explained in detail in the next section). The probe contains the particles’ start positions (m_v4aParticlesStartParams) and the ping-pong buffers (m_pParticlesContainer and m_pParticlesContainerNew), and it is responsible for the advection of its own particles with the methods AdvectParticles() and RenderParticles().¹

4.2.2. Particles’ Type

As already mentioned above, a probe can inject only particles of the same type. This section defines what a particle type is.

A particle type is the collection of all parameters which can be adjusted to get different particle behavior or visual appearance. To separate the variables representing such parameters from those controlling internal processes, such as the advection and rendering, the class ParticleProbeOptions is introduced (see figure 4.3). Encapsulating all adjustable options in their own class enables the probe to expose only these for outside manipulation, ensuring its internal integrity.

Adjustable particles’ options

The following list summarizes all the adjustable options:

• Injection Mode (InjectionMode);
• Particles color / opacity (vProbeColor and fParticlesAlpha);
• Particles count (iParticlesCount);
• Particles lifetime (iParticlesLifetime);
• Particles size (for sprites) (fSpriteSize);
• Particles Display Mode (PVisMode);
• 4th component aware operations options (see section 4.2.4).

Along with the other adjustable particle options, the ParticleProbeOptions class also exposes the matrix used to save the position and dimensions of the probe (mProbeWorldMatrix), thus allowing outside manipulation of it.

4.2.3. Adjustable Options and ’Change Detectors’

The ParticleProbeOptions class also introduces the concept of change detectors.
The change detectors are functions which indicate a change of an option since it was previously checked. This allows the probe to keep track of the adjustable options and react to changes that require adjusting internal structures.

¹ Additionally, there are variables and methods for trajectories, which are the new version of streamlines, and are no longer understood by the application as a different display mode, but can coexist with the particles within a probe. However, for simplicity, these are not going to be further considered.
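The change-detector pattern can be sketched in a few lines of C++. The option and method names below are illustrative stand-ins; the real ParticleProbeOptions class manages many more options, but each follows the same scheme of a set accessor raising a flag that ResetDetectors() later clears:

```cpp
// Minimal sketch of the change-detector pattern: every set accessor
// records that the option changed; the probe reads the detector, reacts,
// and then resets all detectors in one call.
class Options {
public:
    void SetLifetime(int v) {
        m_lifetime = v;
        m_lifetimeChanged = true;   // detector raised by the set accessor
    }
    int  Lifetime() const        { return m_lifetime; }
    bool LifetimeChanged() const { return m_lifetimeChanged; }

    // Invoked by the probe after it has reacted to all pending changes.
    void ResetDetectors() { m_lifetimeChanged = false; }

private:
    int  m_lifetime = 100;
    bool m_lifetimeChanged = false;
};
```

The benefit of the pattern is that expensive reactions (such as recreating a particle buffer) happen only once per frame, after all option changes have been collected.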
Figure 4.4.: Some change detectors of the ParticleProbeOptions class

Figure 4.4 shows some of the available detectors. These are triggered if the respective property’s set accessor was used. After reading the detectors, the probe is responsible for resetting them by invoking the ResetDetectors() method.

4.2.4. 4th Component Aware Operations

This section focuses on the second upgrade requirement from the ’Thesis Goals’ section (3), enabling modulation of particle density according to a particular field characteristic. Moreover, it describes the integration of some of the existing importance-driven visualization techniques into the new probe concept. This is referred to as the 4th component aware parameter modulation.

The use of the 4th component of the vector field data to procedurally adjust a particle’s parameters is referred to as a 4th component aware operation. These operations are applied on a per-particle basis, but their threshold parameters are defined per probe. There are currently two types of 4th component aware operations - the 4th component aware injection and the 4th component aware modulation.

4th component aware particle injection

When this mode is enabled, only particles whose positions satisfy the injection requirement are born and subsequently advected. This injection requirement is defined by the three adjustable options - scale (f4thCompAwareInjectionScale), min (f4thCompAwareInjectionMin) and max (f4thCompAwareInjectionMax) - and the 4th component of the field at the particle’s position. The scale, min and max options are used to adjust the interval of 4th component values which is of interest for a particular experiment. This is done according to the following formula:

min ∗ scale < 4th component < max ∗ scale (4.1)

For all particles whose 4th component values are greater than max ∗ scale, the chance of birth is 100%. If the value is lower than min ∗ scale, the chance is zero.
The chance for particles lying in the specified range is calculated with HLSL’s smoothstep function, which uses Hermite interpolation to return a number between the specified lower and upper bounds.
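The birth-chance computation of formula 4.1 can be reproduced on the CPU with an equivalent of HLSL’s smoothstep intrinsic. The function names and parameters below follow the thesis text; the smoothstep body itself is the standard Hermite polynomial 3t² − 2t³:

```cpp
// CPU equivalent of the HLSL smoothstep intrinsic: returns 0 below 'lo',
// 1 above 'hi', and a smooth Hermite ramp 3t^2 - 2t^3 in between.
float smoothstep(float lo, float hi, float x) {
    float t = (x - lo) / (hi - lo);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t);
}

// Chance of birth for one particle, given the probe's scale/min/max
// options and the field's 4th component at the particle's position
// (formula 4.1): 0 below min*scale, 1 above max*scale, smooth in between.
float birthChance(float comp4, float scale, float min, float max) {
    return smoothstep(min * scale, max * scale, comp4);
}
```

In the actual ParticleEngine this test runs in the geometry shader during the advection phase, so the density adapts in real time as the probe moves.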
Figure 4.5.: Adjustable options controlling the 4th component aware operations, and their effect variables. Additionally, there is a flag to enable or disable them according to the format of the vector field data

The effect of this method of injection is that it allows increasing the particle density in regions of interest.

The 4th component aware injection options act in the advection phase. The geometry shader responsible for particle birth ignores all particles that fail the described test. Thus, only born particles are considered by subsequent advection steps. This method of particle injection density modulation is very fast, because the decisions are made on the GPU. This allows for real-time dynamic density adjustment as the probe changes its position and dimensions. However, in most cases many particles must be transferred to the GPU while only a handful are used in the advection process. For an example, see 9.2.

4th component aware parameter modulation

In this mode, some of the parameters of the particles are changed according to the 4th component value at each particle’s momentary position. Which parameters get modulated depends on the particle’s display type - for points, the 4th component modulates the particle’s opacity; for sprites, the size and opacity.

The modulation exposes the same adjustable options and conforms to the same formula as described for the 4th component aware injection. If the 4th component value lies below scale ∗ min, the opacity / size is set to zero. If it is higher than scale ∗ max, the opacity / size is set to the maximal value given in the ParticleProbeOptions of the respective probe.

The 4th component aware parameter modulation happens during the rendering phase. This means it does not require the advection to be activated to see its effects. Also, no internal changes are caused by its options, making it very fast and interactive. For an example, see 9.4.
4.2.5. Shared Shader Variables and Effect Pools

Every probe advects and renders its own particles. This requires it to manage the effects and shader variables needed for that itself.

Figure 4.6.: Shared shader variables and effect in the ParticleTracerBase, and the child advection effect in ParticleProbe

The advection phase depends on many probe-specific parameters, and it also needs access to the vector field data. This data, and other resources, must be shared between probes, otherwise inappropriate resource usage would defeat the multiple-probe concept. This problem is solved with the help of effect pools and shared shader variables.

Figure 4.6 shows the employed structure. The ParticleTracer manages an effect pool (m_pProbeParticlesAdvectionEffectPool), and the vector field data (m_pAEP_VolumeTex_shared), along with other shared resources, is a shared variable managed by this pool. Every probe then extends this pool with its own effect file (m_pParticlesAdvectionChildEffect), which manages the probe-specific variables.

The rendering effect (m_pProbeParticlesRenderingEffect), on the other hand, is the same for all probes and is hosted by the ParticleTracer. The probe gets a pointer to the respective effect to create internal pointers to the technique and shader variables it needs to set.

4.3. Probe Management

Multiple probes in the ParticleEngine are now possible by creating and maintaining an array of ParticleProbe instances. For this purpose, the vector type from C++’s Standard Template Library can be used. Utilizing a vector to store the probe instances has the advantage of being flexible and extensible. This vector will then be a member of the ParticleTracer.

However, as the probe concept grows more complex, handling this vector becomes increasingly difficult. This was the reason to incorporate the vector into a container class devoted to the task of managing the probes.
Another aspect of probe management is exposing the available adjustable options of all currently instantiated probes for manipulation by the user. Also, supplying a user-friendly
and intuitive interface to these options is vital to making the concept practical. This is done by a controller class, which is responsible for ensuring the accessibility and easy manipulation of multi-probe configurations.

4.3.1. The ParticleProbeContainer Class

Figure 4.7 depicts a simplified diagram of the ParticleProbeContainer class.

Figure 4.7.: The container class, responsible for probe management.

The ParticleProbeContainer is a member of the ParticleTracerBase. It hosts the vector with probe instances as a private member (m_vParticleProbes), and defines an interface for accessing it. The adding (AddProbe()) and removing (RemoveProbe()) of probes are the basic methods of the class. Additionally, methods for saving and loading probe configurations, for exporting probe particles, and for resetting them (ResetProbesParticles()) are supported.

Saving and Loading Probe Configurations

The methods ExportProbesLayout() and ImportProbesLayout() implement saving and loading of probe configurations. Every probe exposes two methods allowing the import (ImportProbeOptions()) and export (ExportProbeOptions()) of its options. The container then wraps these methods to facilitate the process for multiple probes.

The SaveProbeLayout() and LoadProbeLayout() methods of the ParticleTracerBase are triggered by the main user interface (see section 4.4.1). They display a dialog for file selection, and subsequently call the container’s methods to perform the operations needed to save or recreate a probe configuration.
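The idea behind ExportProbesLayout()/ImportProbesLayout() can be sketched as a simple round-trip serialization of each probe’s options. The field set and the plain-text format below are invented for illustration; the thesis does not specify the actual file format:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Minimal stand-in for the adjustable options of one probe.
struct ProbeOptions {
    int   particlesCount;
    int   particlesLifetime;
    float spriteSize;
};

// Serialize all probes, one line per probe (sketch of ExportProbesLayout()).
std::string exportLayout(const std::vector<ProbeOptions>& probes) {
    std::ostringstream out;
    for (const ProbeOptions& p : probes)
        out << p.particlesCount << ' ' << p.particlesLifetime
            << ' ' << p.spriteSize << '\n';
    return out.str();
}

// Recreate the probe list from the serialized form (sketch of ImportProbesLayout()).
std::vector<ProbeOptions> importLayout(const std::string& text) {
    std::vector<ProbeOptions> probes;
    std::istringstream in(text);
    ProbeOptions p;
    while (in >> p.particlesCount >> p.particlesLifetime >> p.spriteSize)
        probes.push_back(p);
    return probes;
}
```

Note that, as stated above, only the options are saved; the momentary particle positions are deliberately not part of the layout, so loading a configuration restarts the advection from scratch.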
4.3.2. The ParticleProbeController Class

The ParticleProbeController class concentrates on presenting an intuitive user interface giving access to the adjustable options of all the probes currently instantiated. Drawing bounding boxes around the probes is also the job of the controller.

Figure 4.8.: The ParticleProbeController class, methods for registering and unregistering a probe.

Internally, the controller maintains a vector of ParticleProbeOptions instances (m_vPPOptions). This implies that all options which should be changeable by the user must be present in the ParticleProbeOptions class.

To use the controller to manage a probe, the probe must be registered first. The registration process adds a pointer to the ParticleProbeOptions instance of the probe to the controller’s vector, thus allowing the controller to display the interface for adjusting its options.

The probe’s method RegisterToController() is used to register a probe. This method takes a pointer to the controller class as an argument. It then uses this pointer to call the RegisterParticleProbe() method of the controller class, supplying it with a pointer to its ParticleProbeOptions instance. This ensures that only a controller class instance can get a pointer to the adjustable options of the probe.

Unregistering a probe is the reverse process. It identifies the probe which has requested to unregister, and removes it from the options vector.
4.3.3. Multi-probe Management Architecture

The processes of registering and unregistering a probe are abstracted by the container class. When it is initialized, it gets a pointer to the controller class (m_pPPC). Afterwards, the methods AddProbe() and RemoveProbe() are responsible for creating a new probe and registering it to the controller, respectively unregistering and destroying it (figure 4.9).

Figure 4.9.: Multi-probe management architecture.
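The interplay of container and controller can be condensed into a short C++ sketch. The classes here are heavily simplified stand-ins (the real classes carry far more state), but the ownership and registration flow mirrors the description above: the container owns the probes, while the controller only holds non-owning pointers to their options:

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

struct ProbeOptions {};  // stand-in for ParticleProbeOptions

// The controller only sees the adjustable options of registered probes.
class Controller {
public:
    void Register(ProbeOptions* o) { m_options.push_back(o); }
    void Unregister(ProbeOptions* o) {
        m_options.erase(std::remove(m_options.begin(), m_options.end(), o),
                        m_options.end());
    }
    std::size_t Count() const { return m_options.size(); }
private:
    std::vector<ProbeOptions*> m_options;  // non-owning, like m_vPPOptions
};

struct Probe { ProbeOptions options; };  // stand-in for ParticleProbe

// The container owns the probes and wires them to the controller,
// mirroring AddProbe() / RemoveProbe().
class Container {
public:
    explicit Container(Controller* c) : m_controller(c) {}
    Probe* AddProbe() {
        m_probes.push_back(std::make_unique<Probe>());
        m_controller->Register(&m_probes.back()->options);
        return m_probes.back().get();
    }
    void RemoveProbe(std::size_t i) {
        m_controller->Unregister(&m_probes[i]->options);
        m_probes.erase(m_probes.begin() + i);
    }
private:
    Controller* m_controller;  // stand-in for m_pPPC
    std::vector<std::unique_ptr<Probe>> m_probes;
};
```

Keeping ownership in one place (the container) while the controller works with borrowed pointers is what makes the unregister-before-destroy ordering in RemoveProbe() essential.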
4.4. User Interface

The user interface is a very important part of every software solution; the usability of a tool is characterized by the power of its user interface. To make the process of manipulating multiple probes and creating probe configurations a pleasant experience, a new, redesigned interface is proposed. It tries to improve on the already present solution by making it more compact and intuitive. This section explains the new user interface’s control elements in more detail.

4.4.1. Tracer Parameters UI Reference

The main UI, or the Tracer Parameters UI, is found in the bottom-right corner of the main application window. It is created and maintained within the ParticleTracer object.

Figure 4.10.: Tracer Parameters UI, found in the lower right corner of the application window.

Bounding Box (checkbox) Turns on/off the bounding box for the domain containing the vector field data.

Advect (checkbox) Turns on/off the advection of particles. If the advection is off, the advection phase is omitted, thus pausing the particles at their current positions. In this case the adjustable probe options which act in the advection phase, such as injection mode and lifetime, will not have any effect. Some will reset the particle buffer, causing all of a probe’s particles to disappear. All options acting in the rendering phase, such as probe color, display type and sprite size, will have their usual effect. For a thorough explanation of the different particle options, refer to section 4.4.2.

Step Scale (slider) Increases / decreases the step length for each advection step. The advection phase is permanently repeated. Every time, each particle is moved a small amount in the direction pointed to by the vector of the field at the particle’s position. The size of this movement is controlled by this slider. Increasing its value causes faster advection, but lower precision for the next particle position.
Probes ’+’ (button) Adds a new probe to the current probe configuration.

Probes ’-’ (button) Removes a probe from the current probe configuration. The probe must be selected in the probe interface first; otherwise nothing happens. The probe interface is described in section 4.4.2.

Probes ’Sv’ (button) Saves the current probe configuration to file. This saves the positions and all adjustable options of all probes currently in the scene. The current particle positions are not saved.

Probes ’Ld’ (button) Loads a probe configuration from file. Already saved probe configurations can be loaded from file. Each probe’s position and dimensions, and also all the adjustable options, are loaded. The particle buffers are recreated from scratch for each probe and the advection starts anew from the probe.

Lense ’+’ (button) Adds a new lense to the current probe configuration. For detailed information about lenses, see section 4.5.

Lense ’-’ (button) Removes a lense from the current probe configuration. For detailed information about lenses, see section 4.5.

Particles Reset (button) Forces the particles of all probes to be reborn and start advection from their initial positions within their probe.

Particles Export (button) Exports the positions of the selected probe’s particles currently in the field to a file. To export all particles at once, deselect all probes (for detailed information about the export functionality, see chapter 8).

Sprites ’Load 1’ (button) Loads a sprite from an image file or a geometry definition file. This sprite is then used to display particles in the ’Sprites’ and ’Oriented sprites’ display modes.

Sprites ’Load 2’ (button) Loads a second sprite from an image file or a geometry definition file.

depth info (checkbox) Indicates whether depth information is to be generated when loading sprites from a geometry definition file.

Render Volume (checkbox) Turns on volume rendering. Volume rendering is discussed in more detail in section 2.1.3.
Show UI (checkbox) Turns on the volume rendering user interface. It gives access to all of the ray caster settings. The new volume rendering UI of the ParticleEngine is discussed in section 5.4.
4.4.2. Probe Parameters UI Reference

The ParticleProbeController’s UI, or the Probe Parameters UI, is displayed in the upper-left corner of the application window. It is maintained by the ParticleProbeController object.

The probe interface presents the user with control elements to modify all available adjustable probe options. Additionally, it allows the user to select a probe, turn the bounding boxes on and off, and displays information about the selected probe’s current position and dimensions. The interface is divided into a static and a dynamic part. The dynamic part changes with the selection of a probe. Thus, to be able to see all the elements described below, a probe must first be selected.

Figure 4.11.: Probe Parameters UI

Bounding Box (checkbox) Turns on/off the probes’ bounding boxes. Apart from presenting the user with a visual cue of each probe’s position and dimensions, the bounding boxes allow mouse manipulations. For detailed information about mouse control and user input modes, refer to section 4.4.3.

Only selected probe (checkbox) When active, only the bounding box of the selected probe is displayed. This allows easier probe manipulation in complex multi-probe configurations.

Select probe (drop-down menu) This menu contains all the probes in the current probe configuration. It is used to select a probe for editing its options. Selecting a probe makes all probe-specific control elements visible, and changes its bounding box color to yellow. Selecting ’None’ will deselect all probes.

Probe position and dimensions (label) Displays the current probe position and dimensions within the domain’s [0,1] range and in the given physical units (see chapter 7). Three sliders represent the three axes along which the probe can be moved and resized. The radio buttons at the top choose what these sliders control - position or dimensions.
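Displaying the probe position both in the domain’s [0,1] range and in physical units amounts to a simple affine mapping per axis. This is a sketch of the idea only; the variable names are illustrative and the actual unit handling is described in chapter 7:

```cpp
// Map a normalized coordinate in [0,1] to physical units, given the
// user-specified physical origin and extent of the domain along one axis.
float toPhysical(float normalized, float physOrigin, float physExtent) {
    return physOrigin + normalized * physExtent;
}

// Inverse mapping: physical units back to the domain's [0,1] range.
float toNormalized(float physical, float physOrigin, float physExtent) {
    return (physical - physOrigin) / physExtent;
}
```

For example, with a domain starting at 10 physical units and spanning 20, a probe at normalized position 0.5 sits at physical coordinate 20; the same pair of functions converts slider values for all three axes.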
Injection Mode (drop-down menu) Defines how the particles are initially placed within the probe.
Random will distribute the particles pseudo-randomly inside the probe; the uniform placement will place them at a uniform distance from each other.

R G B (Particles color) (sliders) Sets the color of the probe and of the injected particles.

4th comp. aware inj. (checkbox and sliders) The checkbox enables the 4th component aware injection. The sliders control the chance of particles to be born according to the 4th component of the field. In particular: scale scales the field down (dividing its values by the scale value), and the min/max sliders define the range for a 0 (0%) and 1 (100%) chance of birth, respectively. For the exact formula, see 4.1.

Display particles (checkbox) Turns on/off the particles. This allows detailed control over which probe inserts particles into the field. When turned off, the advection phase for the probe is omitted, improving the performance of the ParticleEngine.

Particles type (no name on the UI) (drop-down menu) Selects which particle type is to be displayed. There are currently three particle types supported - points, sprites and oriented sprites. When choosing one of the two sprite types, a performance hit should be expected, because additional geometry is being introduced.

Count (slider) How many particles will be initially injected into the flow. Activating 4th component aware injection will reduce this number upon injection, as some particles will eventually not meet the birth condition.

Lifetime (slider) The number of advection steps before the particle gets reborn. Exiting the domain restarts the particle regardless of this setting.

Opacity (slider) Controls the transparency of the particles. Different particle types are rendered with different blending modes, so this setting behaves differently according to the chosen particle type.

Sprite size (slider) Meaningful only when displaying sprites. Gives the size of each sprite.

4th comp. parameter mod.
(checkbox and sliders) Activates the 4th component parameter modulation. The sliders have the same function as described for the 4th component aware injection above. Here, however, it is not the chance of birth that is calculated, but the value of the size and opacity parameters. As the size is meaningless for the Points particle type, only the opacity is modulated for this type. The minimum value of a parameter is 0, and the maximum is the value currently set by the respective slider.
Display trajectories (checkbox) Turns on/off the display of trajectories. Trajectories are particles preemptively advected for a given number of steps. Lines are then used to connect the particles from one step to the next, creating a trajectory in the vector field.

Trajectory type (no name on the UI) (drop-down menu) Currently, only streamlines are supported, as only steady flows are considered in this project.

Trajectory Opacity (slider) The transparency of the trajectory lines.

Trajectories Count (slider) How many particles are traced simultaneously.

Trajectory Lifetime (slider) How many preemptive advection steps are used to construct the trajectories.
4.4.3. User Input Modes UI Reference

The different user input modes control the camera and mouse behavior. There are currently three different modes, which assign different functionality to the mouse and the keyboard.

Figure 4.12.: User Input Mode UI

View This mode uses the model-view camera. The right and left mouse buttons are assigned to rotate the camera around the field domain. The scroll wheel zooms in or out. This mode can be quickly selected by pressing the F5 key.

Probe Edit The same as the view mode, with the difference that the left mouse button is used for probe selection and manipulation rather than camera rotation.

In this mode the left mouse button events are handled by the probe controller. The bounding boxes of the probes are used to catch the cursor. Thus, to enable this functionality, the bounding boxes must be turned on. Upon hovering the mouse over a bounding box, its sides turn yellow to indicate which side is currently catching the cursor. A single click selects a probe and allows further manipulation.

To change the position of the probe, hover the mouse over a side, and then drag. The probe will be repositioned in the plane containing the picked side. To resize the probe, hover over a side and Ctrl+drag. The probe’s dimensions will be changed along the plane containing the picked side.

The two operations can be seamlessly combined. By pressing Ctrl, the dragging operation changes over to Ctrl+dragging without the need to release the mouse button, and vice versa.

First Person This mode uses the first-person camera. The keys W, A, S, D are used to move the camera around. By dragging the mouse, the user can look around.
4.5. The Lense

The lense concept is taken from the current version of the ParticleEngine and is a part of the importance-driven visualization techniques. A lense allows the user to concentrate on important regions within the vector field domain in a complex probe configuration. This is achieved by manipulating the opacity/size of the particles based on their distance to a user-defined lense center.

The old lense definition consists of a center point and a radius. The particles fade away as their distance to the lense’s center increases. The position of the lense center is set by the user with the mouse, using the scroll wheel to adjust the depth in view space. The radius is adjusted through the user interface.

In the new architecture, the lense has been extended and abstracted as a stand-alone entity. It is now defined as a special kind of probe (fig. 4.13). This way, all the controls used to manipulate the probes, including the mouse, can be directly used on the lense, too. This allows for a consistent and simpler user interface, and for more precise placement and resizing.

Figure 4.13.: The new lense is a special kind of probe.

The lense’s parameters are also contained in the ParticleProbeOptions class. A new set of adjustable options is defined specifically for lenses. These options can be used to simulate the old lense’s spherical shape behavior, and the clip-plane functionality.

Lense Redefined

• As a special kind of probe, it encloses a cuboid-shaped region within the vector field domain.
• Fading of the particles is controlled for each axis separately. It depends on the particle's distance from the lense's edge along a particular axis.

• Clip-plane functionality is simulated by the lense's projection capabilities. The projection can be turned on for one of the three main planes. Then, the particles are projected onto the respective plane going through the lense's center.

(a) Multi-probe configuration with a lense turned off. (b) Multi-probe configuration with a lense turned on.

Figure 4.14.: Experiment demonstrating the clip-plane functionality of the new lense. The particles from each probe get projected onto the lense's plane.
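The per-axis fading could be sketched on the CPU as follows. This is an illustrative reconstruction, not the engine's actual code: below a minimal distance from the lense's edge the particle keeps its full opacity, above a maximal distance it vanishes, and in between a Hermite (smoothstep) curve interpolates, as described in the lense UI reference (section 4.7).

```cpp
#include <cassert>

// Hypothetical sketch of the per-axis lense fade. The particle keeps full
// opacity below minDist, vanishes above maxDist, and in between a Hermite
// (smoothstep) polynomial interpolates. Names are illustrative only.
float LenseFade(float dist, float minDist, float maxDist)
{
    if (dist <= minDist) return 1.0f;  // opacity/size taken from probe parameters
    if (dist >= maxDist) return 0.0f;  // opacity/size set to 0
    float t = (dist - minDist) / (maxDist - minDist);
    return 1.0f - t * t * (3.0f - 2.0f * t);  // Hermite interpolation
}
```

The per-axis factors could then, for example, be multiplied to obtain the final opacity/size modifier of a particle.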
4.6. Lense Management

The same method used for managing probes is also applied to lenses. There is no limit on how many lenses can be created in a probe configuration.

Figure 4.15.: The ParticleProbeContainer is also used to manage lenses.

The probe container maintains a separate vector only for lenses (m_vLenses). The reason for this is that the rendering phase in the presence of lenses requires special care. The container also has special functions for adding (AddLense()) and removing (RemoveLense()) lenses.

The probe controller, on the other hand, doesn't differentiate between lenses and probes when registering. Upon selection, it checks dynamically whether the selected object is a lense or a probe by calling the IsLense() method of the ParticleProbe class. Then, it presents the user with the appropriate interface.

Rendering phase in the presence of lenses

The rendering phase is managed by the particle container. It calls each probe's OnRender() method sequentially. If a lense is introduced into the configuration, its OnRender() method must be called first, because the lense uses it to set the shader variables which then manipulate the appearance of the particles. For every additional lense, all the probes must be rendered once more, because each lense can set different rendering parameters.
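This render loop could be sketched as follows, with hypothetical stand-in types; the real OnRender() methods set shader variables and issue draw calls, which is mocked here by a call counter:

```cpp
#include <cassert>
#include <vector>

// Mock stand-in for probes and lenses: OnRender() only counts invocations.
struct Renderable {
    int renderCalls = 0;
    void OnRender() { ++renderCalls; }
};

// Sketch of the rendering phase in the presence of lenses: each lense's
// OnRender() is called first to set the shader variables, and all probes are
// rendered once per lense, since each lense can set different parameters.
void RenderWithLenses(std::vector<Renderable>& probes,
                      std::vector<Renderable>& lenses)
{
    if (lenses.empty()) {
        for (auto& p : probes) p.OnRender();  // plain sequential rendering
        return;
    }
    for (auto& l : lenses) {
        l.OnRender();                          // lense configures the shaders
        for (auto& p : probes) p.OnRender();   // probes drawn under this lense
    }
}
```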
4.7. Lense UI Reference

Figure 4.16.: Probe Parameters UI - Lense selected

The lense UI is also displayed by the ParticleProbeController. Only the dynamic part below the position/dimensions sliders changes when a lense is selected. Thus, only this part will be discussed here.

Reference

Turn on/off (checkbox) Allows the lense to be turned on or off.

Fading ranges (sliders) For every axis there are two sliders, controlling the minimal, respectively maximal, distance from the lense's edge along this axis. If a particle's position is at a distance bigger than the maximum, its opacity/size is set to 0. If this distance is smaller than the minimum, its opacity/size is taken from the probe parameters. In between, an interpolated value is calculated using Hermite interpolation.

Projection (radio buttons) Enables projection onto the specified plane going through the lense's center. The projection enables the lense to act as a clip-plane (a feature included in the old version of the ParticleEngine). The fading ranges also apply in this mode; thus, modifying them will affect how many particles get projected onto the specified plane.

Clip plane presets To further facilitate the setup of a clip-plane lense, these three buttons automatically set the projection, and the position and dimensions of the lense. The dimensions along the two selected axes are maximized, and along the third axis the lense is made very thin.
5. Transfer Function Editor

This chapter describes the improvements made to the ParticleEngine in the way a transfer function is defined for its direct volume rendering capability. The transfer function dictates how the integrated ray caster should map values, sampled from the field, to colors for the frame buffer. This corresponds to the third project goal in chapter 3 'Thesis Goals'.

Currently, the only way to assign colors to the sampled values is to load a shader code fragment from file, which then gets injected directly into the ray caster effect. In the next sections, a new user interface component is introduced, with the primary purpose of the precise, yet interactive setup of a transfer function. This greatly enhances the usability and the user experience, as it enables the user to try out many combinations while keeping track of the results, since the feedback is seen immediately.

Figure 5.1.: Direct volume rendering by the ParticleEngine, visualizing the vector length as a 4th component.
5.1. The Raycaster

The volume rendering capabilities of the ParticleEngine are provided by the Raycaster object. It is responsible for generating the images from the data loaded in the 4th component. The Raycaster supports three different modes of rendering - direct volume rendering (DVR), isosurfaces and Clearview (see section 2.1.3). Only DVR is discussed in this chapter, as it alone needs a transfer function.

Figure 5.2.: The Raycaster class, represented in the ParticleTracerBase.

The Raycaster is a member of the ParticleTracerBase. The m_bDoRaycasting flag controls whether the Raycaster is on or off, that is, whether it renders an image or not. The default is off. When the vector field data is initially loaded, it is transferred to the Raycaster by means of the m_Volume variable. This variable is responsible for interpreting the vector field data as a volume texture, to be used for volume rendering.

When it is activated, the Raycaster creates images by casting rays through the field domain and sampling the volume texture (m_Volume) along these rays at discrete steps. The sampled values are the 4th-component values of the vector field. In DVR mode, these sampled values are then mapped to colors by the transfer function, and blended together to produce the final color for the frame buffer.

The actual mapping of values to colors happens in the fragment shader of the Raycaster. The transfer function is internally represented by a 1-D texture. To get the particular color, the sampled value is first adjusted to the range from 0 to 1, and then used to sample the texture. Each element of this texture has the DirectX DXGI_FORMAT_R32G32B32A32_FLOAT format. Thus, the returned value corresponds directly to the RGBA color components. The transfer function is maintained by the TFEditor class (see section 5.2.2), and is fed to the Raycaster as an ID3D10ShaderResourceView pointer by the SetTransferFunctionSRV() method.
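The DVR sampling described above can be illustrated with a minimal CPU-side sketch. The 1-D transfer-function texture is replaced by a plain array, and the blending uses standard front-to-back compositing; all names and the compositing details are assumptions, not the engine's actual shader code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// CPU sketch of the DVR ray loop: each sampled 4th-component value is adjusted
// to [0, 1], used to look up a color in the 1-D transfer function table, and
// the colors are blended front-to-back into the final pixel color.
RGBA CompositeRay(const std::vector<float>& samples,
                  const std::vector<RGBA>& transferFunc,
                  float valMin, float valMax)
{
    RGBA out{0.0f, 0.0f, 0.0f, 0.0f};
    for (float v : samples) {
        float t = (v - valMin) / (valMax - valMin);      // adjust to [0, 1]
        t = std::min(std::max(t, 0.0f), 1.0f);
        std::size_t idx =
            static_cast<std::size_t>(t * (transferFunc.size() - 1));
        RGBA c = transferFunc[idx];                      // sample the 1-D table
        out.r += (1.0f - out.a) * c.a * c.r;             // front-to-back blend
        out.g += (1.0f - out.a) * c.a * c.g;
        out.b += (1.0f - out.a) * c.a * c.b;
        out.a += (1.0f - out.a) * c.a;
        if (out.a > 0.99f) break;                        // early ray termination
    }
    return out;
}
```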
To save time and resources, the cast rays are only sampled within the vector field domain. To determine the entry and exit positions, the Raycaster runs invisible rendering passes that render the bounding box of the domain (m_Box) and save the entry and exit depth values for each pixel covering the domain.

Probe Volume Rendering

An additional feature of the Raycaster, introduced with the new probe concept, is probe volume rendering. Probe volume rendering casts rays only through the region of the field domain covered by the selected probe. This greatly speeds the process up, and is very useful for large data sets or slower machines. Also, this feature can be used to concentrate on particular regions within the field, and possibly to define different transfer functions for different regions. Examples can be seen in section 'Visualizations' (Figure 9.7).

5.2. The Editor

The Transfer Function Editor is the user interface component responsible for displaying the currently defined transfer function and providing a way to adjust it. It is a stand-alone class, hosted within the ParticleTracerBase class. After it handles a user action, the Transfer Function Editor updates the texture containing the transfer function. The updated texture is then set to the Raycaster.

5.2.1. The Transfer Function Control

The transfer function control is a user interface control. It is maintained by the Transfer Function Editor, and it is incorporated within its user interface (UI). It displays a piecewise linear function for each color channel. The Y coordinates of the points represent the color channel's value at this position in the texture. The value in-between points is a linear interpolation of the two neighboring points' Y coordinates. Each of the four color channels can be divided by the user into as many linear segments as needed for a sufficiently good approximation of the mapping.
Figure 5.3.: The transfer function control element, displaying a transfer function

This control element handles all mouse messages when the mouse is within its area. Different user actions are supported. For example, the user can drag a control point to change its position. A double click will add or remove a point, depending on the cursor's position. For an extensive reference, see section 5.4.3.
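How the piecewise linear channels could be evaluated when filling the 1-D transfer-function texture is sketched below; the structure and function names are illustrative, not the actual LineStripe implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point { float x, y; };  // x: position in the texture [0, 1], y: channel value

// Sketch of evaluating one channel of the transfer function control: between
// two neighboring control points, the value is the linear interpolation of
// their Y coordinates. pts is assumed sorted by x with endpoints at 0 and 1.
float EvalChannel(const std::vector<Point>& pts, float x)
{
    for (std::size_t i = 1; i < pts.size(); ++i) {
        if (x <= pts[i].x) {
            float t = (x - pts[i - 1].x) / (pts[i].x - pts[i - 1].x);
            return pts[i - 1].y + t * (pts[i].y - pts[i - 1].y);
        }
    }
    return pts.back().y;
}
```

Evaluating all four channels at every texel position would then yield the RGBA entries of the 1-D texture.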
5.2.2. The TFEditor Class

The TFEditor is responsible for wrapping the transfer function control inside a UI and providing extended transfer function editing capabilities. Also, it defines a method for making the resulting transfer function available as a texture. This texture is then supplied to the Raycaster.

Figure 5.4.: The TFEditor class

Figure 5.4 depicts a simplified diagram of the TFEditor class. First, there are the four LineStripe members. They store the user-defined control points for each color channel of the transfer function. When the user changes the control point configuration, the method updateTexture() is called to update the m_texTransFunc texture. It uses the LineStripe class's methods to reconstruct the values of the transfer function in the space between two points by linear interpolation of their Y coordinates.

The drawTransferFunction() method handles the actual drawing of the transfer function control. The TFEditor uses this method to insert it in its UI (m_transferFuncEditorUI). This UI also contains additional interface elements that facilitate the editing of the transfer function. The method getTransferFunctionResource() returns the produced texture, which is then set to the Raycaster's m_pTransferFuncSRV variable.
5.3. The RaycastController Class

The RaycastController class has similar functionality to the ParticleProbeController, described in section 4.3.2. It generalizes and simplifies the management of the Raycaster options and the Transfer Function Editor.

Currently, the ParticleEngine exposes the Raycaster options directly on the main UI. However, the addition of all the new options there would have caused too much clutter. To prevent that, the RaycastController is introduced. It organizes all the UIs responsible for the control over the volume rendering in one place. This requires not only the DVR options, but also all other options, to be moved to the new UI.

Figure 5.5.: RaycastController organizes all volume rendering options in a new UI

Figure 5.5 depicts the new architecture involving the Raycaster and the Transfer Function Editor. The RaycastController exists in parallel to the Raycaster as a member of the ParticleTracerBase. Unlike the probe management structure, where the probes expose their options by means of another class, the Raycaster instance is controlled through its get/set accessor methods, which are available for all the Raycaster parameter variables (m_eRendermode, m_fBorder, etc.).

To enable control over it, the Raycaster instance must first be registered to the controller. This is done by the RegisterRaycaster() method. The registration supplies a pointer to the Raycaster (m_pRaycaster), which is then used by the UI to control its parameters.

The Transfer Function Editor (m_pTFEditor) is hosted by the RaycastController as a private member. This allows the controller to integrate its UI with the other volume rendering options, building a system of UIs. There are three UIs maintained by the controller - the
Raycast Controller UI (m_RaycastControllerUI), the Raycaster UI (m_RaycasterUI) and the Transfer Function Editor UI. The Raycast Controller UI is the main UI in this system. It provides the means to select between the two other sub-UIs. The Raycaster UI contains all the controls for the Raycaster parameters.

Save and load functionality is also exposed by the RaycastController. Both methods, ExportRaycastControllerSettings() and ImportRaycastControllerSettings(), are linked to the Raycast Controller UI. Currently, these functions only support export and import of the transfer function.
5.4. User Interface

As mentioned in the previous section, the RaycastController builds a system of three UIs. The Raycast Controller UI is the main one, providing the means to select which of the other two sub-UIs is displayed. The main UI can't be displayed alone - one of the sub-UIs is always visible just below it (see figure 5.6).

Figure 5.6.: Raycast Controller UI, and the Transfer Function Editor UI displayed below it.

The Raycast Controller UIs are hidden by default. The main ParticleEngine interface's checkbox 'Show UI' is used to turn them on or off (see section 4.4.1). Each of the three UIs will be discussed in detail in the next sections.

5.4.1. Raycast Controller UI Reference

The Raycast Controller UI is rendered in the bottom middle of the application window, just above the currently visible sub-UI.

Figure 5.7.: Raycast Controller UI

Reference

Raycaster (radio button) Selects the Raycaster sub-UI.

Transfer function (radio button) Selects the Transfer Function Editor UI.

Save (button) Allows the user to save the current transfer function to a file.

Load (button) Loads a transfer function from file into the Editor and updates the display.

5.4.2. Raycaster UI Reference

The Raycaster UI is displayed just below the Raycast Controller UI, if the 'Raycaster' radio button is selected.
Figure 5.8.: The Raycaster UI

Reference

Mode (drop-down) Selects the render mode for the volume renderer. There are three render modes available - ISO (Isosurfaces), DVR (Direct Volume Rendering) and Clearview. For more information refer to section 2.1.3.

Step size (slider) Controls the quality of the image produced by the volume renderer. A smaller step corresponds to higher quality and a lower display update rate (the performance of the ParticleEngine when volume rendering is activated). Internally, this controls the length of the sample step along a ray shot by the Raycaster. A longer step means fewer samples, and consequently higher performance but lower image quality.

Load Fragment (button) Allows the user to load a custom shader code fragment. This is the old way to set up a transfer function, and it is deprecated.

Custom (slider) The custom slider controls a shader variable which is normally unused. It is meant to be incorporated in custom fragment code, to control some arbitrary option.

Controls for ISO render mode

ISO Value 1 (slider) Sets the ISO value for the isosurface. The volume renderer builds an isosurface of all values higher than this setting.

Controls for DVR render mode

TF Scale (slider) Scales the transfer function range. Labels on the Transfer Function Editor UI show the actual range covered by the transfer function.

TF Offset (slider) Offsets the transfer function range. Labels on the Transfer Function Editor UI show the actual range covered by the transfer function.
Controls for Clearview render mode

ISO Value 2 (slider) Sets the ISO value for the second isosurface.

Context scale (slider) Controls the blending of the two isosurfaces in the Clearview lense. Increasing this value makes the second isosurface more visible; decreasing it makes the first isosurface more visible.

Size (slider) Controls the size of the Clearview lense.

Border (slider) Controls the border width of the Clearview lense.

Edge (slider) Sharpens the contours of the isosurfaces.

5.4.3. Transfer Function Editor UI Reference

The Transfer Function Editor UI is displayed just below the Raycast Controller UI, if the 'Transfer function' radio button is selected. The UI wraps the transfer function control, whose editing functions are discussed below.

Figure 5.9.: The Transfer Function Editor UI

Reference

R (button) Selects the red channel in the transfer function control, and brings it on top of the others.

G (button) Selects the green channel in the transfer function control, and brings it on top of the others.

B (button) Selects the blue channel in the transfer function control, and brings it on top of the others.
Alpha (button) Selects the alpha channel in the transfer function control, and brings it on top of the others.

These four buttons are made for easier access to the channels. Selecting channels with the mouse is also possible (see 'Transfer function control' below).

Alpha Scale (slider) Scales the alpha channel's Y axis down. This is needed to allow fine-tuning of the alpha component. As the alpha components of the sampled values along a ray are accumulated, using many samples will require a very small alpha value to be assigned to each sample.

Reset (button) Resets the channels in the transfer function control to their default configuration. In this state, every channel has two points, in the bottom left and top right corners of the transfer function control area. These two points cannot be removed.

Axis labels just below the transfer function control area (labels) Show the range of 4th-component values covered by the transfer function. This range is set up by the 'TF Scale' and 'TF Offset' sliders in the Raycaster UI.

Cursor (labels) Show the current position of the cursor within the transfer function control area. The X coordinate respects the range of the transfer function shown by the labels described above. The Y coordinate respects the alpha slider value for the alpha channel. The other channels' Y coordinates lie in the [0, 1] range, and they are assumed to be known.

Selected point coordinates (no name on the UI) (textboxes) These are the two text boxes next to the 'Cursor' labels. They show the exact position of the currently selected control point in the transfer function control (the white point on figure 5.9). Like the cursor labels, they respect the range of the transfer function. Additionally, they allow the user to input exact coordinates of the selected point manually.

Transfer function control The area which encloses the transfer function is referred to as the transfer function control.
It provides the user with additional editing capabilities. Select a channel for editing Click anywhere on the channel’s line or control points. This will bring the channel on top of the others and allow further manipulations. Select and move a control point Click on a control point to select it, and then drag the mouse to move it around. The exact coordinates are displayed below in the respective Transfer Function Editor UI controls. 52
Add and remove control points Double-clicking in any empty space within the transfer function control's area will add a new control point to the currently selected channel. Double-clicking on an existing point will remove it.
6. Fourth-component Recalculation

As the power of the built-in volume renderer becomes more accessible with the introduction of a user-definable transfer function, the 4th-component visualization gains even more relevance. The fourth-component recalculation feature creates the possibility of experimenting with the 4th component at run-time. This, in combination with the Transfer Function Editor, presents new ways to explore flow field characteristics.

This chapter presents the upgrades made to the ParticleEngine which enable the 4th component to be recalculated at run-time. Three distinct recalculation techniques are introduced:

• Calculating entirely new values for the 4th component, using the underlying vector information. Any field characteristic which can be derived for every cell by some function of the vector at the cell's position and its neighbors can be calculated using this technique.

• Updating every 4th-component value by means of a mathematical function accepting one argument. This technique replaces each value with the result of the specified function, given this value as its argument.

• Updating every 4th-component value by means of a mathematical function accepting two arguments. The second argument of the function for each grid position is taken from the 4th components of another vector field, loaded separately from a file.

Furthermore, arbitrary concatenations of these three techniques are possible. This can be very useful in the case when a function of two already present field characteristics yields a new interesting characteristic.
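On the CPU, the three techniques could be sketched as follows; the names are illustrative and the code is a pure CPU analogue, whereas the actual implementation runs as pixel shaders on the GPU, as described in the next sections. The vector-length characteristic serves as an example of the first technique:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };  // xyz: vector data, w: 4th component

// Technique 1: derive entirely new 4th-component values from the vectors
// (here: the vector length; neighbor-based characteristics work analogously).
void RecalcFromVectors(std::vector<Vec4>& field)
{
    for (auto& v : field)
        v.w = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Technique 2: replace each 4th component with f(value).
void ApplyUnary(std::vector<Vec4>& field, float (*f)(float))
{
    for (auto& v : field) v.w = f(v.w);
}

// Technique 3: combine with the 4th components of a second, separately
// loaded field of the same dimensions via a two-argument function.
void CombineWith(std::vector<Vec4>& field, const std::vector<Vec4>& other,
                 float (*f)(float, float))
{
    for (std::size_t i = 0; i < field.size(); ++i)
        field[i].w = f(field[i].w, other[i].w);
}
```

Concatenating the techniques then amounts to calling these functions in sequence on the same field.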
6.1. ParticleTracer3D class upgraded

The new feature is directly integrated into the ParticleTracer3D class. The recalculation system consists of two parts - the method which actually updates the 3-D texture used to store the vector field, and the effect file defining all possible functions on the 4th component.

Figure 6.1.: ParticleTracer3D is upgraded to support the 4th component recalculation.

The ParticleTracer3D class manages the 3-D texture which stores the vector field (m_pTexture3D). To update the 4th component of this texture, the method Calc4thComp_VolumeTexture() was developed.

This method starts by creating a render target (m_p4thCompRT) and binding it to the GPU pipeline. The 3-D texture is also bound, by the variable m_pC4CE_VolumeTex. The method processes the volume a single slice at a time. The slice currently being processed is given by m_pC4CE_iSliceDepth. A pixel shader for the previously chosen function from the m_pCalc4thComponentEffect is applied to the current slice, rendering the new values for the slice onto the render target.

After the slice has been processed, the render target contains all the cells from this slice with the updated 4th-component values. Then, a staging texture with the usage flag set to D3D10_USAGE_STAGING and CPU access set to D3D10_CPU_ACCESS_READ is used to access these values, and subsequently update the respective slice of the 3-D texture.

Before executing the Calc4thComp_VolumeTexture() method, the user must choose which function should be applied. This is done through a UI, hosted in the ParticleTracerBase,
but managed by the ParticleTracer3D to display the 4th-component recalculation functionality (m_TracerUI).

Normalizing the 4th component

Normalization of the 4th component is necessary to ensure that all the values stay in the range of the user interface controls presented in previous chapters. The controls for the 4th-component-aware operations, and for volume rendering, are all only able to handle values in the range -1 to 1.

The normalization is done by the Norm4thComp_VolumeTexture() method. It is realized as a special case of the Calc4thComp_VolumeTexture() method, so it functions the same way. This method also saves the factor used for normalization. This is needed to denormalize the field before running other functions on it, as some functions, like the logarithm, would otherwise produce incorrect results.

Combining 4th components

Combining the 4th components of the current field and an external field, loaded on demand, is performed by the Combine4thCompWithExternal_VolumeTexture() method. This method is realized the same way as Calc4thComp_VolumeTexture(), differing only in minor aspects: the method loads a new field from file and binds it to the pipeline as the variable m_pC4CE_VolumeTex_compose.

6.2. Updating the Volume Texture

This section goes into more detail on how Calc4thComp_VolumeTexture() functions. It shows the most important code snippets to outline its algorithm. First, the render target is created with the X and Y dimensions of the vector field. A staging texture to read the data from the render target is also set up.
Listing 6.1: Calc4thComp_VolumeTexture() method

m_p4thCompRT->Bind(false, false);
pRenderTargetTex = m_p4thCompRT->GetTex();
D3D10_TEXTURE2D_DESC desc;
pRenderTargetTex->GetDesc(&desc);
desc.Usage = D3D10_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
m_pd3dDevice->CreateTexture2D( &desc, NULL, &pStagingTexture );

For each mip level and depth level (corresponding to a slice), the chosen pass is executed.
Listing 6.2: Calc4thComp_VolumeTexture() method

float ClearColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
m_p4thCompRT->Clear(m_pd3dDevice, ClearColor);
m_pC4CE_iMipLevel->SetInt( iMIPLevel );
m_pC4CE_iSliceDepth->SetInt( iDepthLevel );
m_pRenderSliceTq->GetPassByName( sPassName.c_str() )->Apply(0);
m_pd3dDevice->Draw( 3, 0 );

Then, the render target is copied to the staging texture, and subsequently mapped.

Listing 6.3: Calc4thComp_VolumeTexture() method

m_pd3dDevice->CopyResource(pStagingTexture, pRenderTargetTex);
D3D10_MAPPED_TEXTURE2D mappedTex;
V_RETURN( pStagingTexture->Map( D3D10CalcSubresource(0,0,1), D3D10_MAP_READ, NULL, &mappedTex ) );

The mapped data is then uploaded to the 3-D texture.

Listing 6.4: Calc4thComp_VolumeTexture() method

D3D10_BOX box;
ZeroMemory(&box, sizeof(D3D10_BOX));
box.left = 0;
box.right = iSize.x;
box.top = 0;
box.bottom = iSize.y;
box.front = iDepthLevel;
box.back = iDepthLevel+1;
m_pd3dDevice->UpdateSubresource(m_pTexture3D, D3D10CalcSubresource(iMIPLevel,0,1), &box, mappedTex.pData, mappedTex.RowPitch, 0);
6.3. Recalculation Fragment Shaders

Here, the internal structure of the m_pCalc4thComponentEffect will be shown. This effect contains all the functions from which the user can choose. The chosen function is translated into an effect technique pass name, and supplied to the Calc4thComp_VolumeTexture() method, which then runs the appropriate pixel shader on the 3-D texture values.

The vertex shader generates a triangle covering the whole screen. This causes the fragment shader to be called for every pixel of the render target.

Listing 6.5: The vertex shader used by the calculation of the new 4th components

float4 VS_FullScreenTri( uint id : SV_VertexID ) : SV_POSITION
{
    return float4( ((id << 1) & 2) * 2.0f - 1.0f,  // x (-1, 3,-1)
                   (id & 2) * -2.0f + 1.0f,        // y ( 1, 1,-3)
                   0.0f, 1.0f );
}

Then, the chosen fragment shader is run on each pixel, producing the new 4th components.

Listing 6.6: The fragment shader which saves the vector length at a particular position in the 4th component

float4 PS_Calc4thComp_Length( float4 pos : SV_POSITION ) : SV_Target
{
    float4 val = g_txVolume.Load( int4(pos.xy, g_iSliceDepth, g_iMipLevel) );
    return float4( val.xyz, length(val.xyz) );
}

It first loads the current values from the 3-D texture for the particular slice and mip level. Then, it returns the first three components unchanged, and the fourth equal to the length of the vector represented by them.
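As a quick sanity check, the three clip-space vertices produced by VS_FullScreenTri can be recomputed on the CPU with the same bit arithmetic; the resulting triangle (-1,1), (3,1), (-1,-3) indeed covers the whole [-1,1] screen square:

```cpp
#include <cassert>

struct Pos { float x, y; };

// CPU re-computation of the vertices generated by VS_FullScreenTri for the
// vertex IDs 0, 1 and 2, using the same bit tricks as the HLSL code.
Pos FullScreenTriVertex(unsigned id)
{
    Pos p;
    p.x = ((id << 1) & 2) * 2.0f - 1.0f;  // -1, 3, -1
    p.y = (id & 2) * -2.0f + 1.0f;        //  1, 1, -3
    return p;
}
```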