This document introduces the rendering techniques used in the Quake3 game engine: Binary Space Partitioning (BSP), the Potentially Visible Set (PVS), textures, and lightmapping. It explains how Quake3 stores level data as a BSP tree using indexed nodes and leaves for random access. Rendering first determines the camera's leaf, then uses that leaf's PVS cluster to determine which other leaves are visible. Surfaces in visible leaves are looked up and rendered, with a check to avoid drawing duplicates. Surfaces are textured via shaders, and lightmaps are bundled in the BSP file.
Quake3 BSP Rendering
1. Quake3 Rendering Engine
Intro to Quake3 rendering
techniques including BSP,
PVS, textures, and
lightmapping
Modified from an original presentation
prepared by Jason Calvert, 2003.
2. Problem
For our visibility system each leaf will know
which other leaves it can see.
Up until now we’ve used a linked structure to
represent our tree.
A linked structure won't work for our purposes:
if we need to access leaf 7 we want to be able to
access it as leaf[7]. Allows random access.
Linked trees are also problematic when saving
to disk because of pointers.
3. Indexed Binary Tree
Instead of using a linked tree we will use an
array/index based tree.
We will have arrays of nodes, leaves, polygons,
planes, etc.
Allows random access to leaves.
Tree is easy to save to disk.
Can be stored as Lumps or Chunks.
Using pointers can be really messy.
Easy to read back in.
During the compile process:
Arrays will need to grow dynamically.
4. Leafy BSP Node Structure
struct BSPNode
{
int front; // index into node or leaf array
int back;
int planeNum; // index into plane array
};
A negative value for either front or back indicates a leaf.
If front child index is negative - convert this to a valid index
into the array of leaves by taking:
leafIndex = abs(index + 1)
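As a minimal sketch of the convention above (the helper names here are mine, not Quake3's actual identifiers):

```cpp
#include <cassert>

// A non-negative child index points into the node array;
// a negative one encodes a leaf.
inline bool childIsLeaf(int child) { return child < 0; }

// leafIndex = abs(child + 1), i.e. -(child + 1) for a negative child:
// child = -1 -> leaf 0, child = -2 -> leaf 1, and so on.
inline int leafIndexFromChild(int child) { return -(child + 1); }
```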
5. Quake3 Rendering Basics
All quake3 levels are stored as one big BSP
tree.
Quake3 uses curved surfaces to describe some
of its scene geometry.
Entities (models) are stored as external md3
files. Some models are embedded in the level
(Triangle soup).
md3 files are compiled .ase files from 3ds max.
Embedded models are stored in the BSP.
6. Quake3 BSP File Structure
BSP files are binary, non human readable files.
Stored as raw data dumped to a file directly from
arrays.
Very easy to read back in.
Disclaimer – Easy to read in using C or C++
7. Quake3 BSP File Structure (2)
Quake3 files are stored in lumps.
A lump based file format is similar to a chunk based
file format.
There are 18 lumps total in a Quake3 BSP file.
Each lump contains a file offset and a lump length.
Data from each lump is stored in a separate array.
Separate arrays for leaves (Leafs), node (Nodes),
polygon (Surfaces), splitting planes (Planes),
vertices (Vertices), etc…
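A rough sketch of what a lump directory entry and a generic lump reader might look like, assuming each lump holds fixed-size records. Field and function names here are illustrative, not the engine's actual identifiers; consult a Quake3 BSP format reference for the real header layout.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

// Each lump records a byte offset into the file and a length in bytes.
struct Lump {
    int32_t offset; // byte offset of this lump's data from file start
    int32_t length; // length of the lump's data in bytes
};

const int kNumLumps = 18; // a Quake3 BSP file contains 18 lumps

// Read one lump into an array of fixed-size records of type T.
template <typename T>
std::vector<T> readLump(FILE* f, const Lump& lump) {
    std::vector<T> items(lump.length / sizeof(T));
    fseek(f, lump.offset, SEEK_SET);
    fread(items.data(), sizeof(T), items.size(), f);
    return items;
}
```

Because the on-disk data is a raw array dump, reading a lump back is a single seek and a single block read.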
8. Quake3 Assignment
Base Assignment:
To render a Quake3 level using the BSP/PVS
combination. Then texture/light it.
Many extras are possible.
Primary concern is speed (20 to 100 frames per
second). No more 80,000 polygon models.
We will use the node, leaf, polygon, plane, pvs,
shader, and lightmap lumps.
We need not be concerned with external entities,
except texture maps.
In the code provided all of this data has been
loaded into data structures for you.
9. Data Structures
Lets look at the data structures that you
will need to understand.
We will then get back to the rendering
process.
11. Data Structure Details
Nodes array and front/back index values.
Leafs array
Arrangement into polygons (Surfaces)
Back leaves
PVS addressing
cluster == the PVS row for a particular leaf
Getting bits for a particular other leaf to check
visibility.
12. Quake3 Rendering
All level rendering is done using the BSP tree
and the PVS.
First, determine which leaf the camera is in by
passing the camera position down the tree,
classifying it against each node's plane, until
it reaches a leaf.
Once the location of the camera is known the
PVS can be used to determine what other leaves
this leaf can see and those can be rendered.
13. Quake 3 Point Classification
Quake3 uses a normal and a distance from the
origin to describe a plane.
Previously, we used a point on the plane and a
normal vector to describe a plane.
This allows the use of the plane equation to
classify a point.
Ax + By + Cz – D = 0
If the unit normal vector of the plane is N and the
point to be classified is P then this becomes:
N dot P – D = 0 for a point on the plane – positive
for points in front – negative for points behind.
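The plane equation above can be sketched as a small classification helper (struct and function names are illustrative):

```cpp
#include <cassert>

// Quake3 stores a plane as a unit normal N plus a distance D from the
// origin; the sign of N dot P - D classifies point P.
struct Plane {
    float nx, ny, nz; // unit normal (A, B, C)
    float d;          // distance from origin (D)
};

// > 0: point in front of the plane, < 0: behind, == 0: on the plane.
inline float classifyPoint(const Plane& pl, float px, float py, float pz) {
    return pl.nx * px + pl.ny * py + pl.nz * pz - pl.d;
}
```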
14. Traversal and Classification
Leaf Identification
Instead of a flag Quake 3 uses a negative index
value in front or back to signify a leaf.
Test camera against current node's plane.
If the camera location is in front of the current
node’s plane:
Check front child index:
If positive: this is the index of the next tree
node – continue traversal and classification.
If negative: the next node index actually
indexes the leaf array.
15. Traversal and Classification (2)
If front child index is negative - convert this to a
valid index into the array of leaves by taking:
leafIndex = abs(index value+1)
If the camera location is behind the current
node's plane:
Check Back child index
If positive: this is the index of the next tree
node – continue traversal and classification.
If negative: the camera is in solid space.
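Putting the last two slides together, the traversal might look like the following minimal sketch (illustrative names; the side test and the negative-index leaf convention follow the slides above):

```cpp
#include <cassert>
#include <vector>

struct Plane { float nx, ny, nz, d; };          // unit normal + distance
struct BSPNode { int front, back, planeNum; };  // negative child = leaf

// Walk from the root until a negative index tells us we reached a leaf,
// then decode it with leafIndex = abs(index + 1).
int findLeaf(const std::vector<BSPNode>& nodes,
             const std::vector<Plane>& planes,
             float px, float py, float pz) {
    int index = 0; // root node
    while (index >= 0) {
        const BSPNode& node = nodes[index];
        const Plane& pl = planes[node.planeNum];
        float side = pl.nx * px + pl.ny * py + pl.nz * pz - pl.d;
        index = (side >= 0.0f) ? node.front : node.back;
    }
    return -(index + 1);
}
```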
16. Traversal Termination
Once we’ve hit a negative node index we know
we’ve found the leaf the camera is in.
We now need to use the PVS information for this
leaf to find out which other leaves this leaf can
see.
The PVS is already built and is included into the
BSP data as a chunk so this information can be
easily looked up.
17. PVS Structure
Since the PVS is loaded as a lump – it is all
stored in a single large byte array.
Although stored as bytes the PVS is actually a bit
array so it is necessary to pull the correct bit out of
a byte.
Zero run length encoding is not used.
The PVS array is actually a 2D array stored in row
major order within a 1D array.
Each row of the PVS array represents the PVS
data for one leaf and is called a cluster.
18. PVS Clusters
A cluster stores all of the visibility information for
a particular leaf.
Because Quake3 allows back leaves, and since
only front leaves contain valid visibility data, the
number of leaves will differ from number of
clusters.
A back leaf contains a –1 for its cluster value.
The collection of all clusters, one for each front
leaf, represents the potential visibility information
for the entire level.
19. Cluster Location
Since we know which leaf we’re in the cluster variable
in the current leaf object can be used to calculate the
array index in the PVS where this leaf's data is
located.
The size of each cluster is the same and contains
enough bits to represent the visibility of all front leaves.
So
(clusterValueFromCurrentLeaf *
pvs.bytesPerCluster)
gives the index in the PVS array where the current
leaf's visibility information is located.
20. Visibility Determination
The camera's location leaf is known and the
cluster for that leaf has been located.
Now for each leaf visible from the camera leaf:
render all polygons in that leaf.
To determine whether a particular leaf is visible
we must find the PVS bit for that leaf in the
camera leaf's PVS cluster.
Since back leaves are allowed and do not have
PVS clusters, the leaf array index can't be used
to find the bit.
Instead use the cluster number in the leaf's entry
in the leaf table.
21. Visibility Determination (2)
Loop through the leaf array checking for front leaves
(those with a positive cluster number).
When a front leaf is found, its cluster number can be
used to find the correct bit to lookup in this cluster of
the PVS.
To find that bit:
Start at the camera leaf's cluster offset.
Find the byte containing the bit that represents the leaf
whose visibility is being checked.
Byte index = cluster number / 8.
Then find the bit within that byte representing that leaf.
Check if the byte has that bit set to one or zero.
22. Visibility Determination (3)
Bitwise AND the byte you looked up with the bit mask
for the leaf being checked.
Either:
byte & (1 << (cluster % 8))
Or byte & (1 << (cluster & 7))
If the result is zero: this leaf is not visible; do not do
anything with it.
If the result is nonzero: this leaf is visible; draw it.
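The byte-index and bit-mask steps above can be collected into one test, sketched here. The method and parameter names are illustrative; cameraOffset is the camera leaf's cluster offset computed on the previous slide.

```java
// Sketch of the PVS bit test: is the leaf with cluster number
// testCluster visible from the cluster row starting at cameraOffset?
class Visibility {
    static boolean isVisible(byte[] pvs, int cameraOffset, int testCluster) {
        if (testCluster < 0) return false;   // back leaf: no PVS entry
        int byteIndex = testCluster >> 3;    // testCluster / 8
        int mask = 1 << (testCluster & 7);   // bit within that byte
        return (pvs[cameraOffset + byteIndex] & mask) != 0;
    }
}
```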
23. Visibility Determination (4)
Example
PVS:
[0] 10101010 [1] 01010101 [2] 00110100
Example: cluster number 20 (10100): byte index = 20/8 =
2 (00010) – so the bit for 20 is in pvs[2].
Bit mask = 1 << (cluster & 7) = 1 << (10100 & 111) = 1 << 4 =
00010000
pvs[2] & 00010000 = 00010000, which is nonzero, so leaf 20 is visible
24. Looking up Surfaces/Polygons
In Quake3 the Surface is used to represent one of
three types of 3D objects. It generalizes polygons.
An array, called LeafFaces, is inserted between the
Leafs and Surfaces arrays.
Used to hold indices into the surface array.
Since Quake3 does not split polygons (the same
polygon goes down both front and back lists),
surfaces/polygons might be used in multiple leaves.
All the surfaces of one leaf are not necessarily in
contiguous parts of the Surfaces array, but in
LeafFaces all the indices of the surfaces of one leaf
are contiguous.
25. Looking up Surfaces/Polygons (2)
Leaves never access the surface array directly,
but access surfaces through the LeafFaces
array.
The LeafFaces array allows leaves to store a
starting index and a number of surfaces instead
of a bunch of random indices into the surface
array.
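The Leaf → LeafFaces → Surfaces indirection can be sketched as below. The field names firstLeafSurface and numLeafSurfaces follow the slides; the helper itself is illustrative.

```java
// Sketch of the LeafFaces indirection: a leaf stores a contiguous run
// [firstLeafSurface, firstLeafSurface + numLeafSurfaces) in the
// leafFaces table, and each entry there is an index into Surfaces.
class LeafRun {
    static int[] surfaceIndices(int firstLeafSurface, int numLeafSurfaces,
                                int[] leafFaces) {
        int[] out = new int[numLeafSurfaces];
        for (int i = 0; i < numLeafSurfaces; i++) {
            out[i] = leafFaces[firstLeafSurface + i];  // indirect lookup
        }
        return out;
    }
}
```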
26. Draw Surface?
Now that we know how to look up a
surface/polygon, we need to know if this surface
has been drawn already.
The reason we need this check is because
Quake3 does not split polygons and thus the
same polygon/surface may wind up in many
leaves.
One way to do this is to keep a byte or boolean
array - polygonsRendered.
One entry per surface/polygon, initialized to false
and set to true when the polygon is drawn. This
polygon's polygonsRendered value will determine
whether or not this polygon is to be drawn again.
27. Draw Surface? (2)
If a byte array is used, the framenumber of the
current frame can be used to indicate that the
polygon has already been drawn.
Need to set up a frame counter.
Before drawing a surface check the
corresponding byte in the polygonsRendered
array:
Use the same index into polygonsRendered as the
surface index you are using.
Assume an index i to be the index into both arrays. If
the ith polygon has already been drawn, the contents
of polygonsRendered[i] will be equal to the current
framecounter value.
28. Draw Surface? (3)
Check to see if that value is equal to the counter
variable.
polygonsRendered[i] == framecounter
If it is not:
Set polygonsRendered[i] to framecounter.
Render the surface.
If it is:
This surface has already been drawn, so skip it.
29. Draw Surface? (4)
Once all surfaces have been drawn this
framecounter must be increased.
Framecounter++
This immediately invalidates all values in
polygonsRendered
This way you do not have to loop through
polygonsRendered and set everything to zero.
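The frame-counter trick described in slides 27–29 can be sketched as follows. The class and method names are hypothetical; only the stamping scheme itself comes from the slides.

```java
// Sketch of the frame-counter deduplication trick: instead of clearing a
// boolean array every frame, stamp each drawn surface with the current
// frame number. A surface is "already drawn" iff its stamp equals the
// current frame; incrementing the counter invalidates all stamps at once.
class DrawnTracker {
    int[] polygonsRendered;   // one stamp per surface, zero-filled initially
    int frameCounter = 1;     // start at 1 so stamp 0 means "never drawn"

    DrawnTracker(int numSurfaces) {
        polygonsRendered = new int[numSurfaces];
    }

    // Returns true the first time a surface is seen this frame.
    boolean markIfFirstUse(int surfaceIndex) {
        if (polygonsRendered[surfaceIndex] == frameCounter) return false;
        polygonsRendered[surfaceIndex] = frameCounter;
        return true;
    }

    // Called once per frame; no loop over polygonsRendered needed.
    void nextFrame() {
        frameCounter++;
    }
}
```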
30. Drawing a Leaf
Each leaf object contains a firstLeafSurface index
that references the first LeafFaces entry for this
leaf.
Each leaf object also contains a count of the
number of polygons/surfaces in this leaf called
numLeafSurfaces.
Use firstLeafSurface to access LeafFaces array
Go through numLeafSurfaces entries in LeafFaces.
Use those values to access surfaces array.
Test to see if this surface needs to be drawn.
Draw Surface.
32. Drawing a Surface:
Surface Types
Indicated in the type field of a Surface
Possible values:
PLANAR – planar polygons
Render as Triangle Fans (GL_TRIANGLE_FAN)
(Figure: a seven-vertex planar polygon; vertices 1–7 are
rendered in this order in OpenGL to form the fan.)
33. Drawing a Surface:
Surface Types (2)
Possible values:
TRIANGLE_SOUP – models
Render as separate Triangles (GL_TRIANGLES)
Each 3 vertices form a separate triangle
PATCH – Bezier surface
These are all quadratic patches – although the
Surface does contain patchWidth and patchHeight
fields, so higher-order patches are possible.
Vertices and texture vertices should be gathered
into arrays for rendering using glEvalMesh2.
34. Drawing a Surface:
Patch Control Point Organization
Notes:
The picture contains an
example of how the control
points are numbered and
how they should be read
into an array for rendering.
1. Nine control points are read
in at a time and then
rendered.
2. The numbers next to each
control point are offsets
from the current surface's
firstVert. Adding the offset
to firstVert gives an index
into the Vertices array.
3. If the rows are not read in
backwards as shown then
the wrong faces are culled.
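The note about reading rows backwards can be sketched as below. The exact control-point layout is an assumption here (rows 'width' vertices apart, starting at firstVert); only the 3x3 grouping, the firstVert offsets, and the reversed row order come from the slide.

```java
// Sketch: gather the 9 vertex indices of one 3x3 quadratic patch,
// reading the rows backwards so that the face winding (and hence
// back-face culling) comes out right, as the slide notes.
// The row stride (width) is an illustrative assumption.
class PatchGather {
    static int[] controlIndices(int firstVert, int width) {
        int[] idx = new int[9];
        int k = 0;
        for (int row = 2; row >= 0; row--) {       // rows in reverse order
            for (int col = 0; col < 3; col++) {
                idx[k++] = firstVert + row * width + col;
            }
        }
        return idx;
    }
}
```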
35. Texturing
Texturing is used for both surface texturing and
for lighting.
Surface texturing is done with standard image
textures – Quake 3 refers to image textures as
Shaders.
The textures for shaders all reside in external .jpg
or .tga files.
All lighting is done with lightmap textures –
Quake 3 refers to these as Lightmaps.
Lightmap texel values are all contained in the level
BSP file itself.
36. Texturing: Shaders
Shader texture file names are retrieved from the BSP
file by reading the Shader lump. (Quake3BSP.java:
800)
LoadBSP() maintains a temporary array of Shader
objects called pTextures[ ] that facilitates texture setup.
Shader file names do not have extensions in the BSP file
– those are added by FindTextureExtension ().
OpenGL texture setup is done by the CreateTexture()
method (pixel reading and initial texture binding), and
the set of texture ids is stored in the textures array.
Note that gluBuild2DMipmaps() will ensure that texture
dimensions are always powers of 2, as OpenGL requires.
37. Texturing: Shader settings
Recommended OpenGL shader texture parameter
settings:
min filter – GL_LINEAR_MIPMAP_NEAREST
mag filter – GL_LINEAR (magnification filters cannot use
mipmap modes in OpenGL)
Shader textures should be set up with mipmaps.
Retrieval of other needed parameters from the
Texture object (tex):
internal and external pixel formats:
tex.getGLFormat().
image height: tex.getImageHeight()
image width: tex.getImageWidth()
reference to pixels: tex.getTextureBuffer().
38. Texturing: Lightmaps
Lightmap textures are read from the BSP file by reading
the Lightmap lump. (Quake3BSP.java: ~820)
LoadBSP() maintains a temporary array of Lightmap
objects called pLightMaps[ ] that facilitates lightmap
texture setup.
The lightmap constructor actually reads the lightmap
texels into the Lightmap member imageBits [ ].
OpenGL lightmap texture setup is done by the
CreateLightmapTexture() method (Quake3BSP.java: 823),
including the initial texture binding, and the set of
texture ids is stored in the lightmaps[ ] array.
If the lightMapNum in the Surface is -1, that surface has
no lightmap.
39. Texturing: Lightmap settings
Recommended OpenGL lightmap texture parameter
settings:
min filter – GL_LINEAR_MIPMAP_LINEAR
mag filter – GL_LINEAR
lightmap textures should also be set up with mipmaps.
Other needed parameters :
internal and external pixel formats: GL_RGB.
image height: 128
image width: 128
reference to pixels: loaded when the lightmap lump is
read and passed to CreateLightmapTexture as
pImageBits.
Lightmap textures should be applied in the
GL_MODULATE mode.
40. Note about lightmaps
There are many lightmaps stored within a
lightmap texture.
The textures are all 128 by 128
Lightmaps are typically 16 by 16
So, texture coordinates and lightmap
coordinates may be very different for a surface.
41. Multi-Texturing
First make texture unit zero active
gl.glActiveTexture(gl.GL_TEXTURE0);
Then set the shader texture by retrieving the textures
array index given by the shaderNum value in the surface
object.
Then make texture unit one active
gl.glActiveTexture(gl.GL_TEXTURE1);
Then set the lightmap texture by retrieving the lightmaps
array index given by the lightmapNum value in the
surface object.
42. Multi-Texturing:
Texture coordinates
Texture coordinates must be set separately for
each texture unit
gl.glMultiTexCoord2f(gl.GL_TEXTURE0, u, v);
gl.glMultiTexCoord2f(gl.GL_TEXTURE1, u, v);
Values for texture coordinates are contained in the
Vertex object as texCoords and lightmapCoords.