Measuring Positional Error of Harmonic Points
Due To Projection Mapping Onto The Surface of
a Circle
Ian Bloom
July 25, 2016
Contents
1 Background
2 Harmonic Points and Visual Perspective
3 The Model
4 Matlab Code
5 Data and Analysis
6 Experiments to Come
7 Applications
8 Conclusion
9 Appendix
9.1 Experiment I
9.2 Experiment II
9.3 Matlab Code
1 Background
This project began under the supervision of Matt Wright in the Allosphere. The
Allosphere is
a unique scientific instrument that is a culmination of 26 years of
Professor JoAnn Kuchera-Morin’s creativity and research efforts in
media systems and studio design. [2]
The Allosphere itself is structurally massive. The Allosphere
consists of a 3-story cube that is treated with extensive sound absorption
material, making it one of the largest anechoic chambers in
the world. Standing inside this chamber is a 5-meter-radius oblate
sphere constructed of perforated aluminum that is designed to be
optically opaque and acoustically transparent. [2]
Images are projected onto the interior of this oblate sphere by twenty-six
projectors of varying resolution. The images from all the projectors are combined
to create a seamless, immersive viewing experience.
Figure 1: The exterior of the Allosphere [2]
Figure 2: The interior of the Allosphere [2]
Projection in the Allosphere comes with its own challenges. Each projector
must ’know’ what portion of the three-dimensional scene it is responsible for
displaying. Each projector must compensate in brightness for any overlapping
parts of its display. Each projector must also compensate for the distortion
that the projected images experience when they are displayed on the surface
of a curved screen. Traditionally, this curvature problem had been solved by
developing a distortion map: a pixel-by-pixel transformation that is applied
prior to rendering and displaying through the projectors. Applying this
transformation to images over 60 times a second is computationally expensive,
and Matt and I were investigating the possibility of minimizing this
computational load by transforming the three-dimensional scene itself.
When Matt left the university I needed a new senior thesis advisor, and
Professor Bill Jacob volunteered to help. Professor Jacob suggested we approach
this type of problem using the mathematics of projective geometry. Like any
branch of geometry, projective geometry describes the relationships between
points and lines. However, projective geometry is different because it does not
admit the idea that two lines in a plane might fail to meet by being parallel. [3]
When one assumes that two coplanar lines always meet, interesting properties
arise. These properties are useful in describing visual phenomena that occur in
three-dimensional perspective. For example, when one looks down a straight
railroad, one is given the impression that the rails meet at some point on the
horizon.
The two simplest objects in projective geometry are ranges and pencils. A
range is the set of all points on a line, and a pencil is the set of all coplanar
lines passing through a point. The simplest relationship between these objects is
when corresponding members are incident. We say that a range is a ’section’ of
a pencil and that a pencil ’projects’ the range. Think of a range like a flat
image, and a pencil like the lens of a camera. The lens of a camera relates
points in three-dimensional space to specific points on a flat image that
maintain the three-dimensional perspective. When a series of these relationships
coincide, we call the transformation a projectivity. Projectivities are useful
not only in explaining the transformation from a three-dimensional scene to a
two-dimensional image; they are also useful in describing what happens when this
two-dimensional image is projected onto a screen.
Figure 3: The set of lines through O form a pencil; the set of points that lie
on the line form a range. [7]
Figure 4: The above is a projectivity that maps the points A, B, C, D to A’,
B’, C’, D’ [8]
After a bit of investigation, we decided to use the study of projectivities to try
to describe and measure error or skew that occurs when a two dimensional image
is projected onto a curved screen. The investigation raised a few questions. For a
given screen curvature, where should one place the projector to minimize image
distortion? Imagine a theater with a curved screen. For a given projection, can
seats be placed such that they minimize visual error? Can we keep this visual
error beneath a certain threshold? How should we measure this error to begin
with? What do we mean when we say visual perspective?
2 Harmonic Points and Visual Perspective
Visual perspective is a phenomenon that gives two-dimensional images their
three-dimensional feel. Imagine a digital camera taking a picture of a landscape.
For each pixel of the image, the lens casts a ray from the camera sensor, through
the picture plane, to objects in the three-dimensional scene.
Objects that lie upon the same ray appear at the same point of the image, and
different viewpoints will cast rays hitting different things that all represent
the same three-dimensional scene. Each individual picture represents a
perspectivity from the three-dimensional scene to the two-dimensional
photograph. How do we explain these properties that are maintained when our
viewpoint or viewing angle is changed? How does our eye recognize that objects
in two different pictures represent the same scene? This requires the
introduction of quadrangular and harmonic sets. We can think of the harmonic
relationship much like the three-dimensional relationship between objects in a
two-dimensional picture.
Figure 5: The points on the blue line represent a Quadrangular Set. [6]
Harmonic points are members of harmonic sets, and members of these sets
fulfill a specific spatial relationship. Define a quadrangular set as the section of
a complete quadrangle by any line g that does not pass through a vertex. In
general, this is a set of six collinear points, one for each of the six possible lines
drawn through any set of four points. If our line g passes through both of the
diagonal points of our quadrangle, this reduces our set of points to four, and
this set is known as harmonic. [3]
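In coordinates along the line, four collinear points A, B, J, F form a harmonic set precisely when their cross-ratio (A,B; J,F) equals -1, which lets us solve for the fourth point directly. A minimal Python sketch of this relationship (a companion to, not part of, the Matlab code in the Appendix; the function names are mine), checked against the values A = 0, B = 6, J = 4.25 used later in Experiment I:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def harmonic_conjugate(a, b, c):
    """Fourth point d of the harmonic set H(ab, cd), i.e. with
    cross_ratio(a, b, c, d) == -1.

    Derived by solving (c - a)(d - b) = -(c - b)(d - a) for d.
    """
    return ((c - a) * b + (c - b) * a) / (2 * c - a - b)

# Experiment I's values: A at the origin, B at 6, J at 4.25
f = harmonic_conjugate(0.0, 6.0, 4.25)
print(f)                                # 10.2, matching the reported F
print(cross_ratio(0.0, 6.0, 4.25, f))   # ~ -1.0
```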
We know that each point of a quadrangular set is uniquely determined by
the remaining points, by Theorem 2.41 in Coxeter’s Projective Geometry [4].
This is also true for harmonic sets, which are simply a subset of quadrangular
sets. The harmonic relationship is useful to our purpose because it is invariant
under projectivities and perspectivities, by Theorem 3.33 in Coxeter’s Projective
Geometry [4]. For example, suppose we were atop South Hall overlooking UCSB.
As we walk back and forth across the roof, our viewing position changes relative
to the other buildings. Some walls appear foreshortened, some lengthened, but
overall our three-dimensional understanding of the space remains unchanged,
regardless of viewing position. But what happens if we take a picture from atop
South Hall, and display it on a curved screen? This is equivalent to mapping
four collinear harmonic points (our two dimensional picture) to four points on
a conic curve by a projectivity.
3 The Model
Figure 6: The above is a graphical representation of our model. Harmonic points
are noted, and the viewer is represented by a star.
Our model is illustrated above in Figure 6. We begin with our base line
l, which represents our flat screen (viewing plane). On l, lie three points A,
B, and J, which imply the harmonic relationship H(AB,JF). The model we have
developed makes several assumptions to simplify our problem of projection onto
a conic screen. The screen is assumed to be circular to simplify the math
involved, although with some tweaking any conic could be used (in fact, since a
conic is a circle in perspective, the model captures the principles involved in this
project). The viewing plane and our screen are assumed to intersect at two of
the harmonic points. Let A and B be elements of both the circle (Allosphere)
and our viewing plane. We choose a projection point P and draw the lines
AP and BP. We draw a third line cast arbitrarily from our point A. Lines 4, 5,
and 6 are implied by our harmonic relationship and determine our point F of
H(AB,JF).
Let the point where PJ intersects our circle be J’, and the point where PF
intersects our circle be F’. These are the representations of points J and F after
projection onto the screen of the Allosphere.
Finally, we choose a point E which represents the viewer. By construction,
we know A, and B are visually correct. We draw the lines EJ and EF, and
call their intersections with the circle J” and F” respectively. Now, we measure
the angles J’EJ” and F’EF”. These angles are how we’ve chosen to measure the
projective error in our model.
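In Cartesian coordinates, this angular error at the viewer E is just the angle between the two lines of sight, to the intended position and to the perceived position. A small Python sketch (the function name and sample coordinates are mine, not from the thesis's Matlab code):

```python
import math

def viewer_angle(ex, ey, px, py, qx, qy):
    """Angle in degrees at the viewer E = (ex, ey) between the line of sight
    to the perceived position (px, py) and to the intended position (qx, qy)."""
    a1 = math.atan2(py - ey, px - ex)
    a2 = math.atan2(qy - ey, qx - ex)
    d = abs(a1 - a2) % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))

# Viewer at (0, 1), perceived point at (0, 0), intended point at (1, 0)
print(viewer_angle(0, 1, 0, 0, 1, 0))  # ~ 45 degrees
```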
Furthermore, we have chosen J to represent points near the center of the
screen, where viewers will spend more time looking. Error in this point is
smaller by construction as J lies much closer to the circle than F. We have
chosen F to represent points on the periphery, where a larger margin of error is
more acceptable. Our goals are as follows:
• Given a projection, how can we position viewers to keep visual error below
a certain threshold?
• Also, given a viewer, how do we place our projector to minimize his perceived error?
Projective space exists without the notion of a measure, so it was convenient
to express this model with Cartesian coordinates. It is convenient to assume
one of the harmonic points for both the picture plane and the screen lies on the
origin. It is also convenient to assume our point B also lies on the x-axis.
This raises the question: is there any position where we can guarantee our
harmonic points are all perceived correctly? It turns out, there’s exactly one
point for any projection where this is true. If the viewer is placed at the
intersection of the lines from J and F, then the point the viewer perceives on
the screen, its intended position on the viewing plane, and the viewer himself
are collinear. The points A and B appear correctly to all viewers of this
projection by construction, and from this point J and F appear correctly as well.
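Locating that single point amounts to intersecting the line through J and J′ with the line through F and F′. A Python sketch of the computation (the J line, through J = (4.25, 0) with slope 4, matches Experiment I; the slope −1 chosen for the F line is purely hypothetical, for illustration only):

```python
def intersect(m1, b1, m2, b2):
    """Intersection of y = m1*x + b1 and y = m2*x + b2 (requires m1 != m2)."""
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Line J-J': slope 4 through (4.25, 0)  ->  y = 4x - 17
# Line F-F': hypothetical slope -1 through (10.2, 0)  ->  y = -x + 10.2
x, y = intersect(4.0, -17.0, -1.0, 10.2)
print(x, y)  # the one viewer position where J and F are both perceived correctly
```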
Intuitively, we must ask ourselves in what neighborhood we can put seats
such that their image error is less than a certain threshold. We will see that, for
the most part, our limiting factor in potential viewer positions is error on points
at the periphery of the screen. Since error for J is typically much smaller, to
keep error below a threshold value, it is convenient to place viewers along the
ray that passes through F and F’.
4 Matlab Code
The function conicharmonic takes as input the slopes of two lines cast from
point A, the x and y coordinates of our point J’, the slope of the line passing
through J and J’, the x-value of the point B, and, finally, our viewer’s x and y
coordinates. It was most convenient to have the user specify the point J’
because A, B, and J’ are three non-collinear points, and three non-collinear
points determine a circle of unique radius and center. Our function outputs the
values of F, F’, and J, and the viewer error for points F and J on the screen,
calculated by measuring the angles F’EF” and J’EJ”.
This function calls several subroutines. The subroutine circumscribe takes
three non-collinear points as input and outputs the center point and radius of
the unique circle that passes through all three of the points. The subroutine
yintercept takes as input the slope of a line and a point that the line passes
through, and outputs the y-intercept of that line. The subroutine linsolve takes
a matrix representing two non-parallel linear equations as input and outputs
the point in which they intersect. The subroutine pointline takes two points
as input and outputs the slope and y-intercept of the line that passes through
both. The subroutine linecirc takes the slope and y-intercept of a line, as well
as the center and radius of a circle, as input, and outputs the intersection
points of the line and the circle. Finally, viewerangle takes the viewer’s
position, a point’s intended position, and its perceived position as input and
outputs the angle between the intended and perceived positions.
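As a companion sketch in Python (not the Matlab subroutine itself), circumscribe can be implemented by turning the equal-distance conditions into a 2-by-2 linear system, checked here against Experiment II's three determining points A = (0,0), J′ = (4,−1), B = (6,0):

```python
def circumscribe(x1, y1, x2, y2, x3, y3):
    """Center (xc, yc) and radius r of the circle through three non-collinear points."""
    # Expanding |P - C|^2 = r^2 for each point and subtracting pairs
    # gives two linear equations in the center coordinates xc, yc.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1            # zero iff the points are collinear
    xc = (c1 * b2 - c2 * b1) / det
    yc = (a1 * c2 - a2 * c1) / det
    r = ((x1 - xc)**2 + (y1 - yc)**2) ** 0.5
    return xc, yc, r

# The screen of Experiment II: A=(0,0), J'=(4,-1), B=(6,0)
print(circumscribe(0, 0, 4, -1, 6, 0))  # center (3.0, 3.5), radius ~4.61
```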
Our code manifests slightly differently than our model only to simplify its
execution. We first calculate the circle that circumscribes points A, B, and
J’. This circle represents our screen. Since the user specifies the slope of the
line passing through J’, we use this slope to calculate our point J. We then
cast the rays of user determined slope from our point A. Then, we calculate
where our line from J intersects the aforementioned rays, and call these points P
(projector) and Q. We then draw the lines PB and QB. We have now completed
our quadrangle. Let the two corners of the quadrangle that are not P and Q be
called K and L. Finally, we draw the line through K and L, which determines the
point F.
We have now drawn our complete quadrangle and have determined all the
points in the relationship H(AB,JF). We then determine the intersection of the
line passing through F with the circle. We then call our viewerangle subroutine
to determine the perceived error for our points J and F. Finally, a diagram is
output displaying all the above information.
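The line-circle step at the heart of this pipeline (the role played by linecirc) reduces to a quadratic after substituting the line into the circle's equation. A Python sketch under the same conventions, with my own function name, using the circle through A = (0,0), J′ = (4,−1), B = (6,0) from Experiment II:

```python
import math

def line_circle(m, c, xc, yc, r):
    """Intersection points of y = m*x + c with the circle of center (xc, yc),
    radius r. Returns zero, one, or two (x, y) points."""
    # Substitute y = m*x + c into (x - xc)^2 + (y - yc)^2 = r^2
    A = 1 + m * m
    B = 2 * (m * (c - yc) - xc)
    C = xc * xc + (c - yc) ** 2 - r * r
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                      # the line misses the screen entirely
    xs = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1, -1)]
    return [(x, m * x + c) for x in xs]

# The baseline y = 0 against the Experiment II screen: center (3, 3.5), r = sqrt(21.25)
print(line_circle(0.0, 0.0, 3.0, 3.5, math.sqrt(21.25)))
# intersects at B=(6, 0) and A=(0, 0), up to rounding
```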
5 Data and Analysis
The results of our experiments are not incredibly surprising. The closer the
viewer is to the rays that determine the points J and F, the less error he perceives
when the points are projected onto the screen. Furthermore, the closer J is to
J’ and F is to F’, the less error any viewer will experience. This means that it is
in our best interest to ensure the rays that determine the positions of J’ and F’
do not strike the screen at an oblique angle. Not only can the positions of the
harmonic points be manipulated, so too can the angles of the rays they cast.
Furthermore, the viewer can be freely moved. In order to remain concise, I have
chosen to show two experiments with interesting results.
In the Appendix, subsection Experiment I, you will see five sequential im-
ages. I have chosen to vary the slope of one of the rays cast by A. You will see
that the farther our viewer is from this line, the more error he perceives
in our point F. In fact, he experiences more error in the point J as well, but
this error is negligible in comparison. Our viewer remains at the
point (2,4), A lies on the origin, J has value 4.25, and F lies at 10.2. A always
casts two rays, one of slope .25, and one of variable slope. J always casts a ray
of slope 4. Our point B has value 6.
Please note that, since the position of the points A, B, and J have not
changed, the position of the point F has not changed. However, the line that
determines its position strikes the screen at a different point in each iteration,
and this determines the error that our viewer sees. Clearly, when the ray that
determines the position of F and F’ is close to our user, he experiences less visual
distortion at these points. Note that the error our viewer detects for point J
remains constant. This gives us an excellent tool to reduce the amount of error
our viewer experiences at the periphery (F) without altering the perception of
images in the center of the screen (J).
In the Appendix, subsection Experiment II, you will see three sequential
images. This time, I have varied the slope of the line cast from the point J’.
This means that the harmonic points do change slightly, but careless variance
in this ray produces far greater error in both points for the user. We will see
that the distance from F to F’ plays a significant factor in viewer error in these
cases. In this experiment the lines from A have slope .25 and 1.25, J’ is at the
point (4,-1), B has an x-value of 6, and our viewer maintains his position at
(2,4).
In this case, our intended image has changed quite a bit, with the positions
of J and F moving drastically. However, its representation on the screen changes
very little. This is another way to introduce error into the system. In this case,
we see that if the line that determines F strikes our baseline at an oblique
angle, its distance from F’ is greater, and its perceived error is much higher.
In the final image of the series,
our point F’ is a full 20.4 degrees away from its intended position. We can use
this information to ensure we use a projection that reduces the distance between
F and F’, ensuring error remains low for all users.
6 Experiments to Come
As of now, the program only outputs the visual error that the viewer experiences.
In the future, I’d like to further develop the program to the point that it specifies
the area in which seats can be placed so that the visual error experienced lies
under a threshold value. I was unable to determine a way to do so without some
serious computation time.
Furthermore, the screen is circular because it is uniquely determined by three
points. There are an infinite number of conic sections that pass through these
same three points. Motivated by a real world application of this problem, it
would be beneficial to modify my program to calculate viewer error when the
screen is a non-circular conic section, e.g. an ellipse.
Lastly, I’d like to investigate the possibility of projectivities between non-
intersecting, or tangent lines and conics. What if our projectivity fixes only one
point, or no points? How does that affect the perceptual error? I’d imagine that
visual error would be far greater in these situations. It would be interesting to
examine real applications of these ideas, like IMAX theaters and the Allosphere,
and collect data and measurements from their projections.
7 Applications
These experiments give us a way to quantify viewer error. Curved screens,
when executed properly, give users a more immersive experience. Theaters
with curved screens already exist and are very popular with users, e.g. IMAX.
If the owner of a theater does decide to use a curved screen, he will likely
need to be made aware that image skew will occur. Testing positional error in
harmonic points gives us quantifiable data to tell clients exactly how much error
an individual viewer will experience with a given projector position.
With a given projector and screen setup, we will be able to tell clients where
seats can be positioned in order to keep perceptual error below a certain upper
bound. In theaters with multiple projectors, e.g. the Allosphere, each projector
will have an area of acceptable seat placement. The intersection of these areas
will represent the seats that will experience an acceptable image regardless of
viewing direction.
Furthermore, suppose a local venue with an outdoor seating area is hiring
a company to project a movie onto a curved screen. With our tests,
we will be able to place the screen and projector in a way that will keep error to
a minimum. Projector and screen position can be limited by viewing direction,
objects that may obstruct projection, and size/curvature constraints. Our tests
will allow us to optimize the aforementioned variables to provide the best viewing
experience.
8 Conclusion
After beginning work in the Allosphere, I became fascinated with image projec-
tion and capture for curved screens. Your first time in the Allosphere is likely to
induce the same fascination. It really feels like you are experiencing something
rather than viewing it. Much of this effect is due to the fact that the screen
curves about you on all sides. Matt Wright had made it clear that this curvature
came with its own difficulties. For three-dimensional images, simple polarized
glasses wouldn’t do, and the Allosphere team had to use specially designed shut-
tering glasses in order to give the illusion of three-dimensionality. Not only this,
but the screen required many different projectors to cover its entire surface. The
position of these projectors was limited to above the catwalk, and above each
entrance and exit. Many of these projectors struck the screen at oblique angles.
What did this do to the images produced by the Allosphere? Indeed, this is one
of the problems that motivated this project.
I was surprised that it did almost nothing. Aside from a lower pixel density
in images projected by oblique projectors, it was difficult or impossible to tell
that the Allosphere team was operating within such rigid constraints. They were
using a ’ball of Go-Pros’ and some clever Matlab scripts to determine exactly
where each pixel of each projector was in three space, and using this ’distortion
map’ to modify rendered images so they would be projected as perceptually
correct. Is there a mathematically rigorous way to describe the error that oc-
curs in this projection setup so that we might do away with this pre-rendering
distortion map? Professor Jacob recommended we turn to the classical field of
projective geometry to see what it could tell us.
The ideas of projective geometry go back to Euclid, who laid out basic notions
in his volume on Optics, but the concept of a projectivity arose later and
provided the tool we were looking for. Capturing images with a camera
describes a projectivity from the three-dimensional scene to the two-dimensional
photo. Projection again describes a projectivity from our two dimensional image
onto our screen.
We needed a way to describe the three-dimensional properties that were
maintained among the above projectivities. Harmonic points proved to be use-
ful in this pursuit. If the front two corners of a building and the point at
infinity form a trio of points, they imply a fourth point, let’s say a third visible
corner of the building. If one takes a different picture of the same building, the
three-dimensional space has not changed, and given those same three points,
we assume they will imply the same fourth point. Projectivities among ’flat’
images maintain this relationship. What happens if we project a ’flat’ image
onto a curved screen?
Clearly, there will be some error. We developed a way to quantify this error
through difference in angle between intended and perceived position of a point.
We were able to prove that for any projection and screen curvature where two of
the harmonic points that lie on the line also lie on the curved screen, there is a
single point where all harmonic points are correctly perceived. We were able to
develop theory to minimize viewing error by placing points that experience less
error in the center of the image, and points with more error on the periphery.
Lastly, we determined areas where viewers would experience less viewing error,
motivating practical applications like theater construction.
Furthermore, the theory can be further developed for generalized conics. We
can investigate what happens when projectivities fix either one, or none of the
harmonic points. With fewer constraints, the perceived error is likely to be much
higher. This paper will give others with an interest in projecting onto curved
surfaces a jumping-off point. Our scope was rather narrow in requiring that the
screen be circular, but the examination of harmonic points proved extremely
useful in describing three-dimensional perspective.
9 Appendix
9.1 Experiment I
Figure 7: Slope of 1.5.
The projection of F’ onto the x-axis is 6.99
The perceived error in the angle of point F is 3.25 degrees.
The perceived error in the angle of point J is 7.56 degrees.
Figure 8: Slope of 1.25.
The projection of F’ onto the x-axis is 6.98
The perceived error in the angle of point F is 3.64 degrees.
The perceived error in the angle of point J is 7.56 degrees.
Figure 9: Slope of 1.
The projection of F’ onto the x-axis is 6.95
The perceived error in the angle of point F is 4.19 degrees.
The perceived error in the angle of point J is 7.56 degrees.
Figure 10: Slope of 0.75.
The projection of F’ onto the x-axis is 6.91
The perceived error in the angle of point F is 5.01 degrees.
The perceived error in the angle of point J is 7.56 degrees.
Figure 11: Slope of 0.5.
The projection of F’ onto the x-axis is 6.83
The perceived error in the angle of point F is 6.38 degrees.
The perceived error in the angle of point J is 7.56 degrees.
9.2 Experiment II
Figure 12: Slope of 4.
J = 4.25
F = 10.2
The projection of F’ onto the x-axis is 6.98
The perceived error in the angle of point F is 3.64 degrees.
The perceived error in the angle of point J is 7.56 degrees.
Figure 13: Slope of -4.
J = 3.75
F = 15
The projection of F’ onto the x-axis is 6.87
The perceived error in the angle of point F is 14.61 degrees.
The perceived error in the angle of point J is 1.83 degrees.
Figure 14: Slope of -2.
J = 3.5
F = 21
The projection of F’ onto the x-axis is 6.84
The perceived error in the angle of point F is 20.4 degrees.
The perceived error in the angle of point J is 1.25 degrees.
9.3 Matlab Code
function [XF, JBASELINE, FCIRCLE, JERROR, FERROR] = ...
    conicharmonic(ma1, ma2, jx, jy, mj1, b, viewerx, viewery)
% Measures perceived viewer error for harmonic points projected onto a circle.
% Assume point A is the origin.
% Create the baseline.
space = linspace(-1000, 1000, 10000);
baseline = 0 * space;
% Find the circle that circumscribes triangle AJB.
[xc, yc, r] = circumscribe(0, 0, jx, jy, b, 0);
% Find the y-intercept of line J.
bj = yintercept(mj1, jx, jy);
% Find where line J intersects the baseline.
JBASE = linsolve([-mj1 1; 0 1], [bj; 0]);
JBASELINE = JBASE(1);
disp('Intersection of line J with baseline');
disp(JBASELINE);
% Cast rays of slope ma1 and ma2 from point A.
y1 = ma1 * space;
y2 = ma2 * space;
% Draw the line of slope mj1 from point J.
y3 = mj1 * space + bj;
% Find where the line from J intersects the two rays from A, at P and Q.
P = linsolve([-ma1 1; -mj1 1], [0; bj]);
Q = linsolve([-ma2 1; -mj1 1], [0; bj]);
% Connect point P to point B.
[mp, bp] = pointline([b P(1)], [0 P(2)]);
y4 = mp * space + bp;
% Connect point Q to point B.
[mq, bq] = pointline([b Q(1)], [0 Q(2)]);
y5 = mq * space + bq;
% Identify our harmonic conjugate determinant points.
K = linsolve([-ma2 1; -mp 1], [0; bp]);
L = linsolve([-ma1 1; -mq 1], [0; bq]);
% Use the pointline method to find the line between K and L and draw it.
[mKL, bKL] = pointline([K(1) L(1)], [K(2) L(2)]);
y6 = mKL * space + bKL;
% Identify and print point F (intersection with the baseline).
F = linsolve([-mKL 1; 0 1], [bKL; 0]);
XF = F(1);
disp('F on baseline'); disp(XF);
% Find where line F intersects the circle.
[FCIRx, FCIRy] = linecirc(mKL, bKL, xc, yc, r);
FCIRCLE = FCIRx(1);
disp('Projection of intersection of F with circle onto x-axis');
disp(FCIRCLE);
% Figure out the perceived angle difference in point F.
Fangle1 = viewerangle(viewerx, viewery, FCIRx(1), FCIRy(1), XF, 0);
Fangle2 = viewerangle(viewerx, viewery, FCIRx(2), FCIRy(2), XF, 0);
% Choose the right intersection of F with the conic.
if Fangle1 < 90
    FERROR = Fangle1;
else
    FERROR = Fangle2;
end
disp('Perceived angle distance in harmonic point F');
disp(FERROR);
% Figure out the perceived angle difference in point J.
JERROR = viewerangle(viewerx, viewery, jx, jy, JBASELINE, 0);
disp('Perceived angle distance in harmonic point J');
disp(JERROR);
figure
% Plot the circle (screen).
viscircles([xc yc], r);
hold on;
plot(space, baseline, space, y1, space, y2, space, y3, ...
    space, y4, space, y5, space, y6);
plot(viewerx, viewery, '*');
disp('viewerx'); disp(viewerx);
disp('viewery'); disp(viewery);
axis([-20, 40, -20, 20]);
end
References
[1] “Chapter 3 - OpenGL Programming Guide.” Chapter 3 - OpenGL Program-
ming Guide. N.p., n.d. Web. 07 July 2016.
[2] Kuchera-Morin, JoAnn. “The AlloSphere at the California NanoSystems In-
stitute, UC Santa Barbara.” The AlloSphere at the California NanoSystems
Institute, UC Santa Barbara. The Regents of the University of California,
2010. Web. 07 July 2016.
[3] Coxeter, H. S. M. Introduction to Geometry. New York: Wiley, 1989. Print.
[4] Coxeter, H. S. M. Projective Geometry. New York: Springer, 2003. Print.
[5] Casselman, Bill. “Feature Column from the AMS.” American Mathematical
Society.
[6] Rorig, Thilo. “Lecture 7.” Lecture Notes for Geometry I at T.U. Berlin.
T.U. Berlin, 11 June 2012. Web. 19 July 2016.
http://dgd.service.tu-berlin.de/wordpress/geometryws12/2012/11/06/lecture-7/.
[7] Weisstein, Eric W. “Pencil.” From MathWorld –A Wolfram Web Resource.
http://mathworld.wolfram.com/Pencil.html
[8] “Homography.” Wikipedia. Wikimedia Foundation, n.d. Web. 19 July 2016.
https://en.wikipedia.org/wiki/Homography.

COMO SUBIR UN PDF A SLIDERSHARE VALEPA895
 
Task Network Inc. Flyer
Task Network Inc. FlyerTask Network Inc. Flyer
Task Network Inc. FlyerHazel Lustre
 
De las limitaciones y excepciones
De las limitaciones y excepcionesDe las limitaciones y excepciones
De las limitaciones y excepcionesluimadrigal
 
Desktops enviroment a bdus9dhfdsopfcdfdf
Desktops enviroment a bdus9dhfdsopfcdfdfDesktops enviroment a bdus9dhfdsopfcdfdf
Desktops enviroment a bdus9dhfdsopfcdfdfabdulllam
 
Nota puntos de atención informe trimestral asoban
Nota puntos de atención informe trimestral asobanNota puntos de atención informe trimestral asoban
Nota puntos de atención informe trimestral asobanOxígeno Bolivia
 
An Open Source Web Service for Registering and Managing Environmental Samples
 An Open Source Web Service for Registering and Managing Environmental Samples An Open Source Web Service for Registering and Managing Environmental Samples
An Open Source Web Service for Registering and Managing Environmental SamplesAnusuriya Devaraju
 
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...Vinícius Lourenço
 
Augmenter le taux de succès et la rentabilité de vos Propositions Commerciales
Augmenter le taux de succès et la rentabilité de vos Propositions CommercialesAugmenter le taux de succès et la rentabilité de vos Propositions Commerciales
Augmenter le taux de succès et la rentabilité de vos Propositions CommercialesBespoke Bids Limited
 

Viewers also liked (17)

Redes sociales
Redes socialesRedes sociales
Redes sociales
 
Literatura Hebrea; Carmen Gómez, Saina Cachiguango, Raquel Llarch i Noor Harrak
Literatura Hebrea; Carmen Gómez, Saina Cachiguango, Raquel Llarch i Noor HarrakLiteratura Hebrea; Carmen Gómez, Saina Cachiguango, Raquel Llarch i Noor Harrak
Literatura Hebrea; Carmen Gómez, Saina Cachiguango, Raquel Llarch i Noor Harrak
 
Invitació a la lectura de tres autors clàssics: Dante, Shakespeare i Cervantes.
Invitació a la lectura de tres autors clàssics: Dante, Shakespeare i Cervantes.Invitació a la lectura de tres autors clàssics: Dante, Shakespeare i Cervantes.
Invitació a la lectura de tres autors clàssics: Dante, Shakespeare i Cervantes.
 
COMO SUBIR UN PDF A SLIDERSHARE
COMO SUBIR UN PDF A SLIDERSHARE COMO SUBIR UN PDF A SLIDERSHARE
COMO SUBIR UN PDF A SLIDERSHARE
 
Task Network Inc. Flyer
Task Network Inc. FlyerTask Network Inc. Flyer
Task Network Inc. Flyer
 
De las limitaciones y excepciones
De las limitaciones y excepcionesDe las limitaciones y excepciones
De las limitaciones y excepciones
 
Desktops enviroment a bdus9dhfdsopfcdfdf
Desktops enviroment a bdus9dhfdsopfcdfdfDesktops enviroment a bdus9dhfdsopfcdfdf
Desktops enviroment a bdus9dhfdsopfcdfdf
 
Презентация
ПрезентацияПрезентация
Презентация
 
Plantilla 4
Plantilla 4Plantilla 4
Plantilla 4
 
Nota puntos de atención informe trimestral asoban
Nota puntos de atención informe trimestral asobanNota puntos de atención informe trimestral asoban
Nota puntos de atención informe trimestral asoban
 
Redes sociales
Redes socialesRedes sociales
Redes sociales
 
An Open Source Web Service for Registering and Managing Environmental Samples
 An Open Source Web Service for Registering and Managing Environmental Samples An Open Source Web Service for Registering and Managing Environmental Samples
An Open Source Web Service for Registering and Managing Environmental Samples
 
Estos son mis maestr@s
Estos son mis maestr@sEstos son mis maestr@s
Estos son mis maestr@s
 
Brieff
BrieffBrieff
Brieff
 
Reglamento del cetis
Reglamento del cetisReglamento del cetis
Reglamento del cetis
 
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...
WordCamp Rio de Janeiro 2016 - Vinícius Lourenço | Lojas Virtuais Descomplica...
 
Augmenter le taux de succès et la rentabilité de vos Propositions Commerciales
Augmenter le taux de succès et la rentabilité de vos Propositions CommercialesAugmenter le taux de succès et la rentabilité de vos Propositions Commerciales
Augmenter le taux de succès et la rentabilité de vos Propositions Commerciales
 

Similar to ResearchPaper

Unit II & III_uncovered topics.doc notes
Unit II & III_uncovered topics.doc notesUnit II & III_uncovered topics.doc notes
Unit II & III_uncovered topics.doc notessmithashetty24
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an ObjectAnkur Tyagi
 
visual realism in geometric modeling
visual realism in geometric modelingvisual realism in geometric modeling
visual realism in geometric modelingsabiha khathun
 
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...Youness Lahdili
 
Object Distance Detection using a Joint Transform Correlator
Object Distance Detection using a Joint Transform CorrelatorObject Distance Detection using a Joint Transform Correlator
Object Distance Detection using a Joint Transform CorrelatorAlexander Layton
 
Handout optik-geometri-english
Handout optik-geometri-englishHandout optik-geometri-english
Handout optik-geometri-englishsupraptounnes
 
Solving the Pose Ambiguity via a Simple Concentric Circle Constraint
Solving the Pose Ambiguity via a Simple Concentric Circle ConstraintSolving the Pose Ambiguity via a Simple Concentric Circle Constraint
Solving the Pose Ambiguity via a Simple Concentric Circle ConstraintDr. Amarjeet Singh
 
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSAPPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSijcga
 
3D Display Method
3D Display Method3D Display Method
3D Display MethodKhaled Sany
 
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...INFOGAIN PUBLICATION
 
Review of Linear Image Degradation and Image Restoration Technique
Review of Linear Image Degradation and Image Restoration TechniqueReview of Linear Image Degradation and Image Restoration Technique
Review of Linear Image Degradation and Image Restoration TechniqueBRNSSPublicationHubI
 
Class[4][19th jun] [three js-camera&amp;light]
Class[4][19th jun] [three js-camera&amp;light]Class[4][19th jun] [three js-camera&amp;light]
Class[4][19th jun] [three js-camera&amp;light]Saajid Akram
 
Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes  Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes ijcga
 
Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes  Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes ijcga
 
Introduction to Multi-view Drawing
Introduction to Multi-view Drawing Introduction to Multi-view Drawing
Introduction to Multi-view Drawing Mekete Mulualem
 

Similar to ResearchPaper (20)

Mirrors.pptx
Mirrors.pptxMirrors.pptx
Mirrors.pptx
 
Unit II & III_uncovered topics.doc notes
Unit II & III_uncovered topics.doc notesUnit II & III_uncovered topics.doc notes
Unit II & III_uncovered topics.doc notes
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
 
Notes04.pdf
Notes04.pdfNotes04.pdf
Notes04.pdf
 
visual realism in geometric modeling
visual realism in geometric modelingvisual realism in geometric modeling
visual realism in geometric modeling
 
Final
FinalFinal
Final
 
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...
6 - Conception of an Autonomous UAV using Stereo Vision (presented in an Indo...
 
Object Distance Detection using a Joint Transform Correlator
Object Distance Detection using a Joint Transform CorrelatorObject Distance Detection using a Joint Transform Correlator
Object Distance Detection using a Joint Transform Correlator
 
Handout optik-geometri-english
Handout optik-geometri-englishHandout optik-geometri-english
Handout optik-geometri-english
 
Solving the Pose Ambiguity via a Simple Concentric Circle Constraint
Solving the Pose Ambiguity via a Simple Concentric Circle ConstraintSolving the Pose Ambiguity via a Simple Concentric Circle Constraint
Solving the Pose Ambiguity via a Simple Concentric Circle Constraint
 
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSAPPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
 
Curves and surfaces
Curves and surfacesCurves and surfaces
Curves and surfaces
 
3D Display Method
3D Display Method3D Display Method
3D Display Method
 
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...
 
Review of Linear Image Degradation and Image Restoration Technique
Review of Linear Image Degradation and Image Restoration TechniqueReview of Linear Image Degradation and Image Restoration Technique
Review of Linear Image Degradation and Image Restoration Technique
 
427lects
427lects427lects
427lects
 
Class[4][19th jun] [three js-camera&amp;light]
Class[4][19th jun] [three js-camera&amp;light]Class[4][19th jun] [three js-camera&amp;light]
Class[4][19th jun] [three js-camera&amp;light]
 
Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes  Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes
 
Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes  Visual Hull Construction from Semitransparent Coloured Silhouettes
Visual Hull Construction from Semitransparent Coloured Silhouettes
 
Introduction to Multi-view Drawing
Introduction to Multi-view Drawing Introduction to Multi-view Drawing
Introduction to Multi-view Drawing
 

ResearchPaper

developing a distortion map. This distortion map is a pixel-by-pixel transformation that is applied prior to rendering and displaying through the projectors. Applying this transformation to images over 60 times a second is computationally expensive, and Matt and I were investigating the possibility of minimizing this computational load by transforming the three-dimensional scene itself.

When Matt left the university I needed a new senior thesis advisor, and Professor Bill Jacob volunteered to help. Professor Jacob suggested we approach this type of problem using the mathematics of projective geometry. Like any branch of geometry, projective geometry describes the relationships between points and lines. However, projective geometry is different because it does not admit the idea that two lines in a plane might fail to meet by being parallel. [3] When one assumes that two coplanar lines always meet, interesting properties arise. These properties are useful in describing visual phenomena that occur in three-dimensional perspective. For example, when one looks down a straight railroad, one gets the impression that the rails meet at some point on the horizon.

The two simplest objects in projective geometry are ranges and pencils. A range is the set of all points on a line, and a pencil is the set of all coplanar lines passing through a point. The simplest relationship between these objects is when corresponding members are incident.
We say that a range is a 'section' of a pencil and that a pencil 'projects' the range. Think of a range like a flat image, and a pencil like the lens of a camera. The lens of a camera relates points in three-dimensional space to specific points on a flat image in a way that maintains the three-dimensional perspective. When a series of these relationships coincides, we call the transformation a projectivity. Projectivities are useful not only in explaining the transformation from a three-dimensional scene to a two-dimensional image; they are also useful in describing what happens when this two-dimensional image is projected onto a screen.

Figure 3: The set of lines through O is a pencil; the set of points that lie on the line is called a range. [7]

Figure 4: The above is a projectivity that maps the points A, B, C, D to A', B', C', D'. [8]

After a bit of investigation, we decided to use the study of projectivities to describe and measure the error, or skew, that occurs when a two-dimensional image is projected onto a curved screen. The investigation raised a few questions. For a given screen curvature, where should one place the projector to minimize image distortion? Imagine a theater with a curved screen: for a given projection, can seats be placed so as to minimize visual error? Can we keep this visual error beneath a certain threshold? How should we measure this error to begin with? And what do we mean when we say visual perspective?
2 Harmonic Points and Visual Perspective

Visual perspective is the phenomenon that gives two-dimensional images their three-dimensional feel. Imagine a digital camera taking a picture of a landscape. For each pixel of the image, the lens casts a ray from the camera sensor, through the picture plane, to objects in the three-dimensional scene. Things that lie upon the same ray do not appear separately, and different viewpoints will cast rays hitting different things that all represent the same three-dimensional scene. Each individual picture represents a perspectivity from the three-dimensional scene to the two-dimensional photograph. How do we explain the properties that are maintained when our viewpoint or viewing angle is changed? How does our eye recognize that objects in two different pictures represent the same scene? This requires the introduction of quadrangular and harmonic sets. We can think of the harmonic relationship much like the three-dimensional relationship between objects in a two-dimensional picture.

Figure 5: The points on the blue line represent a quadrangular set. [6]

Harmonic points are members of harmonic sets, and members of these sets fulfill a specific spatial relationship. Define a quadrangular set as the section of a complete quadrangle by any line g that does not pass through a vertex. In general, this is a set of six collinear points, one for each of the six possible lines drawn through any set of four points. If our line g passes through both of the diagonal points of our quadrangle, this reduces our set of points to four, and this set is known as harmonic. [3]
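Algebraically, four collinear points A, B, J, F form the harmonic set H(AB,JF) exactly when their cross-ratio is -1. As a small illustration (a Python sketch, not part of the author's Matlab code; the coordinates are the Experiment I values quoted later, A = 0, B = 6, J = 4.25), the cross-ratio condition can be solved directly for the harmonic conjugate:

```python
def harmonic_conjugate(a, b, j):
    """Return f such that the cross-ratio (a, b; j, f) = -1,
    i.e. H(AB, JF) holds for collinear points at these coordinates."""
    # (j - a)(f - b) / ((j - b)(f - a)) = -1, solved for f:
    return ((j - a) * b + (j - b) * a) / (2 * j - a - b)

# Experiment I values from this paper: A = 0, B = 6, J = 4.25
print(harmonic_conjugate(0.0, 6.0, 4.25))  # 10.2
```

Reassuringly, this reproduces F = 10.2 for the Experiment I configuration, and the relationship is an involution: feeding F back in returns J.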
We know that each point of a quadrangular set is uniquely determined by the remaining points (Theorem 2.41 in Coxeter's Projective Geometry [4]). This is also true for harmonic sets, which are simply a subset of quadrangular sets. The harmonic relationship is useful for our purpose because it is invariant under projectivities and perspectivities (Theorem 3.33 in Coxeter's Projective Geometry [4]). For example, suppose we were atop South Hall overlooking UCSB. As we walk back and forth across the roof, our viewing position changes relative to the other buildings. Some walls appear foreshortened, some lengthened, but our overall three-dimensional understanding of the space remains unchanged, regardless of viewing position. But what happens if we take a picture from atop South Hall and display it on a curved screen? This is equivalent to mapping four collinear harmonic points (our two-dimensional picture) to four points on a conic curve by a projectivity.

3 The Model

Figure 6: The above is a graphical representation of our model. Harmonic points are noted, and the viewer is represented by a star.

Our model is illustrated above in Figure 6. We begin with our base line l, which represents our flat screen (viewing plane). On l lie three points A, B, and J, which imply the harmonic relationship H(AB,JF). The model we have developed makes several assumptions to simplify our problem of projection onto a conic screen. The screen is assumed to be circular to simplify the math involved, although with some tweaking any conic could be used (in fact, since a conic is a circle in perspective, the model captures the principles involved in this project). The viewing plane and our screen are assumed to intersect at two of the harmonic points: let A and B be elements of both the circle (the Allosphere) and our viewing plane.

We choose a projection point P and draw the lines AP and BP. We draw a third line cast arbitrarily from our point A. Lines 4, 5, and 6 are implied by our harmonic relationship and determine the point F of H(AB,JF). Let the point where PJ intersects our circle be J', and the point where PF intersects our circle be F'. These are the representations of the points J and F after projection onto the screen of the Allosphere. Finally, we choose a point E which represents the viewer. By construction, we know A and B are visually correct. We draw the lines EJ and EF, and call their intersections with the circle J'' and F'' respectively. Now we measure the angles J'EJ'' and F'EF''. These angles are how we have chosen to measure projective error in our model. Furthermore, we have chosen J to represent points near the center of the screen, where viewers will spend more time looking; error in this point is smaller by construction, as J lies much closer to the circle than F. We have chosen F to represent points on the periphery, where a larger margin of error is more acceptable. Our goals are as follows:

• Given a projection, how can we position viewers to keep visual error below a certain threshold?

• Given a viewer, how do we place our projector to minimize his perceived error?

Projective space exists without the notion of a measure, so it was convenient to express this model with Cartesian coordinates.
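Once the model is in Cartesian coordinates, the construction repeatedly intersects a line with the circular screen (to find J', F', J'', and F''). A minimal sketch of that step, assuming the line is given in slope-intercept form y = mx + c and the circle by its center and radius (this Python version is illustrative, corresponding to the lincirc subroutine described in Section 4, not the author's Matlab source):

```python
import math

def line_circle_intersections(m, c, cx, cy, r):
    """Intersect the line y = m*x + c with the circle of center (cx, cy)
    and radius r.  Returns the intersection points (possibly none)."""
    # Substitute y = m*x + c into (x - cx)^2 + (y - cy)^2 = r^2
    # and collect a quadratic in x.
    A = 1 + m * m
    B = 2 * (m * (c - cy) - cx)
    C = cx * cx + (c - cy) ** 2 - r * r
    disc = B * B - 4 * A * C
    if disc < 0:
        return []  # the line misses the circle entirely
    roots = sorted({(-B - math.sqrt(disc)) / (2 * A),
                    (-B + math.sqrt(disc)) / (2 * A)})
    return [(x, m * x + c) for x in roots]

# The x-axis meets the unit circle at (-1, 0) and (1, 0).
print(line_circle_intersections(0.0, 0.0, 0.0, 0.0, 1.0))
# [(-1.0, 0.0), (1.0, 0.0)]
```

A tangent line yields a single point (the repeated root is deduplicated), and a line that misses the circle yields an empty list, which is the degenerate case discussed under "Experiments to Come."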
It is convenient to assume that one of the harmonic points, shared by both the picture plane and the screen, lies at the origin, and that our point B also lies on the x-axis. This raises the question: is there any position from which we can guarantee our harmonic points are all perceived correctly? It turns out there is exactly one such point for any projection. If the viewer is placed at the intersection of the lines cast from J and F, then the point the viewer perceives on the screen and its intended position on the viewing plane are collinear with the viewer. The points A and B appear correct to all viewers of this projection by construction, but from this position J and F appear correct as well. Intuitively, we must then ask in what neighborhood we can put seats such that their image error is less than a certain threshold. We will see that, for the most part, the limiting factor in potential viewer positions is error in points at the periphery of the screen. Since the error for J is typically much smaller, to keep error below a threshold value it is convenient to place viewers along the ray that passes through F and F'.
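The error measure defined above, the angle at the viewer E between a point's intended position and its perceived position on the screen, can be sketched in a few lines. This Python function is an illustrative stand-in for the Matlab viewerangle subroutine described in Section 4; the name and signature are mine, not the author's:

```python
import math

def viewer_angle(viewer, intended, perceived):
    """Angle (in degrees) subtended at the viewer between the ray toward
    a point's intended position and the ray toward its perceived position."""
    vx, vy = viewer
    a1 = math.atan2(intended[1] - vy, intended[0] - vx)
    a2 = math.atan2(perceived[1] - vy, perceived[0] - vx)
    # Reduce the difference to the range [0, pi] before converting.
    d = abs(a1 - a2) % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))

# A viewer at the origin sees points on the two axes a right angle apart.
print(viewer_angle((0, 0), (1, 0), (0, 1)))
```

When the perceived and intended positions coincide (or are collinear with the viewer), the angle is zero, which is exactly the "visually correct" condition the model singles out.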
4 Matlab Code

The function conicharmonic takes as input: the slopes of two lines cast from the point A; the x and y coordinates of the point J'; the slope of the line passing through J and J'; the x value of the point B; and, finally, our viewer's x and y coordinates. It was most convenient to have the user specify the point J' because A, B, and J' are three non-collinear points, and these three points determine a circle of unique radius and center. The function outputs the values of F, F', and J, and the viewer error for the points F and J on the screen, calculated by measuring the angles F'EF'' and J'EJ''.

This function calls several subroutines. The subroutine circumscribe takes three non-collinear points as input and outputs the center point and radius of the unique circle that passes through all three. The subroutine yintercept takes as input the slope of a line and a point that the line passes through, and outputs the y-intercept of that line. The subroutine linsolve takes a matrix representing two non-parallel linear equations as input and outputs the point at which they intersect. The subroutine pointline takes two points as input and outputs the slope and y-intercept of the line that passes through both. The subroutine lincirc takes the slope and y-intercept of a line, as well as the center and radius of a circle, as input, and outputs the intersection points of the line and the circle. Finally, viewerangle takes the viewer's position, a point's intended position, and its perceived position as input, and outputs the angle between the intended and perceived positions.

Our code manifests slightly differently from our model, only to simplify its execution. We first calculate the circle that circumscribes the points A, B, and J'; this circle represents our screen. Since the user specifies the slope of the line passing through J', we use this slope to calculate our point J. We then cast the rays of user-determined slope from our point A.
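As a rough illustration of what the circumscribe subroutine computes (a Python sketch under the stated geometry, not the author's Matlab source, which appears in the appendix): the center is equidistant from all three points, and squaring and subtracting those distance conditions leaves a 2x2 linear system.

```python
def circumscribe(p1, p2, p3):
    """Center and radius of the unique circle through three non-collinear
    points, from the equidistance conditions written as a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # |C - p1|^2 = |C - p2|^2 and |C - p1|^2 = |C - p3|^2 linearize to:
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = x2**2 + y2**2 - x1**2 - y1**2
    b2 = x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("points are collinear; no unique circle")
    cx = (b1 * a22 - b2 * a12) / det   # Cramer's rule
    cy = (a11 * b2 - a21 * b1) / det
    r = ((x1 - cx) ** 2 + (y1 - cy) ** 2) ** 0.5
    return (cx, cy), r

# Circle through (0,0), (2,0), (1,1): center (1, 0), radius 1.
print(circumscribe((0, 0), (2, 0), (1, 1)))  # ((1.0, 0.0), 1.0)
```

The collinear check matters here: the model requires A, B, and J' to be non-collinear precisely so that this system has a unique solution.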
Then we calculate where the line from J intersects the aforementioned rays, and call these points P (the projector) and Q. We then draw the lines PB and QB; we have now completed our quadrangle. Let the two corners of the quadrangle that are not P and Q be called K and L. Finally, we draw the line through K and L, which determines the point F. We have now drawn our complete quadrangle and determined all the points in the relationship H(AB,JF). We then determine the intersection of the line through F with the circle, and call our viewerangle subroutine to determine the perceived error for the points J and F. Finally, a diagram is output displaying all of the above information.

5 Data and Analysis

The results of our experiments are not incredibly surprising. The closer the viewer is to the rays that determine the points J and F, the less error he perceives when the points are projected onto the screen. Furthermore, the closer J is to J' and F is to F', the less error any viewer will experience. This means that it is
in our best interest to ensure that the rays which determine the positions of J' and F' do not strike the screen at an oblique angle. Not only can the positions of the harmonic points be manipulated; so too can the angles of the rays they cast. Furthermore, the viewer can be freely moved. In order to remain concise, I have chosen to show two experiments with interesting results.

In the Appendix, subsection Experiment I, you will see five sequential images. I have chosen to vary the slope of one of the rays cast by A. You will see that the farther our viewer is from this line, the more error he perceives in our point F. In fact, he experiences more error in the point J as well, but this error is negligible in comparison. Our viewer remains at the point (2,4), A lies at the origin, J has value 4.25, and F lies at 10.2. A always casts two rays, one of slope .25 and one of variable slope; J always casts a ray of slope 4; and our point B has value 6. Please note that, since the positions of the points A, B, and J have not changed, the position of the point F has not changed either. However, the line that determines its position strikes the screen at a different point in each iteration, and this determines the error that our viewer sees. Clearly, when the ray that determines the positions of F and F' is close to our viewer, he experiences less visual distortion at these points. Note that the error our viewer detects for the point J remains constant. This gives us an excellent tool to reduce the amount of error our viewer experiences at the periphery (F) without altering the perception of images in the center of the screen (J).

In the Appendix, subsection Experiment II, you will see three sequential images. This time, I have varied the slope of the line cast from the point J'. This means that the harmonic points do change slightly, but careless variance in this ray produces far greater error in both points for the user.
We will see that the distance from F to F' is a significant factor in viewer error in these cases. In this experiment the lines from A have slopes .25 and 1.25, J' is at the point (4,-1), B has an x-value of 6, and our viewer maintains his position at (2,4). In this case, our intended image has changed quite a bit, with the positions of J and F moving drastically; however, its representation on the screen changes very little. This is another way to introduce error into the system. Here we see that if the line through F strikes our baseline at an oblique angle, its distance from F' is greater, and its perceived error is much higher. In the final image of the series, our point F' is a full 20.4 degrees away from its intended position. We can use this information to choose a projection that reduces the distance between F and F', ensuring error remains low for all users.

6 Experiments to Come

As of now, the program only outputs the visual error that the viewer experiences. In the future, I would like to further develop the program to the point that it specifies the area in which seats can be placed so that the visual error experienced lies under a threshold value. I was unable to determine a way to do so without some
serious computation time. Furthermore, the screen here is circular because a circle is uniquely determined by three points, but an infinite number of conic sections pass through those same three points. Motivated by real-world applications of this problem, it would be beneficial to modify my program to calculate viewer error when the screen is a non-circular conic section, i.e. an ellipse. Lastly, I would like to investigate the possibility of projectivities between non-intersecting or tangent lines and conics. What if our projectivity fixes only one point, or no points? How does that affect the perceptual error? I would imagine that visual error would be far greater in these situations. It would also be interesting to examine real applications of these ideas, like IMAX theaters and the Allosphere, and to collect data and measurements from their projections.

7 Applications

These experiments give us a way to quantify viewer error. Curved screens, when executed properly, give users a more immersive experience. Theaters with curved screens already exist and are very popular with audiences, e.g. IMAX. If the owner of a theater does decide to use a curved screen, he will likely need to be made aware that image skew will occur. Testing positional error in harmonic points gives us quantifiable data to tell clients exactly how much error an individual viewer will experience with a given projector position. With a given projector and screen setup, we will be able to tell clients where seats can be positioned in order to keep perceptual error below a certain upper bound. In theaters with multiple projectors, e.g. the Allosphere, each projector will have an area of acceptable seat placement, and the intersection of these areas will represent the seats that experience an acceptable image regardless of viewing direction. Furthermore, suppose we have an outdoor seating area at a local venue that is hiring a company to project a movie onto a curved screen.
With our tests, we will be able to place the screen and projector in a way that keeps error to a minimum. Projector and screen position can be limited by viewing direction, objects that may obstruct projection, and size and curvature constraints. Our tests will allow us to optimize the aforementioned variables to provide the best viewing experience.

8 Conclusion

After beginning work in the Allosphere, I became fascinated with image projection and capture for curved screens. Your first time in the Allosphere is likely to induce the same fascination: it really feels like you are experiencing something rather than viewing it. Much of this effect is due to the fact that the screen curves about you on all sides. Matt Wright had made it clear that this curvature came with its own difficulties. For three-dimensional images, simple polarized glasses wouldn't do, and the Allosphere team had to use specially designed shuttering glasses to give the illusion of three-dimensionality. Not only this, but the screen required many different projectors to cover its entire surface. The position of these projectors was limited to above the catwalk and above each entrance and exit, and many of them struck the screen at oblique angles. What did this do to the images produced by the Allosphere? Indeed, this is one of the problems that motivated this project.

I was surprised that it did almost nothing. Aside from a lower pixel density in images projected by oblique projectors, it was difficult or impossible to tell that the Allosphere team was operating within such rigid constraints. They were using a 'ball of Go-Pros' and some clever Matlab scripts to determine exactly where each pixel of each projector lay in three-space, and using this 'distortion map' to modify rendered images so they would be projected as perceptually correct. Is there a mathematically rigorous way to describe the error that occurs in this projection setup, so that we might do away with this pre-rendering distortion map?

Professor Jacob recommended we turn to the classical field of projective geometry to see what it could tell us. The ideas of projective geometry go back to Euclid, where basic notions were laid out in his volume on Optics, but its concept of projectivity arose later and provided the function we were looking for. Capturing images with a camera describes a projectivity from the three-dimensional scene to the two-dimensional photo; projection again describes a projectivity from our two-dimensional image onto the screen. We needed a way to describe the three-dimensional properties maintained among these projectivities, and harmonic points proved to be useful in this pursuit.
If the front two corners of a building and the point at infinity form a trio of points, they imply a fourth point, let's say a third visible corner of the building. If one takes a different picture of the same building, the three-dimensional space has not changed, so given those same three points, we assume they will imply the same fourth point. Projectivities among 'flat' images maintain this relationship. What happens if we project a 'flat' image onto a curved screen? Clearly, there will be some error. We developed a way to quantify this error as the difference in angle between the intended and perceived positions of a point. We were able to prove that for any projection and screen curvature where two of the harmonic points on the line also lie on the curved screen, there is a single point from which all harmonic points are correctly perceived. We developed theory to minimize viewing error by placing points that experience less error in the center of the image and points with more error on the periphery. Lastly, we determined areas where viewers would experience less viewing error, motivating practical applications like theater construction.

Furthermore, the theory can be developed further for generalized conics, and we can investigate what happens when projectivities fix only one, or none, of the harmonic points. With fewer constraints, the perceived error is likely to be much higher. This paper gives others with an interest in projecting onto curved surfaces a jumping-off point. Our scope was rather narrow in requiring that the screen be circular, but the examination of harmonic points proved extremely useful in describing three-dimensional perspective.
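The "three points imply a fourth" relationship above can be checked numerically: four collinear points A, B, C, D form a harmonic set when their cross-ratio (A,B;C,D) equals -1, so any three of them determine the fourth. A minimal Python sketch on a number line (the coordinates are illustrative, not from the experiments):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear points given by 1-D coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def harmonic_conjugate(a, b, c):
    """Point D with (A,B;C,D) = -1, the harmonic conjugate of C with respect to A, B."""
    # Solve (c - a)(d - b) = -(c - b)(d - a) for d
    num = 2.0 * a * b - c * (a + b)
    den = a + b - 2.0 * c
    return num / den

d = harmonic_conjugate(0.0, 1.0, 0.25)  # -> -0.5, and (A,B;C,D) = -1
```

Because projectivities between lines preserve the cross-ratio, the conjugate computed from the projected positions of A, B, and C should match the projection of D; the angular difference when it does not is exactly the error this paper measures.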
9 Appendix

9.1 Experiment I

Figure 7: Slope of 1.5. The projection of F' onto the x-axis is 6.99. The perceived error in the angle of point F is 3.25 degrees; in point J, 7.56 degrees.

Figure 8: Slope of 1.25. The projection of F' onto the x-axis is 6.98. The perceived error in the angle of point F is 3.64 degrees; in point J, 7.56 degrees.

Figure 9: Slope of 1. The projection of F' onto the x-axis is 6.95. The perceived error in the angle of point F is 4.19 degrees; in point J, 7.56 degrees.

Figure 10: Slope of 0.75. The projection of F' onto the x-axis is 6.91. The perceived error in the angle of point F is 5.01 degrees; in point J, 7.56 degrees.

Figure 11: Slope of 0.5. The projection of F' onto the x-axis is 6.83. The perceived error in the angle of point F is 6.38 degrees; in point J, 7.56 degrees.

9.2 Experiment II

Figure 12: Slope of 4. J = 4.25, F = 10.2. The projection of F' onto the x-axis is 6.98. The perceived error in the angle of point F is 3.64 degrees; in point J, 7.56 degrees.

Figure 13: Slope of -4. J = 3.75, F = 15. The projection of F' onto the x-axis is 6.87. The perceived error in the angle of point F is 14.61 degrees; in point J, 1.83 degrees.

Figure 14: Slope of -2. J = 3.5, F = 21. The projection of F' onto the x-axis is 6.84. The perceived error in the angle of point F is 20.4 degrees; in point J, 1.25 degrees.
9.3 Matlab Code

function [XF,JBASELINE,FCIRCLE,JERROR,FERROR] = conicharmonic(ma1,ma2,jx,jy,mj1,b,viewerx,viewery)
%Come back and define output
%Assume point A is the origin
%Create baseline
space = linspace(-1000,1000,10000);
baseline = 0*space;
%plot(space,baseline);

%Find circle that circumscribes triangle AJB
[xc,yc,r] = circumscribe(0,0,jx,jy,b,0);
%Find y-intercept of line J
bj = yintercept(mj1,jx,jy);
%Find where line J intersects BASELINE
JBASE = ones(1,2);
JBASE = linsolve([-mj1 1; 0 1],[bj; 0]);
JBASELINE = JBASE(1);
disp('Intersection of line J with baseline');
disp(JBASELINE);

%Cast ray of slope ma1 from point A
y1 = ma1*space;
%plot(space,y1);
%Cast ray of slope ma2 from point A
y2 = ma2*space;
%plot(space,y2);
%Draw line of slope mj1 from point J
%First find y-intercept of line from J
bj = yintercept(mj1,jx,jy);
y3 = mj1*space + bj;
%plot(space,y3);
%Find where line from J intersects the two A rays at P and Q
P = linsolve([-ma1 1; -mj1 1],[0; bj]);
Q = linsolve([-ma2 1; -mj1 1],[0; bj]);
%Connect point P to point B
[mp,bp] = pointline([b P(1)],[0 P(2)]);
y4 = mp*space + bp;
%plot(space,y4);
%Connect point Q to point B
[mq,bq] = pointline([b Q(1)],[0 Q(2)]);
y5 = mq*space + bq;
%plot(space,y5);
%Identify our harmonic conjugate determinant points
K = linsolve([-ma2 1; -mp 1],[0; bp]);
L = linsolve([-ma1 1; -mq 1],[0; bq]);
%Use pointline method to find line between K and L and draw
[mKL,bKL] = pointline([K(1) L(1)],[K(2) L(2)]);
y6 = mKL*space + bKL;
%plot(space,y6);
%Identify and print point F (intersection with baseline)
F = linsolve([-mKL 1; 0 1],[bKL; 0]);
XF = F(1);
disp('F on baseline');
disp(XF);
%Find where line F intersects CIRCLE
[FCIRx,FCIRy] = linecirc(mKL,bKL,xc,yc,r);
FCIRCLE = FCIRx(1);
disp('Projection of intersection of F with circle onto X axis');
disp(FCIRCLE);

%Figure out perceived angle difference in point F
Fangle1 = viewerangle(viewerx,viewery,FCIRx(1),FCIRy(1),XF,0);
Fangle2 = viewerangle(viewerx,viewery,FCIRx(2),FCIRy(2),XF,0);
%Choose the right intersection of F with the conic
if Fangle1 < 90
    FERROR = Fangle1;
else
    FERROR = Fangle2;
end
disp('perceived angle distance in harmonic point F');
disp(FERROR);
%Figure out perceived angle difference in point J
JERROR = viewerangle(viewerx,viewery,jx,jy,JBASELINE,0);
disp('perceived angle distance in harmonic point J');
disp(JERROR);

figure
%Plot CIRCLE
viscircles([xc yc],r);
hold on;
plot(space,baseline,space,y1,space,y2,space,y3,space,y4,space,y5,space,y6);
hold on;
plot(viewerx,viewery,'*');
disp('viewerx');
disp(viewerx);
disp('viewery');
disp(viewery);
axis([-20,40,-20,20]);
end
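The conicharmonic function calls a helper, circumscribe, whose listing is not included above; its job is to find the circle through A, J, and B. A possible Python equivalent, solving for the circumcenter in determinant form (this is a hypothetical reconstruction, not the original helper):

```python
import math

def circumscribe(x1, y1, x2, y2, x3, y3):
    """Center (xc, yc) and radius r of the circle through three non-collinear points."""
    # Circumcenter from the intersection of perpendicular bisectors
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no circumscribing circle")
    s1 = x1 * x1 + y1 * y1
    s2 = x2 * x2 + y2 * y2
    s3 = x3 * x3 + y3 * y3
    xc = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    yc = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = math.hypot(x1 - xc, y1 - yc)
    return xc, yc, r

# Mirroring the Matlab call circumscribe(0,0,jx,jy,b,0) with jx=4, jy=-1, b=6
xc, yc, r = circumscribe(0.0, 0.0, 4.0, -1.0, 6.0, 0.0)
```

The determinant d vanishes exactly when A, J, and B are collinear, which is why the sketch guards against that case; the Matlab version would face the same degeneracy.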
References

[1] "Chapter 3 - OpenGL Programming Guide." OpenGL Programming Guide. N.p., n.d. Web. 07 July 2016.

[2] Kuchera-Morin, JoAnn. "The AlloSphere at the California NanoSystems Institute, UC Santa Barbara." The Regents of the University of California, 2010. Web. 07 July 2016.

[3] Coxeter, H. S. M. Introduction to Geometry. New York: Wiley, 1989. Print.

[4] Coxeter, H. S. M. Projective Geometry. New York: Springer, 2003. Print.

[5] Casselman, Bill. "Feature Column from the AMS." American Mathematical Society.

[6] Rorig, Thilo. "Lecture 7." Lecture Notes for Geometry I. T.U. Berlin, 11 June 2012. Web. 19 July 2016. http://dgd.service.tu-berlin.de/wordpress/geometryws12/2012/11/06/lecture-7/.

[7] Weisstein, Eric W. "Pencil." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Pencil.html.

[8] "Homography." Wikipedia. Wikimedia Foundation, n.d. Web. 19 July 2016. https://en.wikipedia.org/wiki/Homography.