New geometric interpretation and analytic solution for quadrilateral reconstruction (ICPR-2014)

Accepted as a poster presentation at ICPR 2014, Stockholm, Sweden, August 24~28, 2014.

[Revised Version]

Title: New geometric interpretation and analytic solution for quadrilateral reconstruction

Author: Joo-Haeng Lee
Affiliation: Human-Robot Interaction Research Team, ETRI, KOREA

Abstract:
A new geometric framework, called generalized coupled line camera (GCLC), is proposed to derive an analytic solution to reconstruct an unknown scene quadrilateral and the relevant projective structure from a single or multiple image quadrilaterals. We extend the previous approach, developed for rectangles, to handle arbitrary scene quadrilaterals. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge on the quadrilateral is available. A completely unknown quadrilateral can be reconstructed from four views through non-linear optimization. We also describe an improved method to handle an off-centered case by geometrically inferring a centered proxy quadrilateral, which accelerates the reconstruction process without relying on homography. The proposed method is easy to implement since each step is expressed as a simple analytic equation. We present experimental results on real and synthetic examples.

[Submitted Version]

Title: Generalized Coupled Line Cameras and Application in Quadrilateral Reconstruction

Abstract:
Coupled line camera (CLC) provides a geometric framework to derive an analytic solution to reconstruct an unknown scene rectangle and the relevant projective structure from a single image quadrilateral. We extend this approach as generalized coupled line camera (GCLC) to handle a scene quadrilateral. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge on the quadrilateral is available. ...


New Geometric Interpretation and Analytic Solution for Quadrilateral Reconstruction

Joo-Haeng Lee
Convergence Technology Research Lab, ETRI, Daejeon, 305-777, KOREA

Abstract: A new geometric framework, called generalized coupled line camera (GCLC), is proposed to derive an analytic solution to reconstruct an unknown scene quadrilateral and the relevant projective structure from a single or multiple image quadrilaterals. We extend the previous approach, developed for rectangles, to handle arbitrary scene quadrilaterals. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge on the quadrilateral is available. A completely unknown quadrilateral can be reconstructed from four views through non-linear optimization. We also describe an improved method to handle an off-centered case by geometrically inferring a centered proxy quadrilateral, which accelerates the reconstruction process without relying on homography. The proposed method is easy to implement since each step is expressed as a simple analytic equation. We present experimental results on real and synthetic examples.

I. INTRODUCTION

A new geometric framework, called generalized coupled line camera (GCLC), is proposed to derive an analytic solution to reconstruct an unknown scene quadrilateral and the relevant projective structure from a single or multiple image quadrilaterals. We extend the previous approach, called coupled line camera (CLC), which models the rectangular frustum of a pinhole camera using two line cameras [1], [2]. (A line camera in our context does not refer to a capturing device such as a line-scan camera. Rather, our geometric configuration is more closely related to modeling approaches based on linear elements for camera calibration [3] or multi-perspective images [4].)

Under the CLC configuration, the geometric relation among the base rectangle, the image quadrilateral and the optical center can be comprehensively described by simple equations over a compact parameter set. Hence, given a single image quadrilateral, we can uniquely identify the frustum by reconstructing the base rectangle and the optical center using a closed-form solution. The solution also contains a determinant that tells whether an image quadrilateral is the projection of any rectangle, prior to reconstruction.

In CLC-based reconstruction, no explicit camera parameters are involved, since the formulation is based on the pure geometric configuration of a pinhole projection. In application, an image quadrilateral is represented by a set of diagonal parameters (i.e., the relative lengths of the partial diagonals and the crossing angle) rather than actual pixel coordinates. If required, unknown camera parameters such as the focal length can be computed subsequently using a standard calibration technique [5], [6]. In general, the previous solutions require the camera parameters to be reconstructed first [7]. For example, when we apply the IAC (image of the absolute conic) method, the unknown focal length should be found first [5], [8].

Another interesting feature of CLC-based reconstruction is the geometric interpretation of the solution space, which leads to an optimized analytic solution [2].
For example, given an image quadrilateral, two candidate line cameras are defined over two solution spheres. By the constraint of a common principal axis, the spheres are confined to two solution circles. Finally, the optical center is found at the intersection of the two solution circles. We believe a similar geometric framework can be applied to other geometric computer vision problems, such as investigating the solution space of n-view reconstruction.

In this paper, we propose the generalized coupled line camera (GCLC), which inherits the key features of CLC and models a projective frustum with a quadrilateral base, targeting the prospective application of projective reconstruction of an unknown scene quadrilateral. While keeping the same centering constraint of CLC, that the principal axis passes through the center of the quadrilateral, we extend the model with additional parameters that describe the lengths of all partial diagonals. In CLC, these parameters need not be specified since they cancel out due to the equilateral partial diagonals of a rectangle [1], [2]. The increased number of configuration parameters in GCLC, however, makes it difficult to formulate a closed-form solution for single-view reconstruction. We investigate this property and propose an analytic solution that works for single-view reconstruction under special conditions, and a method to approximate the unknown diagonal parameters from multiple views.

For practical application of the CLC framework, we need to handle an off-centered case. In this paper, we also propose an improved method composed of simpler operations based on geometric properties, not relying on constrained equation solving or an explicit homography as in [1].

This paper is organized as follows. In Section II, we summarize the previous work on CLC [1], [2]. In Section III, we generalize CLC and describe the reconstruction solution, including off-centered cases. In Section IV, we give experimental results on synthetic and real quadrilaterals to demonstrate the performance. Finally, we conclude with remarks on future work.
Fig. 1. An example of a canonical line camera: m0 = m2 = 1, l0 = 0.6, l2 = 0.4, and α = 0.2. (a) Camera pose when d = 1.7. (b) Circular trajectory of p_c for varying d.

II. PRELIMINARIES OF COUPLED LINE CAMERAS

A. Line Camera

Definition 1. A line camera captures an image line u_i u_{i+2} from a scene line v_i v_{i+2}, where v_i = (m_i, 0, 0) and v_{i+2} = (-m_{i+2}, 0, 0) for positive m_i and m_{i+2}. See Figure 1a.

Definition 2. In a centered line camera, the principal axis passes through the center v_m of the scene line v_i v_{i+2}:

$v_m = (v_i + v_{i+2})/2$   (1)

Definition 3. A canonical line camera is a centered line camera with two constraints for simple formulation: v_m = (0, 0, 0)^T and equilateral unit division:

$\|v_i - v_m\| = \|v_i\| = \|v_{i+2}\| = 1$   (2)

For a line camera C_i, let d be the length of the principal axis from the center of projection p_c to v_m. Let θ_i be the orientation angle of the principal axis, measured between v_m p_c and v_m v_i.

Definition 4. For a canonical line camera, its pose equation is expressed as follows:

$\cos\theta_i = \left(\frac{l_i - l_{i+2}}{l_i + l_{i+2}}\right) d = \alpha_i\, d$   (3)

where l_i = ||u_i - u_m|| is the length of a partial diagonal. Let α_i be the line division coefficient of the canonical configuration:

$\alpha_i = \frac{l_i - l_{i+2}}{l_i + l_{i+2}}$   (4)

According to Eq.(3), we can observe the relation among θ_i, d and α_i. Note that when α_i is fixed, p_c is defined along a circular trajectory, or on a solution sphere of radius 0.5/|α_i|. See Figure 1b.

B. Coupled Line Cameras

Definition 5. A coupled line camera is a pair of line cameras that share the principal axis and the center of projection.

By coupling two canonical line cameras, we can represent a projective structure with a rectangular base. See Figure 2.

Definition 6. For a coupled line camera, we can derive a coupling constraint:

$\lambda = \frac{l_1}{l_0} = \frac{\tan\psi_1}{\tan\psi_0} = \frac{\sin\theta_1\,(d - \cos\theta_0)}{\sin\theta_0\,(d - \cos\theta_1)}$   (5)

where ψ_i denotes the angle at p_c between the principal axis and the ray toward v_i (Figure 1a), and λ is the coupling coefficient defined by the ratio of the lengths, l_0 and l_1, of two partial diagonals of Q. See Figure 2f.

Fig. 2. Coupling of two canonical line cameras to represent a projective structure with a rectangular base. (a) Scene rectangle G. (b) 1st line camera C0. (c) 2nd line camera C1. (d) Coupling C0 and C1. (e) Projective structure. (f) Projection of G to Q.

C. Projective Reconstruction

Algorithm 1 (Single View Reconstruction with CLC). The unknown elements of the projective structure, such as the scene rectangle G and the center of projection p_c, can be reconstructed from a single image quadrilateral Q as follows.

First, the pose equation of Eq.(3) and the coupling constraint of Eq.(5) can be rearranged into a system of equations:

$d = \frac{\lambda\sin\theta_0\cos\theta_1 - \cos\theta_0\sin\theta_1}{\lambda\sin\theta_0 - \sin\theta_1} = \frac{\cos\theta_0}{\alpha_0} = \frac{\cos\theta_1}{\alpha_1}$   (6)

Then, the length d of the common principal axis can be computed from the system of equations in Eq.(6) as follows:

$d = \sqrt{A_0 / A_1}$   (7)

where $A_0 = \lambda^2 (1 - \alpha_1)^2 - (1 - \alpha_0)^2$ and $A_1 = \lambda^2 \alpha_0^2 (1 - \alpha_1)^2 - (1 - \alpha_0)^2 \alpha_1^2$.

Once d is computed, the two orientation angles, θ_0 and θ_1, can be computed using Eq.(3). The base rectangle G can be reconstructed by computing its unknown shape parameter, the diagonal angle φ:

$\cos\phi = \cos\rho\,\sin\theta_0\sin\theta_1 + \cos\theta_0\cos\theta_1$   (8)

where ρ is the diagonal angle of the image quadrilateral Q. Finally, the projective structure can be reconstructed by computing the coordinates of the center of projection p_c:

$p_c = \frac{d}{\sin\phi}\left(\sin\phi\cos\theta_0,\;\; \cos\theta_1 - \cos\phi\cos\theta_0,\;\; \sin\rho\,\sin\theta_0\sin\theta_1\right)$   (9)
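Algorithm 1 is short enough to prototype directly. The following is a minimal Python/NumPy sketch (function and variable names are ours, not the paper's). It takes the measured partial-diagonal lengths l_0..l_3 and the diagonal angle ρ of Q and returns d, θ_0, θ_1, φ and p_c, assuming the determinant condition of the next subsection holds so that the square root and arccosines are valid.

```python
import numpy as np

def reconstruct_clc(l0, l1, l2, l3, rho):
    """Single-view CLC reconstruction (Algorithm 1) from the diagonal
    parameters of the image quadrilateral Q. Names are illustrative."""
    a0 = (l0 - l2) / (l0 + l2)          # line division coefficients, Eq.(4)
    a1 = (l1 - l3) / (l1 + l3)
    lam = l1 / l0                       # coupling coefficient, Eq.(5)
    A0 = lam**2 * (1 - a1)**2 - (1 - a0)**2
    A1 = lam**2 * a0**2 * (1 - a1)**2 - (1 - a0)**2 * a1**2
    d = np.sqrt(A0 / A1)                # common principal axis, Eq.(7)
    th0, th1 = np.arccos(a0 * d), np.arccos(a1 * d)   # pose equation, Eq.(3)
    # Diagonal angle of the scene rectangle, Eq.(8)
    phi = np.arccos(np.cos(rho) * np.sin(th0) * np.sin(th1)
                    + np.cos(th0) * np.cos(th1))
    # Center of projection, Eq.(9)
    pc = d / np.sin(phi) * np.array([
        np.sin(phi) * np.cos(th0),
        np.cos(th1) - np.cos(phi) * np.cos(th0),
        np.sin(rho) * np.sin(th0) * np.sin(th1)])
    return d, th0, th1, phi, pc
```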
D. Determinant Condition

For Eq.(7) to yield a valid value, two conditions should be satisfied: (1) A_0 and A_1 have the same sign; and (2) the length d of the common principal axis should not exceed the diameter
of each solution sphere: d ≤ min(1/|α_0|, 1/|α_1|). These conditions can be combined into Boolean expressions:

$D = D_0 \vee D_1$   (10)

$D_0 = \left(\lambda \ge \frac{1 - \alpha_0}{1 - \alpha_1}\right) \wedge \left(1 \le \left|\frac{\alpha_0}{\alpha_1}\right|\right)$   (11)

$D_1 = \left(\lambda \le \frac{1 - \alpha_0}{1 - \alpha_1}\right) \wedge \left(1 \ge \left|\frac{\alpha_0}{\alpha_1}\right|\right)$   (12)

where ∧ and ∨ are the Boolean and and or operations, respectively. Since α_0, α_1 and λ are coefficients obtained from a given image quadrilateral Q, we can determine whether Q is an image of any scene rectangle before actual reconstruction. Once the determinant D is satisfied, Algorithm 1 can be applied.

E. Off-Centered Case

CLC assumes that the principal axis passes through the centers of the image quadrilateral Q and the scene rectangle G. When handling an off-centered quadrilateral Q_g, a centered proxy quadrilateral Q should be found first by solving equations that formulate edge parallelism between Q and Q_g, the centering constraint of Q, and a vanishing line derived from Q_g [1]. Once Q is found, the centered proxy rectangle G can be reconstructed using Algorithm 1. Since the inferred Q does not guarantee congruency to Q_g, the target scene rectangle G_g should be reconstructed using a homography H between Q and G: G_g = H Q_g.

In this paper, we propose a new method to handle an off-centered case. First, we derive a centered proxy quadrilateral Q that is perspectively congruent to Q_g. Then, we show that the target scene rectangle G_g can be geometrically derived without relying on homography. See Section III-E.

III. GENERALIZATION OF COUPLED LINE CAMERAS

As the main contribution of this paper, we generalize a line camera to support a non-canonical configuration. Then, we show that a pair of generalized line cameras can be coupled to represent a projective structure with a quadrilateral base other than a rectangle. Finally, we describe how to reconstruct a projective structure from a single view when sufficient prior knowledge is available to constrain the solution space. We also describe how to handle off-centered cases.

Fig. 3. An example of a generalized line camera: m0 = 1, m2 = 1.4, l0 = 0.6, l2 = 0.4, and α = 0.2. (a) Camera pose when d = 1.7. (b) Trajectory of p_c when d is not fixed.

Fig. 4. Coupling of two generalized line cameras to represent a projective structure with a quadrilateral base. A generalized line camera C_i is assigned to each diagonal of a scene quadrilateral G. (a) Scene quadrilateral G. (b) 1st line camera C0. (c) 2nd line camera C1. (d) Coupling C0 and C1. (e) Projective structure. (f) Projection of G to Q.

A. Generalized Line Camera

Definition 7. In a general configuration of a line camera, the principal axis may not bisect the scene line: we do not impose the centering constraints of Eqs.(1)-(2). See Figure 3, where m0 ≠ m2.

Accordingly, the pose equation of a canonical line camera in Eq.(3) should be generalized with two additional parameters, m0 and m2. Assuming m0 > 0 and m2 > 0, the following geometric relation holds:

$l_i : l_{i+2} = \frac{m_i \sin\theta_i\, d}{d - \hat{d}_i} : \frac{m_{i+2} \sin\theta_i\, d}{d + \hat{d}_{i+2}}$   (13)

where $\hat{d}_i = m_i \cos\theta_i$ and $\hat{d}_{i+2} = m_{i+2} \cos\theta_i$.

Definition 8. The generalized pose equation can be derived from Eq.(13):

$\cos\theta_i = \left(\frac{m_{i+2} l_i - m_i l_{i+2}}{m_i m_{i+2} (l_i + l_{i+2})}\right) d = \alpha_{g,i}\, d$   (14)

where α_{g,i} is the generalized division coefficient

$\alpha_{g,i} = \frac{m_{i+2} l_i - m_i l_{i+2}}{m_i m_{i+2} (l_i + l_{i+2})}$   (15)

For a fixed α_{g,i}, the center of projection p_c is defined over a circular trajectory as in Figure 3b, or on a solution sphere [2].
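As a quick illustration of Definition 8, the following sketch (names ours, not the paper's) evaluates the generalized division coefficient of Eq.(15); with m_i = m_{i+2} = 1 it reduces to the canonical coefficient of Eq.(4).

```python
def generalized_alpha(l_i, l_i2, m_i=1.0, m_i2=1.0):
    """Generalized line-division coefficient alpha_{g,i} of Eq.(15).
    With m_i = m_i2 = 1 this is the canonical alpha_i of Eq.(4)."""
    return (m_i2 * l_i - m_i * l_i2) / (m_i * m_i2 * (l_i + l_i2))

# Canonical example of Fig. 1: alpha = (0.6 - 0.4) / (0.6 + 0.4) = 0.2,
# so by Eq.(14) the center of projection lies on a sphere of radius 0.5 / 0.2.
alpha_canonical = generalized_alpha(0.6, 0.4)
# An off-centered configuration (m2 = 1.4) changes the coefficient.
alpha_general = generalized_alpha(0.6, 0.4, 1.0, 1.4)
```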
B. Coupling Generalized Line Cameras

By coupling two generalized line cameras, we can represent a projective structure with a quadrilateral base G with vertices v_0 = m_0 (1, 0), v_1 = m_1 (cos φ, sin φ), v_2 = -(m_2/m_0) v_0, and v_3 = -(m_3/m_1) v_1, where the m_i are the relative lengths of the partial diagonals, or the diagonal parameters, of G. See Figure 4.

Definition 9. The generalized coupling constraint λ_g is defined as follows:

$\lambda_g = \frac{l_1}{l_0} = \frac{m_1 \sin\theta_1\,(d - m_0\cos\theta_0)}{m_0 \sin\theta_0\,(d - m_1\cos\theta_1)}$   (16)
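For concreteness, the vertex parametrization above can be written out directly. Below is a small sketch (function name ours, NumPy assumed) that builds G from its diagonal parameters; the example values reproduce the synthetic quadrilateral used later in Fig. 5.

```python
import numpy as np

def quad_from_diagonal_params(m, phi):
    """Vertices of a scene quadrilateral G from its diagonal parameters
    m = (m0, m1, m2, m3) and diagonal angle phi, with the diagonal
    crossing point v_m placed at the origin."""
    m0, m1, m2, m3 = m
    v0 = m0 * np.array([1.0, 0.0])
    v1 = m1 * np.array([np.cos(phi), np.sin(phi)])
    v2 = -(m2 / m0) * v0
    v3 = -(m3 / m1) * v1
    return np.array([v0, v1, v2, v3])

# Synthetic quadrilateral of Fig. 5: m = (1, 0.75, 1.35, 1.4), phi = 1.35.
G = quad_from_diagonal_params((1.0, 0.75, 1.35, 1.4), 1.35)
```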
C. Projective Reconstruction

Using a trigonometric identity and the pose equation of Eq.(14), we can derive an equation for λ_g² by squaring both sides of Eq.(16):

$\sin^2\theta_i = 1 - \cos^2\theta_i = 1 - \alpha_{g,i}^2 d^2$   (17)

$\lambda_g^2 = \frac{m_1^2 (1 - m_0\alpha_{g,0})^2 (1 - \alpha_{g,1}^2 d^2)}{m_0^2 (1 - m_1\alpha_{g,1})^2 (1 - \alpha_{g,0}^2 d^2)}$   (18)

From Eq.(18), the length d of the common principal axis can be expressed with the GCLC parameters:

$d = \sqrt{A_{g,0} / A_{g,1}}$   (19)

where $A_{g,0} = m_0^2 (1 - m_1\alpha_{g,1})^2 \lambda_g^2 - m_1^2 (1 - m_0\alpha_{g,0})^2$ and $A_{g,1} = m_0^2 \alpha_{g,0}^2 (1 - m_1\alpha_{g,1})^2 \lambda_g^2 - m_1^2 (1 - m_0\alpha_{g,0})^2 \alpha_{g,1}^2$.

Eq.(19) states that d can be computed from the known diagonal parameters, m_i and l_i, of a single pair of scene and image quadrilaterals, without relying on their diagonal angles, φ and ρ.

Algorithm 2 (Single View Reconstruction with GCLC). Once the length d of the common principal axis has been found using Eq.(19) with prior knowledge of the diagonal parameters, we can compute the orientation angles, θ_0 and θ_1, using the pose equation of Eq.(14). Then, the diagonal angle φ of the scene quadrilateral and the center of projection p_c can be computed using Eqs.(8) and (9), respectively. □

If we have no prior knowledge of the diagonal parameters m_i of G, we can infer them using multiple image quadrilaterals Q_j from different views. By setting m_0 = 1, the number of unknown diagonal parameters of G is reduced to three: m_1, m_2 and m_3. For each Q_j, the crossing angle φ_j of Eq.(8) is expressed with m_1, m_2 and m_3 and with coefficients derived from the known diagonal parameters l_{i,j} of Q_j. Since the reconstructed φ_j should be identical regardless of the view, the following identity should hold: cos φ_j = cos φ_{j+1}. Hence, if we have four different views, we can formulate three equations in three unknowns, m_1, m_2 and m_3:

$\cos\phi_0 = \cos\phi_1 = \cos\phi_2 = \cos\phi_3$   (20)

The number of required views varies according to the degrees of freedom in the diagonal parameters. Although an analytic solution of Eq.(20) has not been found yet, the problem can be formulated as minimization of the following objective function:

$f_{obj} = \sum_{j=0}^{n-1} \left\|\cos\phi_j - \cos\phi_{j+1}\right\|^2$   (21)

where n is the number of views. Generally, Eq.(21) can be solved using a numerical non-linear optimization method [9]. Since the optimization may get stuck in a local minimum, we may check the validity of the result using the determinant condition of Eqs.(22)-(24).

Algorithm 3 (n-View Reconstruction with GCLC). When Algorithm 2 cannot be applied due to a lack of knowledge of the scene quadrilateral G, but multiple image quadrilaterals Q_j from n different views are available, we can find the unknown m_i by minimizing the objective function of Eq.(21). Then, we can apply Algorithm 2 to one of the views to reconstruct the projective structure. □

The number of views required in Algorithm 3 depends on the number of unknown m_i. For a general quadrilateral with three unknown m_i (taking m_0 = 1), at least 4 views are required according to Eq.(20). For a parallelogram with known m_0 = m_2 = 1 and unknown m_1 = m_3, at least 2 views are required to find m_1. See Section IV for real examples.

Fig. 5. Reconstruction of a synthetic quadrilateral G_g from an off-centered quadrilateral Q_g: m0 = 1, m1 = 0.75, m2 = 1.35, m3 = 1.4 and φ = 1.35. The diagonal parameters m_i and the vanishing line are given. (a) Reference: G_g and Q_g. (b) Inferring a centered Q (in blue). (c) Reconstruction of G and G_g. (d) Congruency of G and G_g.
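The n-view estimation of Algorithm 3 is easy to prototype. The sketch below is a minimal Python version (our own naming; SciPy's Nelder-Mead stands in for the Mathematica NMinimize[] call used in Section IV): each view contributes a cos φ value via Eqs.(15), (16), (19), (14) and (8), and the objective of Eq.(21) is minimized over (m_1, m_2, m_3). The determinant check of the next subsection is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def cos_phi_gclc(m, view):
    """cos(phi) reconstructed from one view. m = (m0, m1, m2, m3);
    view = (l0, l1, l2, l3, rho), measured in the (centered) image quad."""
    m0, m1, m2, m3 = m
    l0, l1, l2, l3, rho = view
    a0 = (m2 * l0 - m0 * l2) / (m0 * m2 * (l0 + l2))   # Eq.(15)
    a1 = (m3 * l1 - m1 * l3) / (m1 * m3 * (l1 + l3))
    lam = l1 / l0                                      # Eq.(16)
    A0 = m0**2 * (1 - m1 * a1)**2 * lam**2 - m1**2 * (1 - m0 * a0)**2
    A1 = (m0**2 * a0**2 * (1 - m1 * a1)**2 * lam**2
          - m1**2 * (1 - m0 * a0)**2 * a1**2)
    d = np.sqrt(A0 / A1)                               # Eq.(19)
    c0, c1 = a0 * d, a1 * d                            # Eq.(14)
    s0, s1 = np.sqrt(1 - c0**2), np.sqrt(1 - c1**2)
    return np.cos(rho) * s0 * s1 + c0 * c1             # Eq.(8)

def f_obj(m123, views):
    """Objective of Eq.(21), with m0 fixed to 1."""
    m = (1.0, *m123)
    c = [cos_phi_gclc(m, v) for v in views]
    return sum((c[j] - c[(j + 1) % len(c)]) ** 2 for j in range(len(c)))

# Usage with measured views (hypothetical data: four centered proxy quads):
# result = minimize(f_obj, x0=[1.0, 1.0, 1.0], args=(views,), method="Nelder-Mead")
# m1, m2, m3 = result.x
```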
D. Determinant Condition

Similarly to Section II-D, we can derive from Eqs.(14) and (19) a condition D_g that determines whether Q is the projection of a centered scene quadrilateral G with known m_i:

$D_g = D_{g,0} \vee D_{g,1}$   (22)

$D_{g,0} = \left(\lambda_g \ge \frac{m_1 (1 - m_0\alpha_{g,0})}{m_0 (1 - m_1\alpha_{g,1})}\right) \wedge \left(1 \le \left|\frac{\alpha_{g,0}}{\alpha_{g,1}}\right|\right)$   (23)

$D_{g,1} = \left(\lambda_g \le \frac{m_1 (1 - m_0\alpha_{g,0})}{m_0 (1 - m_1\alpha_{g,1})}\right) \wedge \left(1 \ge \left|\frac{\alpha_{g,0}}{\alpha_{g,1}}\right|\right)$   (24)

E. Off-Centered Case

Let an off-centered image quadrilateral Q_g be the projection of a scene quadrilateral G_g, which is also off-centered and still unknown. See Fig. 5a. To apply Algorithms 2 and 3, we provide a method to find a centered proxy quadrilateral Q that is an image of a centered scene quadrilateral G. Specifically, G is guaranteed to be congruent to G_g through a parallel translation by t. We also show that the translation vector t can be computed in image space. Hence, we do not need to compute the homography H between G and Q to reconstruct G_g as in CLC. See Section II-E and [1].

Algorithm 4 (Reconstruction from an Off-Centered Quadrilateral). An off-centered scene quadrilateral G_g can be reconstructed from its image Q_g by adding extra steps to the GCLC methods presented in Section III-C. See Figure 5:
1) Infer a centered proxy quadrilateral Q from Q_g such that Q is the projection of a centered scene quadrilateral G that is congruent to the target quadrilateral G_g. See Algorithm 5.
2) Apply Algorithm 2 to Q to reconstruct the corresponding centered quadrilateral G and the center of projection p_c. If multiple Q_{g,j} are available, apply Algorithm 3.
3) The target scene quadrilateral G_g can be computed as a translation of G: G_g = G + t, where t can be computed from the displacement s = u_m - o_m between the centers of Q and Q_g using Algorithm 6.

Fig. 6. Derivation of a centered proxy quadrilateral Q that is perspectively congruent to Q_g. Assume the vanishing line w_0 w_1 is given.

Algorithm 5 (Centered Proxy Quadrilateral). Assuming a vanishing line w_0 w_1 is given, we can find a centered proxy quadrilateral Q by perspectively translating an off-centered quadrilateral Q_g. See Figure 6:
1) Find the intersection points w_{d,i} between the vanishing line w_0 w_1 and each diagonal u_{g,i} u_{g,i+2} of Q_g.
2) Find the intersection point w_m between the vanishing line w_0 w_1 and the line of translation o_m u_m.
3) Find the intersection point u_0 between the line u_{g,0} w_m and the line o_m w_{d,0}. Similarly, find u_2 from u_{g,2} w_m and o_m w_{d,0}.
4) Find the intersection point u_1 between the line u_{g,1} w_m and the line o_m w_{d,1}. Similarly, find u_3 from u_{g,3} w_m and o_m w_{d,1}.
5) The i-th vertex of Q is u_i. □

Note that Algorithm 5 is composed of simple line-line intersections rather than the geometric constraint solving of [1].

Algorithm 6 (Perspective-to-Euclidean Vector Transformation). With a GCLC defined by known Q and G (as in Fig. 4), we can project an image vector s to a scene vector t. First, we perspectively decompose s along the two diagonals of Q:
1) Find the intersection point u_{s,0} between the line u_0 o_m and the line u_m w_{d,1}. Similarly, find u_{s,1} from u_1 o_m and u_m w_{d,0}.
2) For each decomposition coefficient s_i of u_{s,i}, compute the coefficient t_i for v_i using Eq.(26).
3) The corresponding scene vector t can be expressed as the vector sum of two diagonal vectors of G, t_0 v_0 + t_1 v_1, assuming v_m = 0. See Fig. 4b. □

Algorithm 6 is based on the following property of a generalized line camera.

Fig. 7. Perspective-to-Euclidean vector transformation.

Fig. 8. Scaling transformation in a generalized line camera, explained as a cross-ratio between four corresponding points.

Using the projective invariance of the cross-ratio [8], the following holds for the two sets of collinear points, (v_{t,0}, v_0, v_m, v_2) and (u_{s,0}, u_0, u_m, u_2), on the scene and image lines, respectively:

$\frac{s_i l_i (l_i + l_{i+2})}{l_i (s_i l_i + l_{i+2})} = \frac{t_i m_i (m_i + m_{i+2})}{m_i (t_i m_i + m_{i+2})}$   (25)

where s_i = ||u_{s,i} - u_m|| / l_i and t_i = ||v_{t,i} - v_m|| / m_i. (See Fig. 8.) By solving Eq.(25) for t_i, we get the following relation between t_i and s_i:

$t_i = \frac{s_i m_{i+2} (l_i + l_{i+2})}{s_i m_{i+2} l_i + ((1 - s_i) m_i + m_{i+2}) l_{i+2}}$   (26)

Hence, if a line camera is defined, a scaling factor s_i on the image line can be mapped to t_i on the scene line, and vice versa.
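The mapping of Eq.(26) is a one-liner in practice; the following sketch (name ours) converts an image-space factor s_i along one partial diagonal into the scene-space factor t_i.

```python
def scale_image_to_scene(s_i, l_i, l_i2, m_i, m_i2):
    """Map an image-space scaling factor s_i along one partial diagonal to
    the scene-space factor t_i via the cross-ratio relation of Eq.(26)."""
    return (s_i * m_i2 * (l_i + l_i2)) / (
        s_i * m_i2 * l_i + ((1.0 - s_i) * m_i + m_i2) * l_i2)

# Sanity checks: s_i = 0 maps to t_i = 0 and s_i = 1 maps to t_i = 1,
# i.e. u_m maps to v_m and u_i maps to v_i, as expected.
assert scale_image_to_scene(0.0, 0.6, 0.4, 1.0, 1.4) == 0.0
assert abs(scale_image_to_scene(1.0, 0.6, 0.4, 1.0, 1.4) - 1.0) < 1e-12
```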
IV. EXPERIMENT

We give experimental results on real and synthetic examples. All experiments were performed with Mathematica implementations.

We applied Algorithm 4 to real-world quadrilaterals found in web images of modern architecture. We assume each image was taken independently by an unknown camera and was not altered (e.g., by cropping). Each input quadrilateral Q_{g,j} is specified by red lines in Fig. 9a and Fig. 10a. To infer a centered proxy quadrilateral Q_j using Algorithm 5, we find a vanishing line using patterns of parallel lines such as window frames [10]. Once a set of centered quadrilaterals Q_j is found, we estimate the unknown diagonal parameters m_i that minimize the objective function f_obj of Eq.(21). In the experiments, we used the NMinimize[] function of Mathematica for the non-linear optimization [9]. With the m_i known, we can reconstruct the centered scene quadrilateral G_j, which is congruent to the target scene quadrilateral G_{g,j}. See Fig. 9b and Fig. 10b. The reconstructed 3D view frustums are omitted due to the page limit.
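On the implementation side, both the vanishing-line estimation and the constructions of Algorithm 5 reduce to homogeneous cross products. The sketch below (helper names ours; the edge endpoints are hypothetical user-picked image points, not data from the paper) obtains a vanishing line from two families of parallel scene edges such as window frames.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def meet(l1, l2):
    """Homogeneous intersection of two lines (possibly a point at infinity)."""
    return np.cross(l1, l2)

def vanishing_line(a1, a2, b1, b2, c1, c2, d1, d2):
    """Vanishing line w0 w1 from two families of parallel scene edges:
    (a1 a2) || (b1 b2) gives one vanishing point, (c1 c2) || (d1 d2) the other."""
    w0 = meet(line_through(a1, a2), line_through(b1, b2))
    w1 = meet(line_through(c1, c2), line_through(d1, d2))
    return np.cross(w0, w1)
```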
Fig. 9. Reconstruction of a quadrilateral from four views using Algorithm 4. (a) Input: web images (#1-#4) of Fountain Place in Dallas, Texas. (b) A reconstructed quadrilateral with the different textures of the given images.

Fig. 10. Reconstruction of a parallelogram from two views using Algorithm 4. (a) Input: two web images (#1, #2) of the Dockland in Hamburg, Germany, from uncalibrated cameras. (b) A reconstructed parallelogram with the different textures of the given images; using Ref-#2, m1 = 2.87 (err 2.8%), φ = 0.61 (err 0.7%), inc = 24.29 (err 1.2%).

For the quadrilateral case of Fig. 9, four images were used. The optimization converges when f_obj ≤ 3.7×10⁻⁴, with m1 = 2.46639, m2 = 0.476389, and m3 = 1.25378. The mean of the four φ_j is 1.77297, with variance 5.9974×10⁻⁵. The optimization takes about 3 seconds on a 2.6 GHz Intel Core i7; the time for the other reconstruction steps is trivial, since they are evaluations of analytic expressions. For the parallelogram of Fig. 10, the optimization converges when f_obj ≤ 10⁻³⁰, with m1 = 2.87419 and φ = 0.606594, in 0.06 seconds.

We also applied Algorithm 4 to the synthetic quadrilateral G of Fig. 4 with four different views. The optimization for the m_i converges when f_obj < 10⁻¹⁵ in 3 seconds. The mean error of the reconstructed m_i is 1.2×10⁻⁷. The timing is similar to the real example of Fig. 9, but the precision is much higher due to the absence of noise sources such as lens distortion or feature detection. When random noise of 1-pixel radius was added to the vertices of Q_j in a 1280×1024 image, the precision dropped, with errors of 6.9×10⁻³ and 4.3×10⁻³ in m_i and φ, respectively.

V. CONCLUSION

We proposed a novel method to reconstruct a scene quadrilateral and the relevant projective structure based on generalized coupled line cameras (GCLC). The method gives an analytic solution for single-view reconstruction when prior knowledge of the diagonal parameters is given. Otherwise, the required parameters can be approximated beforehand from multiple views through optimization. We also provide an improved method to handle off-centered cases by geometrically inferring a centered proxy quadrilateral, which accelerates the 2D reconstruction process without relying on homography or calibration. The overall computation is quite efficient, since each key step is represented as a simple analytic equation. Experiments show reliable results on real images from uncalibrated cameras.

To apply the proposed method to a real-world case with an off-centered quadrilateral, a vanishing line should be available for each view. This condition can be easily satisfied for specially textured quadrilaterals of artifacts [11]. Otherwise, we need other types of prior knowledge to infer a centered quadrilateral. For example, a predefined parametric polyhedral model can be a good candidate [12]. Lastly, the coupled line projector (CLP) [13] is a dual of CLC. We expect that a generalized CLP can be combined with GCLC for projector-based augmented reality applications.

REFERENCES

[1] J.-H. Lee, "Camera calibration from a single image based on coupled line cameras and rectangle constraint," in ICPR 2012, 2012, pp. 758-762.
[2] J.-H. Lee, "A new solution for projective reconstruction based on coupled line cameras," ETRI Journal, vol. 35, no. 5, pp. 939-942, 2013.
[3] Z. Zhang, "Camera calibration with one-dimensional objects," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 26, no. 7, pp. 892-899, 2004.
[4] J. Yu and L. McMillan, "General linear cameras," in Computer Vision - ECCV 2004. Springer, 2004, pp. 14-27.
[5] P. Sturm and S. Maybank, "On plane-based camera calibration: A general algorithm, singularities, applications," in CVPR 1999, 1999, pp. 432-437.
[6] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[7] Z. Zhang and L.-W. He, "Whiteboard scanning and image enhancement," Digital Signal Processing, vol. 17, no. 2, pp. 414-432, 2007.
[8] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2004.
[9] J. A. Nelder and R. Mead, "A simplex method for function minimization," The Computer Journal, vol. 7, no. 4, pp. 308-313, 1965.
[10] J.-C. Bazin, Y. Seo, C. Demonceaux, P. Vasseur, K. Ikeuchi, I. Kweon, and M. Pollefeys, "Globally optimal line clustering and vanishing point estimation in Manhattan world," in CVPR 2012, 2012, pp. 638-645.
[11] Z. Zhang, A. Ganesh, X. Liang, and Y. Ma, "TILT: Transform invariant low-rank textures," International Journal of Computer Vision, vol. 99, no. 1, pp. 1-24, 2012.
[12] P. E. Debevec, C. J. Taylor, and J. Malik, "Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach," in SIGGRAPH 1996. ACM, 1996, pp. 11-20.
[13] J.-H. Lee, "An analytic solution to a projector pose estimation problem," ETRI Journal, vol. 34, no. 6, pp. 978-981, 2012.
