# An Approach for Estimating the Fundamental Matrix by Barragan


The camera calibration problem consists of estimating the intrinsic and the extrinsic parameters. It can be solved by computing the fundamental matrix, which can be obtained from a set of corresponding points. In practice, however, corresponding points may be inaccurately estimated, falsely matched, or badly located, due among other causes to occlusion and ambiguity. Moreover, if the set of corresponding points carries no information on different depth planes, the estimated fundamental matrix may fail to correctly recover the epipolar geometry. This paper introduces a method for estimating the fundamental matrix. The estimation problem is posed as finding a set of corresponding points: fundamental matrices are estimated from subsets of corresponding points, and an optimisation criterion is used to select the best estimated fundamental matrix. The experimental evaluation shows that the least range of residuals is a criterion tolerant to large baselines.

Published in: Education, Technology


1. AN APPROACH FOR ESTIMATING THE FUNDAMENTAL MATRIX. Research work submitted for the degree of Master of Engineering in Computer Science. Daniel Barragan Calderon, Eng., Universidad del Valle, Cali, Colombia. "If I have seen farther than others, it is because I was standing on the shoulders of giants" (Isaac Newton)
2. Contents • Motivation • Camera Model • Epipolar Geometry • Camera Model to Epipolar Geometry Derivation • 3D Reconstruction Process • State-of-the-art • Problem Statement • Research Objectives • Proposed Approach • Results • Remarks and Conclusions (Universidad del Valle – School of Computer and Systems Engineering)
3. Motivation – Part 1. Figure 1. 3D Applications. Source: Google Images
4. Motivation – Part 2. Figure 2. Direct Problem vs. Inverse / Ill-posed Problem
5. Motivation – Part 3. Figure 3. Stereo Capture [50]. Video 1. 3D Reconstruction
6. Camera Model. Figure 4. Extrinsic and Intrinsic Camera Parameters
7. Epipolar Geometry. Figure 5. Corresponding Points. Figure 6. Epipolar Geometry
8. Camera Model to Epipolar Geometry Derivation
   • Points $\mathbf{m}$ and $\mathbf{m}'$ (in homogeneous coordinates) can be related through $\mathbf{P}$ and $\mathbf{P}'$: $\mathbf{m}' = \mathbf{P}'\mathbf{P}^{+}\mathbf{m}$
   • The epipolar line equation can be derived as follows:
     $\mathbf{l}' = \mathbf{e}' \times \mathbf{m}' = [\mathbf{e}']_{\times}\mathbf{m}'$
     $\mathbf{l}' = [\mathbf{e}']_{\times}(\mathbf{P}'\mathbf{P}^{+})\mathbf{m}$
     $\mathcal{F} = [\mathbf{e}']_{\times}\mathbf{P}'\mathbf{P}^{+}$, hence $\mathbf{l}' = \mathcal{F}\mathbf{m}$
   • Epipolar equation: $\mathbf{m}'^{T}\mathbf{l}' = 0 \Rightarrow \mathbf{m}'^{T}\mathcal{F}\mathbf{m} = 0$
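As a sanity check on this derivation, here is a minimal NumPy sketch (not the thesis code; the two projection matrices are assumed inputs) that builds $\mathcal{F} = [\mathbf{e}']_{\times}\mathbf{P}'\mathbf{P}^{+}$:

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x of a 3-vector v."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamental_from_projections(P, Pp):
    """F = [e']_x P' P^+, where e' = P' C and C is the camera centre of P."""
    Pplus = np.linalg.pinv(P)        # P^+ : Moore-Penrose pseudo-inverse
    # the camera centre C is the null space of P (P C = 0)
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    e_prime = Pp @ C                 # epipole in the second image
    return skew(e_prime) @ Pp @ Pplus
```

For noise-free synthetic cameras, the epipolar constraint $\mathbf{m}'^{T}\mathcal{F}\mathbf{m} = 0$ then holds up to round-off.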
9. 3D Reconstruction Process. Diagram 1. Illustration of the 3D Reconstruction Workflow
10. State-of-the-art. Diagram 2 (a taxonomy tree) splits calibration into two branches. Extrinsic and intrinsic parameter estimation covers one-camera and two-camera calibration, from basic and two-step techniques (Tsai [32], Heikkilä [33], Zhang [31], calibration toolboxes [30]) to nature-inspired techniques: genetic algorithms [12-14, 25, 26, 35, 36], particle swarm optimisers [15-18] and neural networks [19, 22, 29]. Epipolar geometry estimation covers robust methods (M-estimators [42], LMedS [40], RANSAC [44]), bucketing [43, 45] and genetic algorithms [37, 38].
17. Problem Statement – Part 1. Translation and Rotation. Figure 7. Epipolar Geometry
18. Problem Statement – Part 2
   • Let $\mathbf{S}$ be a set of corresponding points $\mathbf{m}$ and $\mathbf{m}'$ subject to:
     • the points $\mathbf{m}$ and $\mathbf{m}'$ have to be true projections of $\mathbf{M}$;
     • the $(u, v)^T$ and $(u', v')^T$ coordinates have to correspond to the true localisation of $\mathbf{m}$ and $\mathbf{m}'$, respectively;
     • the cardinality of $\mathbf{S}$ has to be related to the depth planes in the 3D scene.
   • The addressed problem consists of finding a set $\mathbf{S}$ that fulfils the above criteria
19. Research Objectives
   General objective
   • Propose a correspondence selection method for fundamental matrix estimation
   Specific objectives
   • Implement techniques for correspondence selection
   • Implement techniques for fundamental matrix estimation
   • Measure the impact of correspondence selection on fundamental matrix estimation
   • Establish an evaluation criterion for selecting the algorithm with the most accurate fundamental matrix
20. Proposed Approach • An algorithm for fundamental matrix estimation is proposed. Diagram 3. Proposed Approach
21. Proposed Approach • Clustering of Correspondences. Diagram 3. Proposed Approach
22. Proposed Approach • Clustering of Correspondences. Diagram 4. Disparity-Based Clustering of Correspondences. Diagram 2. Proposed Genetic Method
23. Proposed Approach – Clustering of Correspondences
   • Disparity estimation:
     $set = ((\mathbf{m}_1, \mathbf{m}'_1), \ldots, (\mathbf{m}_i, \mathbf{m}'_i), \ldots, (\mathbf{m}_{n_c}, \mathbf{m}'_{n_c}))$
     $set = (((u_1, v_1), (u'_1, v'_1)), \ldots, ((u_i, v_i), (u'_i, v'_i)), \ldots, ((u_{n_c}, v_{n_c}), (u'_{n_c}, v'_{n_c})))$
     $d_u^i = u_i - u'_i, \quad d_v^i = v_i - v'_i, \quad \mathbf{d}_i = (d_u^i, d_v^i)$
   • Subtractive clustering:
     $Pot_i = \sum_{j=1}^{n} \exp\!\left(-\frac{\|\mathbf{d}_i - \mathbf{d}_j\|^2}{(r_a/2)^2}\right)$
     $Pot_i \leftarrow Pot_i - Pot(\mathbf{c}_1)\exp\!\left(-\frac{\|\mathbf{d}_i - \mathbf{c}_1\|^2}{(r_b/2)^2}\right)$
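The disparity and subtractive-clustering equations above can be sketched as follows (an illustrative reimplementation of Chiu's subtractive clustering; the radii `ra`, `rb` and the helper names are assumptions, not taken from the thesis):

```python
import numpy as np

def disparities(pts, pts_p):
    """Disparity vectors d_i = (u_i - u'_i, v_i - v'_i) for (n, 2) arrays."""
    return pts - pts_p

def subtractive_clustering(d, ra=0.5, rb=0.75, n_centres=2):
    """Chiu's subtractive clustering over disparity vectors d (n, 2).
    ra: neighbourhood radius; rb: suppression radius (typically 1.5 * ra)."""
    dist2 = np.sum((d[:, None, :] - d[None, :, :]) ** 2, axis=-1)
    pot = np.exp(-dist2 / (ra / 2) ** 2).sum(axis=1)  # initial potentials
    centres = []
    for _ in range(n_centres):
        k = int(np.argmax(pot))          # highest-potential point is a centre
        centres.append(d[k])
        # suppress the potential of points near the chosen centre
        pot = pot - pot[k] * np.exp(-np.sum((d - d[k]) ** 2, axis=1) / (rb / 2) ** 2)
    return np.array(centres)
```

On disparity vectors forming well-separated groups, each selected centre falls inside a different group, which is what the clustering step relies on.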
24. Proposed Approach – Clustering of Correspondences
   • K-means clustering:
     $\mathbf{c}_j^{(t+1)} = \frac{1}{|\mathcal{S}_j^{(t)}|}\sum_{\mathbf{d}_i \in \mathcal{S}_j^{(t)}} \mathbf{d}_i$
     $\mathcal{S}_j^{(t)} = \{\mathbf{d}_i : \|\mathbf{d}_i - \mathbf{c}_j^{(t)}\| \le \|\mathbf{d}_i - \mathbf{c}_{j^*}^{(t)}\| \text{ for all } j^* = 1, \ldots, k\}$
     $subset = (((u_1, v_1), (u'_1, v'_1))_{rand}, \ldots, ((u_{\lambda k}, v_{\lambda k}), (u'_{\lambda k}, v'_{\lambda k}))_{rand})$
   • Number of subsets:
     $\wp = 1 - [1 - (1-\epsilon)^{n_c}]^{n_s}$
     $n_s = \frac{\log(1-\wp)}{\log(1-(1-\epsilon)^{n_c})}$
25. Proposed Approach • Correspondence Selection. Diagram 3. Proposed Approach
26. Proposed Approach • Fundamental Matrix Estimation. Diagram 5. Correspondence Selection by GA. Diagram 3. Proposed Approach
27. Proposed Approach – Correspondence Selection
   • Population:
     $\theta = (x_1, \ldots, x_j, \ldots, x_p), \quad x_j = (\mathbf{m}_j, \mathbf{m}'_j)$
     $\theta = ((\mathbf{m}_1, \mathbf{m}'_1), \ldots, (\mathbf{m}_j, \mathbf{m}'_j), \ldots, (\mathbf{m}_p, \mathbf{m}'_p))$
     $x_j = ((u_j, v_j), (u'_j, v'_j))$
   • Fitness:
     $f(\theta) = \sum_{i=1}^{n} \left[ d(\mathbf{m}_i, \mathcal{F}^{T}\mathbf{m}'_i) + d'(\mathbf{m}'_i, \mathcal{F}\mathbf{m}_i) \right]$
     $\theta_0 = \arg\min f(\theta)$
   • Selection: roulette wheel
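The fitness above is a symmetric epipolar distance. A minimal sketch, assuming homogeneous points and a given $\mathcal{F}$ (the helper names are illustrative, not from the thesis):

```python
import numpy as np

def point_line_distance(m, l):
    """Distance from a homogeneous point m to a 2D line l = (a, b, c)."""
    return abs(l @ m) / np.hypot(l[0], l[1])

def fitness(F, pts, pts_p):
    """Sum of distances of each point to the epipolar line induced by its
    correspondence: d(m, F^T m') + d'(m', F m). Lower is better."""
    total = 0.0
    for m, mp in zip(pts, pts_p):
        total += point_line_distance(mp, F @ m)    # m' to l' = F m
        total += point_line_distance(m, F.T @ mp)  # m  to l  = F^T m'
    return total
```

A chromosome whose correspondences all satisfy the epipolar constraint scores (near) zero, so minimising this value drives the GA toward outlier-free subsets.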
28. Proposed Approach – Correspondence Selection
   • Crossover:
     $\theta'_1 = sub(\theta_1, h) \,|\, sub(\theta_2, p-h), \quad h = \lceil \mathcal{P}p \rceil$
     $\theta'_2 = sub(\theta_2, p-h) \,|\, sub(\theta_1, h)$
     $0.15 \le \mathcal{P} \le 0.85$
   • Mutation:
     $x'_j = x_j + \xi$, where $\xi$ is a mutation offset taken from the 8-neighbourhood of $x_j$ (positions 0-7 around the pixel)
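A sketch of the two operators, assuming a chromosome is a Python list of correspondence pairs (the symmetric tail swap is one common reading of the $sub(\cdot)\,|\,sub(\cdot)$ notation above):

```python
import random

# 8-neighbourhood offsets (positions 0-7 around a pixel), used as mutation offsets
NEIGHBOURS = [(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0), (-1, 1), (0, 1), (1, 1)]

def crossover(theta1, theta2, p_min=0.15, p_max=0.85):
    """Single-point crossover: cut both chromosomes at index h = round(P * p),
    with P drawn from [p_min, p_max], and swap the tails."""
    p = len(theta1)
    h = round(random.uniform(p_min, p_max) * p)
    return theta1[:h] + theta2[h:], theta2[:h] + theta1[h:]

def mutate(x):
    """Shift a point by one pixel in a random 8-neighbourhood direction."""
    du, dv = random.choice(NEIGHBOURS)
    return (x[0] + du, x[1] + dv)
```

Bounding the cut fraction away from 0 and 1 (here 0.15 to 0.85, as on the slide) guarantees every child inherits genes from both parents.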
29. Proposed Approach • Fundamental Matrix. Diagram 3. Proposed Approach
30. Results – Part 1. This section contains the results of the following tests:
   • Results for different correspondence selection methods and different fundamental matrix estimation algorithms
   • Repeatability analysis for the proposed GA-based algorithm
   • Performance evaluation of the proposed GA-based algorithm on multiple datasets
31. Results – Part 2 • Results were evaluated using the residual of the error measure between $\mathbf{m}$ and $\mathbf{m}'$. Figure 8. Error Measure
32. Results – Part 3 • Results were evaluated using the epipolar lines (left and right cameras). Figure 9. Epipolar Lines
33. Results – Part 4: Residual Estimation

| Fundamental matrix estimation algorithm | Random | Buckets | Proposed DBC* |
|---|---|---|---|
| Normalized 7-Point Algorithm | 1.4482E-04 | 1.7010E-04 | 1.8253E-04 |
| Normalized 8-Point Algorithm | 1.1341E-07 | 2.0495E-09 | 1.2947E-06 |

Table 1. Residual estimation by correspondence selection technique. Figure 10. (a) Normalized 7-Point + DBC, (b) Normalized 8-Point + DBC. *DBC: Disparity-Based Clustering
34. Results – Part 5: Residual Estimation (Robust Methods)

| Fundamental matrix estimation algorithm | Random | Buckets | Proposed DBC |
|---|---|---|---|
| LMedS | 7.6743E-05 | 7.1070E-05 | 8.0746E-05 |
| Proposed GA-based | 8.9615E-06 | 1.5240E-05 | 2.4937E-05 |

Table 2. Residual estimation (robust methods). Figure 11. (a) LMedS + DBC, (b) GA-based + DBC
35. Results – Part 6

| Fundamental matrix estimation algorithm | Random | Buckets | Proposed DBC |
|---|---|---|---|
| Normalized 7-Point Algorithm | 1.4482E-04 | 1.7010E-04 | 1.8253E-04 |
| Normalized 8-Point Algorithm | 1.1341E-07 | 2.0495E-09 | 1.2947E-06 |
| LMedS | 7.6743E-05 | 7.1070E-05 | 8.0746E-05 |
| Proposed GA-based | 8.9615E-06 | 1.5240E-05 | 2.4937E-05 |

Table 3. Residual estimation. Chart 1. Residual estimation (bar chart of the values above)
36. Results – Part 7: Computing Time (s), AMD 1.7 GHz, 3 GB RAM

| Fundamental matrix estimation algorithm | Random | Buckets | Proposed DBC |
|---|---|---|---|
| Normalized 7-Point Algorithm | 2.654 | 2.794 | 3.547 |
| Normalized 8-Point Algorithm | 2.742 | 2.790 | 3.209 |
| LMedS | 3.563 | 3.620 | 4.002 |
| Proposed GA-based | 10.390 | 11.983 | 18.697 |

Table 4. Computing time in seconds. Chart 2. Computing time (bar chart of the values above)
37. Results – Part 8 • Filtering the initially estimated corresponding points with RANSAC and Guided Sampling [48] improved the results

| Fundamental matrix estimation algorithm | Residual estimation | Computing time (s) |
|---|---|---|
| LMedS + Bucketing | 7.1070E-05 | 3.620 |
| Proposed GA-based | 7.9477E-09 | 25.065 |

Table 5. Proposed GA-based + RANSAC + Guided Sampling. Figure 12. (a) Filtering of badly located points and false matches, (b) epipolar lines for the Proposed GA-based + RANSAC + Guided Sampling
38. Results – Part 9

| Dataset | Residual | Computing time (s) |
|---|---|---|
| Lab | 1.5752E-09 | 25.967 |
| Lab | 2.4951E-09 | 37.333 |
| Lab | 9.6977E-10 | 67.101 |
| Lab | 1.0642E-09 | 32.820 |
| Lab | 1.4664E-10 | 20.284 |

Table 6. Repeatability analysis for the Proposed GA-based + RANSAC + Guided Sampling (five runs on the Lab dataset). Figure 13. Epipolar lines for the Proposed GA-based + RANSAC + Guided Sampling
39. Results – Part 10

| Dataset | FM estimation algorithm | Residual | Computing time (s) |
|---|---|---|---|
| Lab | Bucketing + LMedS | 1.8745E-04 | 3.0833 |
| Lab | Proposed GA-based | 2.1072E-06 | 20.4822 |
| Corridor [49] | Bucketing + LMedS | 1.5994E-05 | 3.3743 |
| Corridor [49] | Proposed GA-based | 1.3204E-09 | 7.3512 |
| Raglan [49] | Bucketing + LMedS | 1.2072E-04 | 45.4828 |
| Raglan [49] | Proposed GA-based | 1.6294E-10 | 59.2093 |
| Kapel [49] | Bucketing + LMedS | 1.8288E-04 | 2.9213 |
| Kapel [49] | Proposed GA-based | 6.0952E-09 | 37.5971 |

Table 7. Performance evaluation using multiple datasets
40. Results – Part 11. Figure 14. Epipolar lines for multiple datasets [49]
41. Remarks – Part 1
   • The GA-based algorithm suits applications that do not require repeated fast calibration of a stereo rig, for example content generation, where calibration is usually done once, at the start of the capture
   • Parallel computing reduces the estimation time of robust algorithms when the time spent on algorithm iterations is long compared with the time spent splitting the tasks. Such tests were run but are not included in this research work
42. Remarks – Part 2
   • The algorithms' speed can be improved by operating on vectors of correspondences through indices
   • Multi-camera security systems currently rely on plain image content rather than on the cameras' coordinate systems; unifying the cameras' coordinate systems would avoid many drawbacks of current security systems
43. Conclusions – Part 1
   • The residual value is not a reliable benchmark for fundamental matrix estimation when the proportion of outliers is high; a prior filtering step is needed to obtain reliable residual values
   • The GA (genetic algorithm) by itself cannot discard outlying correspondences; when noise levels are high, a prior filtering step is needed to obtain satisfactory fundamental matrix estimates
44. Conclusions – Part 2
   • Mathematically, 7 or 8 corresponding points are enough to solve the equation system for fundamental matrix estimation, but obtaining 7 or 8 pairs free of false and badly located matches is difficult in real problems. It is better to use a larger number of correspondences, so that the variability of the scene across different depth planes is captured
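For reference, the normalised 8-point algorithm compared in the results can be sketched in a few lines of NumPy (a generic reimplementation with Hartley's normalisation, not the code behind the tables):

```python
import numpy as np

def normalized_eight_point(pts, pts_p):
    """Normalised 8-point algorithm: estimate F from n >= 8 correspondences
    given as (n, 2) pixel-coordinate arrays (Hartley's normalisation)."""
    def normalise(p):
        # translate to the centroid, scale so the mean distance is sqrt(2)
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T
    x, T = normalise(np.asarray(pts, float))
    xp, Tp = normalise(np.asarray(pts_p, float))
    # each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([xp[:, 0] * x[:, 0], xp[:, 0] * x[:, 1], xp[:, 0],
                         xp[:, 1] * x[:, 0], xp[:, 1] * x[:, 1], xp[:, 1],
                         x[:, 0], x[:, 1], np.ones(len(x))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    # enforce rank 2 by zeroing the smallest singular value
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return Tp.T @ F @ T                          # undo the normalisation
```

With noise-free correspondences the recovered $\mathcal{F}$ satisfies the epipolar constraint exactly; with false or badly located matches the linear system degrades, which is precisely the fragility the conclusion points at.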
45. Contributions
   • Poster: "Acerca del Algoritmo 8 Puntos" (On the 8-Point Algorithm), Latin American Conference on Networked and Electronic Media, 2009. Daniel Barragan, Maria Trujillo
   • Paper submitted and oral presentation: "An Approach for Estimating the Fundamental Matrix," 6th Colombian Computing Congress, 2011. Daniel Barragan, Maria Trujillo
   • Paper submitted: "A GA-based Method for Estimating the Fundamental Matrix," IEEE Congress on Evolutionary Computation, 2011. Daniel Barragan, Ivan Cabezas, Maria Trujillo
   • Paper submitted: "A GA-based Method for Estimating the Fundamental Matrix," 22nd British Machine Vision Conference, 2011. Daniel Barragan, Ivan Cabezas, Maria Trujillo
46. References – Part 1
   • [1] J. Bazin, I. Kweon, C. Demonceaux, and P. Vasseur, "Automatic calibration of catadioptric cameras in urban environment," 2008, pp. 3108-3114.
   • [2] P. Krsek, M. Spanel, M. Svub, V. Stancl, O. Siler, and R. Barton, "Network collaborative environment supporting 3D medicine," EMBC 2009, Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009, pp. 2164-2167.
   • [3] Jiuxiang Hu, A. Razdan, and J. Zehnder, "An Algorithm to Calibrate Field Cameras for Stereo Clouds," 2008, pp. II-1048-II-1051.
   • [4] L. Ray, "Monocular 3D vision for a robot assembly environment," IEEE International Conference on Systems Engineering, Pittsburgh, PA, USA, pp. 430-434.
   • [5] V. Mazya and T.O. Shaposhnikova, Jacques Hadamard, AMS Bookstore, 1999.
   • [6] G. Xu and Z. Zhang, Epipolar Geometry in Stereo, Motion, and Object Recognition, Springer, 1996.
   • [7] J. Weng, P. Cohen, and M. Herniou, "Camera Calibration with Distortion Models and Accuracy Evaluation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, 1992, pp. 965-980.
   • [8] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
   • [9] H.C. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," Nature, vol. 293, 1981, pp. 133-135.
   • [10] R. Hartley, "In defense of the eight-point algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, 1997, pp. 580-593.
   • [11] Q. Luong and O.D. Faugeras, "The fundamental matrix: Theory, algorithms, and stability analysis," International Journal of Computer Vision, vol. 17, Jan. 1996, pp. 43-75.
   • [12] A. Abellard, M. Bouchouicha, and M.M. Ben Khelifa, "A genetic algorithm application to stereo calibration," CIRA 2005, Proceedings of the 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation, 2005, pp. 285-290.
   • [13] M. Bouchouicha, M. Khelifa, and W. Puech, "A non-linear camera calibration with genetic algorithms," 2003, pp. 189-192, vol. 2.
47. References – Part 2
   • [14] Z. Yang, F. Chen, and J. Zhao, "A novel camera calibration method based on genetic algorithm," 2008, pp. 2222-2227.
   • [15] Dechao Wang, Yaqing Tu, and Tienan Zhang, "Research on the application of PSO algorithm in non-linear camera calibration," WCICA 2008, 7th World Congress on Intelligent Control and Automation, 2008, pp. 4495-4500.
   • [16] Xiaona Song, Bo Yang, Zhiquan Feng, Ting Xu, Deliang Zhu, and Yan Jiang, "Camera Calibration Based on Particle Swarm Optimization," CISP 2009, 2nd International Congress on Image and Signal Processing, 2009, pp. 1-5.
   • [17] J. Ze-Tao, W. Wenhuan, and W. Min, "Camera Autocalibration from Kruppa's Equations Using Particle Swarm Optimization," Proceedings of the 2008 International Conference on Computer Science and Software Engineering, Volume 01, IEEE Computer Society, 2008, pp. 1032-1034.
   • [18] H. Gao, B. Niu, Y. Yu, and L. Chen, "An Improved Two-Stage Camera Calibration Method Based on Particle Swarm Optimization," Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence, 2009, pp. 804-813.
   • [19] M. Ahmed, E. Hemayed, and A. Farag, "A neural approach for single- and multi-image camera calibration," 1999, pp. 925-929, vol. 3.
   • [20] Qiang Ji and Yongmian Zhang, "Camera calibration with genetic algorithms," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 31, 2001, pp. 120-130.
   • [21] Junghee Jun and Choongwon Kim, "Robust camera calibration using neural network," TENCON 99, Proceedings of the IEEE Region 10 Conference, 1999, pp. 694-697, vol. 1.
   • [22] M. Ahmed, E. Hemayed, and A. Farag, "Neurocalibration: a neural network that can tell camera calibration parameters," Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, pp. 463-468, vol. 1.
   • [23] K. Bilal and J. Qureshi, "Nature inspired optimization techniques for camera calibration," ICET 2008, 4th International Conference on Emerging Technologies, 2008, pp. 27-31.
   • [24] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley Professional, 1989.
48. References – Part 3
   • [25] Qiang Ji and Yongmian Zhang, "Camera calibration with genetic algorithms," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 31, 2001, pp. 120-130.
   • [26] M. Roberts and A. Naftel, "A genetic algorithm approach to camera calibration in 3D machine vision," 1994, pp. 12/1-12/5.
   • [27] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948, vol. 4.
   • [28] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
   • [29] Junghee Jun and Choongwon Kim, "Robust camera calibration using neural network," 1999, pp. 694-697, vol. 1.
   • [30] Jean-Yves Bouguet, "Camera Calibration Toolbox for Matlab," California Institute of Technology (Caltech).
   • [31] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, 2000, pp. 1330-1334.
   • [32] R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, vol. 3, 1987, pp. 323-344.
   • [33] J. Heikkilä, "Geometric Camera Calibration Using Circular Control Points," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, 2000, pp. 1066-1077.
   • [34] W. Sun and J. Cooperstock, "An empirical evaluation of factors influencing camera calibration accuracy using three publicly available techniques," Machine Vision and Applications, vol. 17, Apr. 2006, pp. 51-67.
   • [35] S. Kumar, M. Thakur, B. Raman, and N. Sukavanam, "Stereo camera calibration using real coded genetic algorithm," TENCON 2008, IEEE Region 10 Conference, 2008, pp. 1-5.
   • [36] Yingjie Xing, Qiao Liu, Jing Sun, and Long Hu, "Camera Calibration Based on Improved Genetic Algorithm," 2007 IEEE International Conference on Automation and Logistics, 2007, pp. 2596-2601.
   • [37] J. Chai and S. Ma, "Robust epipolar geometry using genetic algorithm," Computer Vision — ACCV'98, 1997, pp. 272-279.
49. References – Part 4
   • [38] G.R. Whitehead, "Estimating Intrinsic Camera Parameters from the Fundamental Matrix Using an Evolutionary Approach," EURASIP Journal on Applied Signal Processing, vol. 2004, 2004, pp. 1113-1124.
   • [39] P.H.S. Torr, "Bayesian Model Estimation and Selection for Epipolar Geometry and Generic Manifold Fitting," Int. J. Comput. Vision, vol. 50, 2002, pp. 35-61.
   • [40] Z. Zhang, "Determining the Epipolar Geometry and its Uncertainty: A Review," Int. J. Comput. Vision, vol. 27, 1998, pp. 161-195.
   • [41] Z. Zhang, R. Deriche, O. Faugeras, and Q. Luong, "A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry," Artif. Intell., vol. 78, 1995, pp. 87-119.
   • [42] R. Subbarao and P. Meer, "Beyond RANSAC: User Independent Robust Regression," CVPRW '06, Conference on Computer Vision and Pattern Recognition Workshop, 2006, p. 101.
   • [43] Z. Zhang, "Determining the Epipolar Geometry and its Uncertainty: A Review," Int. J. Comput. Vision, vol. 27, 1998, pp. 161-195.
   • [44] P.H.S. Torr and D.W. Murray, "The Development and Comparison of Robust Methods for Estimating the Fundamental Matrix," Int. J. Comput. Vision, vol. 24, 1997, pp. 271-300.
   • [45] Yi-Jun Huang and Wei-Jun Liu, "Robust estimation for the fundamental matrix based on LTS and bucketing," 2009, pp. 486-491.
   • [46] M. Trujillo and E. Izquierdo, "Robust Estimation of the Fundamental Matrix by Exploiting Disparity Redundancies," ACTA Press, Sep. 2003.
   • [47] M.A. Fischler and R.C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, Jun. 1981, pp. 381-395.
   • [48] B. Tordoff and D. Murray, "Guided Sampling and Consensus for Motion Estimation," ACM Digital Library, 2002.
   • [49] Visual Geometry Group, Robotics Research Group, University of Oxford. Multi-view and Oxford colleges building reconstruction. <http://www.robots.ox.ac.uk/~vgg/data/data-mview.html>.
   • [50] <www.mathworks.com/image-video-processing>
50. Thanks. (Photo: "Indoors," by Iván Cabezas)
51. Appendix A: Derivation of Epipolar Geometry from Projection Matrices
52. Epipolar Geometry
   • Relation between $\mathbf{m}$ and $\mathbf{m}'$ through $\mathbf{P}$ and $\mathbf{P}'$:
     $\mathbf{m} = \mathbf{P}\mathbf{M}$
     $\mathbf{P}^{+}\mathbf{m} = \mathbf{P}^{+}\mathbf{P}\mathbf{M}$
     $\mathbf{P}^{+}\mathbf{m} = \mathbf{M}$
     $\mathbf{m}' = \mathbf{P}'\mathbf{M}$
     $\mathbf{m}' = \mathbf{P}'\mathbf{P}^{+}\mathbf{m}$
   Figure A1. Epipolar Geometry
53. Epipolar Geometry
   • Relation between $\mathbf{m}$ and $\mathbf{m}'$ through $\mathbf{P}$ and $\mathbf{P}'$: $\mathbf{m}' = \mathbf{P}'\mathbf{P}^{+}\mathbf{m}$
   • Epipolar line equation:
     $\mathbf{l}' = \mathbf{e}' \times \mathbf{m}' = [\mathbf{e}']_{\times}\mathbf{m}'$
     $\mathbf{l}' = [\mathbf{e}']_{\times}(\mathbf{P}'\mathbf{P}^{+})\mathbf{m}$
     $\mathcal{F} = [\mathbf{e}']_{\times}\mathbf{P}'\mathbf{P}^{+}$, hence $\mathbf{l}' = \mathcal{F}\mathbf{m}$
   Figure A1. Epipolar Geometry
54. Epipolar Geometry
   • Epipolar line equation: $\mathbf{l}' = \mathcal{F}\mathbf{m}$
   • Fundamental matrix equation: $\mathbf{m}'^{T}\mathbf{l}' = 0 \Rightarrow \mathbf{m}'^{T}\mathcal{F}\mathbf{m} = 0$
   Figure A1. Epipolar Geometry
55. Appendix B: Epipolar Geometry in 3D
56. Epipolar Geometry in 3D. Figure B1. Epipolar Geometry in 3D
57. Appendix C: Results, Corridor Stereo Pair. Source: http://www.robots.ox.ac.uk/
58. Results, Corridor Stereo Pair. Figure C1. Disparity-Based Clustering of Correspondences
59. Results, Corridor Stereo Pair. Figure C2. Elitist Set of Correspondences
60. Results, Corridor Stereo Pair. Figure C3. Epipolar Lines
61. Appendix D: Accuracy and Precision
62. Accuracy and Precision. Figure D1. Accuracy and Precision
63. Appendix E: Stereo Capture
64. Stereo Capture. Video E1. Stereo Rig and Corresponding Points
65. Appendix F: Correspondences Filtering
66. Correspondences Filtering. Figure F1. Filtering of badly located points and false matches. Figure F2. Epipolar lines for the Proposed GA-based + RANSAC + Guided Sampling