SILESIAN UNIVERSITY OF TECHNOLOGY
Faculty of Automatic Control, Electronics and Computer Science
MASTER THESIS
Research on accuracy of geometric
reconstruction using digital cameras
Author: Marek Kubica
Supervisor: dr inż. Henryk Palus
Gliwice 2005
1. Introduction ……………………………………………………………………….…….. 5
- Purpose of the work ………………………………………………………………….. 5
2. Model of the camera ……………………………………………………………….…… 6
- Construction of the common digital camera and lens defects ……………………….. 6
- Description of geometric model of the camera …………………………………...… 11
• Different coordinate frames …………………………………………………….. 12
• Calibration matrix ………………………………………………………………. 13
• Mirror matrix and mirror constraint …………………………………………….. 14
3. 2D homography ………………………………………………………………………... 16
4. Minimizing the distortions from the image ………………………………………….. 18
- Radial distortion correction ……………………………………………………….… 18
- Homography correction …………………………………………………………….. 24
- Mirror pole optimization ……………………………………………………………. 26
- Mirror pole correction ………………………………………………………………. 28
5. Calibration of the camera ……………………………………………………………... 29
- Mirror pole ………………………………………………………………………….. 29
- Vanishing mirror line ……………………………………………………………….. 31
• Projection of object on the mirror ………………………………………………. 31
• Calculation of vanishing mirror line ……………………………………………. 32
• Horizontal correction …………………………………………………………… 33
- Mirror angle and principal point ……………………………………………………. 34
- 3D reconstruction and scale factor ………………………………………………….. 36
6. Calibration and measuring algorithm ………………………………………………... 38
- Calibration procedure and setting the scene ………………………………………… 39
- Measuring procedure ………………………………………………………………... 42
- Example on real data ………………………………………………………………… 43
7. Accuracy measures ……………………………………………………………………. 47
- Description of the experimental accuracy measures and procedures ………………. 47
- Accuracy of calibration procedure ………………………………………………….. 48
- Accuracy of distance measuring ……………………………………………………. 49
- Influence of position of measured object …………………………………………… 50
- Influence of resolution of the camera ……………………………………………….. 51
8. Case study ………………………………………………………………………………. 53
9. Summary ……………………………………………………………………………….. 58
- Accomplished goals ………………………………………………………………… 58
- Proposals for the future research ……………………………………………………. 58
- Future and industrial applications …………………………………………………... 59
10. Index of figures and tables ……………………………………………………………. 60
11. References ……………………………………………………………………………… 62
12. Appendixes ……………………………………………………………………………... 63
- Distortion minimizing source code …………………………………………………. 63
• Source code for radial distortion removal algorithm …………………………… 63
• Source code for homography correction ………………………………………... 64
• Source code for mirror pole optimization ………………………………………. 68
• Source code for mirror pole correction …………………………………………. 69
- Source code for calibration procedure ……………………………………………… 70
• Source code for mirror pole calculation ………………………………………… 70
• Source code for projection of object on the mirror plane ………………………. 71
• Source code for vanishing mirror line calculation ……………………………… 72
• Source code for mirror angle, focal distance and central point calculation …….. 74
• Source code for scale factor calculation ………………………………………… 77
• Source code for main file for camera calibration parameters …………………... 79
- Source code for measuring algorithm ………………………………………………. 82
Acknowledgments
At the beginning I would like to thank all the kind people thanks to whom I got the
opportunity to work on the subject described in my master thesis. To the international
coordinator of my department, Mrs. Joanna Polańska, for creating the opportunity for me to
study abroad as an Erasmus student at the Karel de Grote-Hogeschool in Antwerp.
To Rudi Penne for a very good cooperation, comments and suggestions. To Luc Mertens, who
gave me the opportunity to work in the Industrial Vision Laboratory at the department of
Industrial Science and Technology at KDG, and to Daniel Senft, with whom I could always
discuss my problems and ideas.
1. Introduction
- Purpose of the work
The 3D reconstruction of real world scenes gives great opportunities and opens a wide
range of applications in robot and industrial vision. Vision is the most important sense for
humans in the exploration of the universe, and it should also be the basic and primary
sensing element of artificial intelligence.

The aim of my work was to build a complete calibration algorithm for 3D reconstruction
for metric purposes and to research how to increase the accuracy of such a system, working
with different models of the disturbances created by the camera and with different
mathematical methods to neglect the effects of all sources of nonlinearity, which have a very
strong impact on the linear model of the camera.

I will describe the mechanism by which disturbances are created in the digital camera and
what their main sources are, and describe methods for removing the nonlinear errors from the
image. For a 3D reconstruction system with a camera and a mirror plane, I will describe the
system precisely, its properties and construction, and give a ready solution for stable
calculation of the intrinsic and extrinsic calibration parameters. I will introduce the whole
algorithm for the calibration procedure and the measuring procedure, with examples on real
data using Matlab 6.5 software. Finally, I will describe the total accuracy of the system and
specify which parts require special care to maintain the best precision.
2. Model of the camera
At the beginning we will look closer at our measuring instrument, which is the digital
camera. We will track the whole measurement path, starting from the light reflected from the
measured object, passing through the lens, and finally being captured by the image sensor
and converted to a digital representation.

I will explain which optical phenomena cause the biggest disturbances to the signal
which we are processing, and which construction details are important to take into
consideration in the calibration process and during measurements.

I will describe the mathematical model of the camera I use to calculate all parameters
of our system, and how, using simple linear algebra, we are going to reconstruct from a
simple two dimensional picture the real three dimensional coordinates of the measured object
in reference to the center of the camera.
- Construction of the common digital camera
During all the years of photography the general idea of taking pictures did not change
much; only the capturing and storage of information evolved significantly. We will look
closer at the beginning of the process: because no disturbances are added to the signal in the
digital preprocessing part of the camera, we will pass over it and take a closer look at the part
where the information is still transmitted as a signal in the form of light.
That part of the process is very simple. The light is reflected from the object, passes
through the lenses and is projected on the sensing element, which in a digital camera is a
CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) sensor.
Figure 2.1 General idea of construction of the commonly used digital camera
A CCD is built of photo sites, typically arranged in an X-Y matrix of rows and
columns. Each photo site, in turn, is built of a photodiode and an adjacent charge holding
region, which is shielded from light. The photodiode converts light (photons) into charge
(electrons). The number of electrons collected is proportional to the light intensity. Typically,
light is collected over the entire sensor simultaneously and then transferred to the adjacent
charge transfer cells within the columns.
Next, the charge is read out: each row of data is moved to a separate horizontal charge
transfer register. Charge packets for each row are read out serially and sensed by a charge-to-
voltage converter and amplifier.
This architecture produces a low-noise, high-performance imager. That optimization,
however, makes integrating other electronics onto the silicon impractical. In addition,
operating the CCD requires application of several clock signals, clock levels, and bias
voltages, complicating system integration and increasing power consumption, overall system
size, and cost.
A CMOS sensor is made with standard silicon processes in high-volume foundries.
Peripheral electronics, such as digital logic, clock drivers, or analog-to-digital converters, can
be readily integrated with the same fabrication process. CMOS sensors can also benefit from
process and material improvements made in mainstream semiconductor technology. To
achieve these benefits, the CMOS sensor's architecture is arranged more like a memory cell or
flat-panel display. Each photo site contains a photodiode that converts light to electrons, a
charge-to-voltage conversion section, a reset and select transistor and an amplifier section.
Overlaying the entire sensor is a grid of metal interconnects to apply timing and readout
signals, and an array of column output signal interconnects. The column lines connect to a set
of decode and readout (multiplexing) electronics that are arranged by column outside of the
pixel array.
This architecture allows the signals from the entire array, from subsections, or even
from a single pixel to be read out by a simple X-Y addressing technique {Ref. 1}.
Figure 2.2 CCD and CMOS image capture sensors
Both techniques bring some strengths and weaknesses, but regardless of them all, for us the
most important thing is that the image coordinates are Euclidean coordinates having equal
scales in both axial directions. In cameras with a CCD sensor there is the biggest possibility
of having non-square pixels. If image coordinates are measured in pixels, as in our case, this
has the extra effect of introducing an unequal scale factor in each direction.
The biggest source of errors in our image is the lens. It is usually the lens which
distorts the image the most, depending on its quality. Commonly used simple cameras are
equipped with very cheap lenses which, through several optical phenomena, change and
deflect the image of the measured object almost irreversibly.
The first phenomenon is chromatic aberration. Chromatic aberration arises from
dispersion, the property that the refractive index of glass differs with wavelength
(see Figure 2.3). There are two types of chromatic aberration: longitudinal aberration and
lateral aberration.
Figure 2.3 Chromatic aberration
- Longitudinal chromatic aberration causes different wavelengths to focus on different image
planes.
- Lateral chromatic aberration is the color fringing that occurs because the magnification of
the image differs with wavelength.
There are several ways of removing chromatic aberration. Producers use very
exotic glasses with very low dispersion, like the "Hi-UD" glass produced by Canon {Ref. 3},
use lenses with a very big focal distance so the light does not have to be refracted so much, or
use a system of two or three lenses with different types of glass so that the aberration of one
lens is corrected by another. But all those solutions are very expensive and usually we will
have to neglect this effect in preprocessing {Ref. 2}.
Most photographic lenses are composed of elements with spherical surfaces. Such
elements are relatively easy to manufacture, but their shape is not ideal for the formation of a
sharp image. Spherical aberration is an image imperfection that is due to the spherical lens
shape; Figure 2.4 illustrates the aberration for a single, positive element. Light that hits the
lens close to the optical axis is focused at position ‘c’. The light that traverses the margins of
the lens comes to a focus at a position ‘a’ closer to the lens.
Figure 2.5 Simple lens with undercorrected astigmatism. T - tangential surface; S - sagittal surface;
P - Petzval surface
As a consequence, when the image center is in focus the image corners are out of
focus, with tangential details blurred to a greater extent than sagittal details. Although off-axis
stigmatic imaging is not possible in this case, there is a surface lying between the ‘S’ and ‘T’
surfaces that can be considered to define the positions of best focus.
The surface P (see Figure 2.5) is the Petzval surface, named after the mathematician Joseph
Miksa Petzval. It is a surface that is defined for any lens, but that does not relate directly to
the image quality - unless astigmatism is completely absent. In the presence of astigmatism
the image is always curved (whether it concerns S, T, or both) even if P is flat.
All these phenomena together cause quite big distortions in our image, which result
in radial distortions (see Figure 2.6) and finally in an error of our measurement.
The most commonly observed are pincushion or barrel distortions, which are easy and almost
completely possible to remove, but sometimes the distortions are more complex and we
observe wave distortions. Because the way they are created is well known to us, we can
easily model and remove them by recalculating the positions of all pixels in the image.
Figure 2.6 Different kinds of radial distortion: a) barrel, b) pincushion, c) wave
A further problem is obtaining very good sharpness on both the object and its mirror
reflection, which usually is almost impossible, so the object edges become blurred. This
causes uncertainties in the coordinates of the points which make up the calibration object and
of the points which determine the edges of the object we are going to measure. In the case of
calibrating the camera we can choose an object such that this error is minimized almost to
zero, but in the case of measured objects we have to use edge detection techniques to get
better precision of measurement. Due to the mechanical construction of the camera we have
to remember that the calibration parameters of the camera change with every change of the
camera settings {Ref. 2,3,4}.
- Description of geometric model of the camera
The camera performs a simple mapping of 3D world objects onto a 2D image. We will
now look closer at the central projection pinhole model, represented by matrices with specific
properties which describe the mapping between the 3D world and the 2D image. The idea of
3D reconstruction uses the simple statement that an image of the object together with a
second image carrying the depth information is enough to determine real space coordinates.

We will describe a special case of stereo vision, mirror or catadioptric vision, where
instead of using two cameras we extract the depth information from the reflection of the
object in the mirror. This results in a series of simplifications. Both views are captured by the
same camera with identical camera parameters. It also simplifies preprocessing, since we deal
with only one image. Instead of two epipoles from two cameras we obtain only one mirror
pole 'e'. In classical stereo vision the relative position of the two cameras is determined by
six parameters; in our situation only three parameters are necessary {Ref. 10}.
The pinhole model of the camera is defined simply by the retinal plane 'R' and the
center of the camera 'C'. In this model the image 'n' of a point 'N' in 3D real space is
obtained by projecting 'N' onto the retinal plane 'R' from the camera center 'C' (see Figure 5.6).
For our considerations we will use a frontal pinhole model, where the retinal plane is between
the camera center and the object. Because the model is linear, all the nonlinearities, like radial
distortions in the image, should be removed before any calculations. We also assume a camera
model with square pixels, correcting any difference of scaling in the axial directions in
preprocessing together with the radial distortions.
Figure 2.7 Two views of the frontal pinhole model of the camera: a) frontal view, b) top view.
'R' – retinal plane, 'M' – mirror plane, the optical axis is the line perpendicular to 'R' through 'C', the hinge 'g'
is the line of intersection of the mirror plane with the retinal plane, 'f' – focal distance between 'C' and 'c'
measured in pixel units, the horizon 'h' is the line perpendicular to the hinge 'g' through 'e', 'φ' – angle between
the mirror plane and the retinal plane
Considering this model with all our assumptions, there are only three intrinsic
parameters left to determine: the coordinates (uc, vc) of the principal point 'c', defined in
pixel coordinates by the line perpendicular to the retinal plane 'R' from the camera center 'C',
and the focal length 'f', measured in pixels and defined as the distance between the principal
point 'c' and the camera center 'C'. And three extrinsic parameters: the mirror angle 'φ'
between the retinal plane and the mirror plane, the shortest distance 'd' from the center of the
camera 'C' to the mirror plane, and the camera angle 'θ', defined as the angle between the
horizon 'h' and the 'u' axis of the image (see Figure 2.8).
The mirror is represented as a mirror plane 'M'; the cross-section of the mirror plane and the
retinal plane creates the hinge 'g'.
The horizontal plane 'H', perpendicular to the hinge 'g' and crossing the camera center 'C',
defines the horizon 'h' of the image. The line perpendicular to the mirror plane and passing
through the camera center, together with the horizon, defines the mirror pole, also called the
vanishing point. We should prevent the situation when the mirror angle φ = 0, because then
the mirror pole and the principal point coincide, e = c.
Figure 2.8 Axes of the horizontal pixel coordinates U and V and the horizontal standard coordinates X, Y, Z
• Different coordinate frames
For our calculations we use four coordinate frames:
- pixel coordinate frame: referenced to the image captured from the digital camera,
usually with the origin in the upper left corner and units in pixels. Coordinates are
denoted by (u, v).
- horizontal pixel coordinate frame: the pixel coordinate frame rotated so that the
U axis is parallel to the X axis, covering the horizon. Coordinates are denoted by
(uh, vh).
- horizontal standard coordinate frame: referenced to the principal point 'c', where
the origin is placed, with the focal distance as unit (pixel coordinates are divided by
the focal distance 'f' to simplify further calculations). Coordinates are denoted by
(x, y, 1). The X axis is parallel to the horizon and should be oriented in such a way
that the mirror pole lies on its negative part.
- camera referenced 3D frame: referenced to the camera center 'C', with the focal
distance as unit. Coordinates are denoted by (x, y, z); the retinal plane is defined as
the plane z = 1.
In the following text, points on the image plane will be denoted by 'n' for points of the
object, 'n'' for points of the object reflection and 'n''' for points from the mirror projection.
Upper case letters will be used for points in the real space 3D coordinate frame: respectively
'N' for object points and 'N'' for the object reflection.
• Calibration matrix
As follows from our assumptions, the intrinsic calibration matrix 'K' is defined by
three calibration parameters: the focal distance 'f' and the coordinates (uc, vc) of the principal
point 'c'. To convert the pixel coordinate frame to the horizontal pixel coordinate frame we
introduce the rotation matrix 'Rθ' {Ref. 5,6}.

K = \begin{bmatrix} f & 0 & u_c \\ 0 & f & v_c \\ 0 & 0 & 1 \end{bmatrix}   (2.1),
R_\theta = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}   (2.2)
To convert the coordinates of a point from the pixel frame to the horizontal standard frame
we simply multiply them by (K R_\theta)^{-1}:

\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = (K R_\theta)^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}   (2.3),

\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \frac{1}{f} \begin{bmatrix} \cos\theta & -\sin\theta & v_c\sin\theta - u_c\cos\theta \\ \sin\theta & \cos\theta & -u_c\sin\theta - v_c\cos\theta \\ 0 & 0 & f \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}   (2.4)
From the above equations an interesting observation follows. In the camera
referenced 3D frame (to which the 3D reconstruction will be referenced, with the origin
placed in the camera center 'C'), the retinal plane 'R' is described by the equation z = 1, so
every image point 'n' with coordinates (x, y, 1) in the horizontal standard coordinate frame
has the same coordinates (x, y, 1) in the camera referenced 3D frame. From our model
assumptions, the point 'N' in 3D real world space lies on the ray Cn and has coordinates
k(x, y, 1). Taking the problem reversely, if (X, Y, Z) are camera referenced real world
coordinates, then they can be considered as homogeneous coordinates of the image 'n' of 'N',
so if Z ≠ 0 then the point 'n' is a finite point of the retinal plane 'R' with horizontal standard
coordinates (X/Z, Y/Z, 1).
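As an illustration, this conversion can be written in a few lines of Matlab. This is a minimal
sketch; the parameter values for f, uc, vc and θ below are placeholders, not calibration results:

    % Sketch: converting a pixel point to the horizontal standard frame, eq. (2.3)
    f  = 1500;             % focal distance in pixels (assumed value)
    uc = 640;  vc = 512;   % principal point in pixels (assumed values)
    th = 0.05;             % camera angle theta in radians (assumed value)
    K  = [f 0 uc; 0 f vc; 0 0 1];                           % eq. (2.1)
    Rt = [cos(th), sin(th), 0; -sin(th), cos(th), 0; 0 0 1]; % eq. (2.2)
    n_pix = [753.27; 25.53; 1];    % an image point (u, v, 1)
    n_std = (K*Rt) \ n_pix;        % (x, y, 1) in the horizontal standard frame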
• Mirror matrix and mirror constraint
Because, like the camera, the mirror is an object of our interest, let us define the
matrix representing the reflection with respect to the mirror plane 'M'. We also define an
additional mirror plane 'Mo', parallel to 'M' and passing through the camera center 'C',
which will help in this determination.

Figure 2.9 Top view of the frontal pinhole model of the camera with the reflection 'C'' of the camera
center 'C' with respect to the mirror plane 'M'

The reflection can be decomposed into a translation and a linear part. The linear part
corresponds to the reflection with respect to the plane 'Mo', and in the camera referenced real
world frame it can be described by multiplication by the matrix 'So':

S_o = \begin{bmatrix} \cos 2\varphi & 0 & \sin 2\varphi \\ 0 & 1 & 0 \\ \sin 2\varphi & 0 & -\cos 2\varphi \end{bmatrix}   (2.5)

Following this idea, if 'C'' is the reflection of 'C' with respect to 'M', then:

C' = (-2d\sin\varphi,\ 0,\ 2d\cos\varphi)   (2.6)
Finally, since the reflection with respect to the mirror plane 'M' can be expressed as the
reflection with respect to 'Mo' followed by the translation by the vector 'C'', we can join
both operations together in one so-called mirror matrix 'S', acting as S(N) = S_o N + C',
which in homogeneous block form reads:

S = \begin{bmatrix} S_o & C' \\ \mathbf{0}^T & 1 \end{bmatrix}
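A minimal Matlab sketch of this decomposition, assuming the reconstructed forms of
equations (2.5) and (2.6) above and placeholder values for 'φ' and 'd':

    % Sketch: mirror reflection in the camera referenced 3D frame, eqs. (2.5)-(2.6)
    phi = pi/4;    % mirror angle (placeholder value)
    d   = 0.5;     % distance from 'C' to 'M' in focal units (placeholder value)
    So = [cos(2*phi), 0,  sin(2*phi);
          0,          1,  0;
          sin(2*phi), 0, -cos(2*phi)];        % linear part, eq. (2.5)
    Cp = [-2*d*sin(phi); 0; 2*d*cos(phi)];    % translation vector C', eq. (2.6)
    S  = [So, Cp; 0 0 0 1];                   % mirror matrix in homogeneous form
    N  = [0.2; 0.1; 1.5; 1];                  % a homogeneous 3D point
    Np = S * N;                               % its mirror reflection N'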
3. 2D homography
The direct linear transformation (DLT) algorithm for 2D homography computes the
projective transformation of one 2D plane to a different 2D plane. This algorithm is usually
used to bring planes of interest in the image to a frontal view, or the reverse; in our research
we will use it to calculate calibration parameters and also to neglect some nonlinearities in
the image.

We assume a set of four points 'ni' in a plane 'n', no three of them collinear, and
assume that they are visible to us so we can determine their coordinates ni (ui, vi, 1). These
points also form a pattern known to us, represented by points 'n'i' on a plane 'n''. Since the
points of the projective plane 'n' are in correspondence with the other projective plane 'n''
by a projective mapping, algebraically this means that the homogeneous coordinates of points
'ni' transform to the homogeneous coordinates of points 'n'i' by a homography matrix 'H'.
This equation holds only for homogeneous vectors with the same direction, which may differ
in magnitude by a nonzero scalar 'λ'. So we have to write:
H n_i = \lambda n'_i \quad \text{where} \quad H = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}   (3.1)
If we substitute ni = (ui, vi, 1) and n'i = (u'i, v'i, 1) and apply the cross product with
n'i = (u'i, v'i, 1)^T to both sides, 'λ' is eliminated and we obtain three linear homogeneous
equations with the unknowns hj (for j = 1,…,9), where the third equation is linearly dependent
on the first and second one.

\begin{bmatrix} 0 & 0 & 0 & -u_i & -v_i & -1 & v'_i u_i & v'_i v_i & v'_i \\ u_i & v_i & 1 & 0 & 0 & 0 & -u'_i u_i & -u'_i v_i & -u'_i \end{bmatrix} \begin{bmatrix} h_1 \\ \vdots \\ h_9 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}   (3.2)
So the four points of the model plane n'i (u'i, v'i, 1) and the four points from the image
ni (ui, vi, 1) give us a system of eight linear homogeneous equations with the unknowns hj.
Since no three of the 'n'i' are collinear, this system of equations has rank eight and it is
sufficient to determine the matrix 'H' up to a global factor. Because of the noise in the image
it is recommended for our calculations to use more than four correspondences, as many as
possible; in our situation we will use all points from our calibration pattern. With 'k'
correspondences we have a coefficient matrix 'M' of size 2k by 9 for the system of equations
with unknowns hj {Ref. 5}.

M = \begin{bmatrix} 0 & 0 & 0 & -u_1 & -v_1 & -1 & v'_1 u_1 & v'_1 v_1 & v'_1 \\ u_1 & v_1 & 1 & 0 & 0 & 0 & -u'_1 u_1 & -u'_1 v_1 & -u'_1 \\ \vdots & & & & & & & & \vdots \\ 0 & 0 & 0 & -u_k & -v_k & -1 & v'_k u_k & v'_k v_k & v'_k \\ u_k & v_k & 1 & 0 & 0 & 0 & -u'_k u_k & -u'_k v_k & -u'_k \end{bmatrix}   (3.3)
Solving the system using singular value decomposition, we obtain a non-zero solution
in the form of a vector with the coefficients of the searched homography matrix 'H'.

M = U \cdot D \cdot V^T   (3.4)

When we represent the singular value decomposition as in the equation above, the solution
for the matrix 'H' is the last column of the matrix 'V', where 'U' is a unitary matrix and 'D'
is a diagonal matrix of the same dimensions as 'M', with nonnegative diagonal elements in
decreasing order.
It is recommended that the image plane be normalized so that the midpoints of the image
plane and the model plane are similar, and the biggest distance of all points to this origin is
less than the square root of two.
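A compact Matlab sketch of the whole DLT computation described above; 'dlt_homography'
is a hypothetical helper name (not code from the appendix), and the points are assumed to be
already normalized as recommended:

    function H = dlt_homography(p, pp)
    % Sketch of the DLT solution, eqs. (3.2)-(3.4)
    % p, pp : 2 x k matrices of corresponding points (u_i, v_i) and (u'_i, v'_i)
    k = size(p, 2);
    M = zeros(2*k, 9);
    for i = 1:k
      u = p(1,i);  v = p(2,i);  up = pp(1,i);  vp = pp(2,i);
      M(2*i-1,:) = [0, 0, 0, -u, -v, -1,  vp*u,  vp*v,  vp];
      M(2*i,  :) = [u, v, 1,  0,  0,  0, -up*u, -up*v, -up];
    end
    [U, D, V] = svd(M);             % M = U*D*V', eq. (3.4)
    H = reshape(V(:,end), 3, 3)';   % last column of V, rows (h1 h2 h3; ...)

A point (u, v) is then mapped by w = H*[u; v; 1] and rescaled by w = w/w(3).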
Below we can see an example of a homography matrix 'H' computed for an object plane:
the points of the plane itself, its frontal view, and the model used to calculate it
(see Figure 3.1).

H = \begin{bmatrix} -0.3916 & 0.1272 & 0.0006 \\ -0.2408 & -0.5991 & -0.0029 \\ 0.0575 & -0.0301 & -0.6396 \end{bmatrix}
Figure 3.1 Plane of object points (blue dots) and its frontal view (green dots) obtained using the
homography matrix calculated for it. The model plane which was used to calculate the homography matrix
is plotted with black crosses
4. Minimizing the distortions from the image
The accuracy of the whole procedure depends mostly on the quality of the picture and
on how precisely we process the image to fit the pinhole model of the camera. The
mathematical pinhole model is only the projection of a point in space onto the retinal plane
from the center of the camera, so it is a linear model. This means that before we make any
calculations we should get rid of all nonlinear disturbances from the image.

Radial distortions are responsible for the biggest part of these disturbances, and the
causes which create them are well known to us. So to avoid them it is enough to model them
and use some optimization technique to choose the coefficients such that the error produced
by them is minimal.
There are also many sources of error which are very hard to estimate and impossible to
model, like the disturbances of the used mirror, or simply the numerical error in calculating
the coordinates of points of the calibration pattern. Fortunately they all produce a very small
error, and we will use the 2D homography correction method and the mirror pole
optimization to avoid them or to minimize their influence.
- Radial distortion correction
Radial distortion, created by the different magnification of the image at different
distances from the axis of the lens and by astigmatism, disturbs the image strongly, so it is
very important to correct this error very carefully.
The idea is simple: to obtain the undistorted image we multiply the coordinates of each
pixel by a modeled function of the radius 'r' from the center of the radial distortion (see
equations 4.1 to 4.5). To obtain better results we use different models for the 'u' and 'v'
coordinates.
pd (ud, vd) – coordinates of the distorted pixel
p (u, v) – coordinates of the undistorted pixel
rdc (uc, vc) – coordinates of the center of radial distortion
Mu(r) – model of distortion for the u coordinate
Mv(r) – model of distortion for the v coordinate

u = u_c + M_u(r)\,(u_d - u_c)   (4.1)
v = v_c + M_v(r)\,(v_d - v_c)   (4.2)
r = \sqrt{(u_d - u_c)^2 + (v_d - v_c)^2}   (4.3)

For the models Mu(r) and Mv(r) we use a Taylor series expansion with one
significant property, M(0) ≈ 1, and limit it to the third power, because further terms have a
very small influence on the accuracy of the algorithm.

M_u(r) = a_1 + a_2 r + a_3 r^2 + a_4 r^3   (4.4)
M_v(r) = b_1 + b_2 r + b_3 r^2 + b_4 r^3   (4.5)
In this way we obtain new coordinates of the pixels without distortions. After the
computation for all pixels we will notice that the image is stretched; the picture becomes
bigger, so to keep the original size of the image we have to cut the borders.
Because the image is a discrete function and the model function is a continuous one, some
pixels after correction and rounding can occupy the same place, so we lose a small part of the
information in this way. Our image will also contain some blank spaces; to fill them we
calculate the median value of the neighborhood (see Figure 4.4).
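A Matlab sketch of the per-pixel correction, assuming the model coefficients a, b and the
distortion center (uc, vc) are already known:

    % Sketch: undistorting one pixel with the radial model, eqs. (4.1)-(4.5)
    % a, b : coefficient vectors [a1 a2 a3 a4] and [b1 b2 b3 b4] (assumed given)
    % uc, vc : center of radial distortion (assumed given)
    ud = 900;  vd = 100;                          % a distorted pixel (example values)
    r  = sqrt((ud - uc)^2 + (vd - vc)^2);         % eq. (4.3)
    Mu = a(1) + a(2)*r + a(3)*r^2 + a(4)*r^3;     % eq. (4.4)
    Mv = b(1) + b(2)*r + b(3)*r^2 + b(4)*r^3;     % eq. (4.5)
    u  = uc + Mu*(ud - uc);                       % eq. (4.1)
    v  = vc + Mv*(vd - vc);                       % eq. (4.2)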
To create the best model of the radial distortions we follow the given procedure. First
we take an image of a calibration grid and obtain the coordinates of the centers of gravity of
the objects which create it. Special care should be taken to place the calibration object parallel
to the camera, so that the image of the object is not in perspective view; then the calibration
objects cover more or less the whole surface of the image and the radial distortion model
parameters are calculated precisely for the whole area of the image.
Figure 4.1 Two images of calibration grids a) grid created from 11x11 square objects and b) grid of
19x19 circular objects. Both pictures are taken with the same camera settings to compare which objects are
better for the radial distortion model calculations
For further calculations we advise using the calibration pattern created from
squares, for several reasons. It is easier to detect edges and automatically calculate centers
for a square shape than for a circular one, and the calculations are more stable, which can
be proved by simple computations. We simply separate the middle vertical column of the grid,
calculate the coefficients of the line which best fits its points, and for every point calculate
the distance to this line. Calculating the mean value of the distance to this line and the
standard deviation, we can estimate which object is better (see Table 4.1).
                              Grid of squares    Grid of blobs
Mean value [pixel]            0.1522             0.3020
Standard deviation [pixel]    0.1256             0.2388
Table 4.1 Mean value of the distance of the centers of objects to the line passing through them, and its
standard deviation, for a grid of 11x11 squares and a grid of 19x19 spots
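The stability test described above can be sketched in Matlab as follows; the use of polyfit and
the variable layout are assumptions, not the thesis code:

    % Sketch: stability test behind Table 4.1
    % pts : n x 2 matrix with the centers (u_i, v_i) of the middle vertical
    %       column of calibration objects (assumed given)
    p = polyfit(pts(:,2), pts(:,1), 1);     % best fitting line u = p(1)*v + p(2)
    dist = abs(p(1)*pts(:,2) - pts(:,1) + p(2)) / sqrt(p(1)^2 + 1);
    disp([mean(dist) std(dist)])            % the two values compared in Table 4.1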
During the research process, when trying to get better stability of the calibration
algorithm, two methods of finding parameters for the radial distortion model were invented
and tested: the first one was pattern grid reconstruction, the second pattern geometry
reconstruction.
The idea of pattern grid reconstruction is to reconstruct the grid as it should look
without radial distortions. Because the distortions are smallest in the middle of the picture,
we take the distance between the middle two points of the middle column of objects as a
reference. Also using the points of the middle column, we calculate the angle by which the
calibration grid is rotated with reference to the image axis 'u'. Using these two pieces of
information we reconstruct the grid of the calibration pattern as it should look without radial
distortions. It is enough now to find such parameters of the model that the difference between
the reconstructed pattern grid and the calculated pattern is the smallest.
Figure 4.2 The plot represents the positions of the points before the radial distortion removal (red dots)
and after it (green dots), the reconstructed calibration grid (blue crosses), and the central point of radial
distortion marked with a black cross
When we have the coordinates of the 'n' calibration objects calculated from the image,
pi (upi, vpi), and those which we reconstructed, ri (uri, vri), we can define the function which
we will minimize to find the optimal parameters of the model.

u_i = u_c + M_u(r)\,(u_{pi} - u_c)   (4.6)
v_i = v_c + M_v(r)\,(v_{pi} - v_c)   (4.7)
r = \sqrt{(u_{pi} - u_c)^2 + (v_{pi} - v_c)^2}   (4.8)

In the equations above (4.6, 4.7), for the functions Mu(r) and Mv(r) we substitute equations
4.4 and 4.5 respectively.
F = \sum_{i=1}^{n} \left( (u_i - u_{ri})^2 + (v_i - v_{ri})^2 \right)   (4.9)

Now, minimizing the function 'F', we look for the optimal values of the following variables:
the center of the radial distortions (uc, vc) (with the starting point in the center of the image)
and the coefficients [a1 a2 a3 a4] and [b1 b2 b3 b4] (for both with starting points [1 0 0 0]).
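A Matlab sketch of this search; 'radial_cost' is a hypothetical helper and the anonymous
function call uses a syntax newer than Matlab 6.5:

    % Sketch: searching the model parameters by minimizing F, eq. (4.9)
    % P : n x 2 measured centers (u_pi, v_pi); R : n x 2 reconstructed grid
    % points (u_ri, v_ri); both assumed given.
    x0 = [640 512 1 0 0 0 1 0 0 0];                  % [uc vc a1..a4 b1..b4]
    x  = fminsearch(@(x) radial_cost(x, P, R), x0);

    function F = radial_cost(x, P, R)
    uc = x(1);  vc = x(2);  a = x(3:6);  b = x(7:10);
    r  = sqrt((P(:,1) - uc).^2 + (P(:,2) - vc).^2);                 % eq. (4.8)
    u  = uc + (a(1) + a(2)*r + a(3)*r.^2 + a(4)*r.^3) .* (P(:,1) - uc);
    v  = vc + (b(1) + b(2)*r + b(3)*r.^2 + b(4)*r.^3) .* (P(:,2) - vc);
    F  = sum((u - R(:,1)).^2 + (v - R(:,2)).^2);                    % eq. (4.9)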
For the pattern grid reconstruction method we obtain the results shown in Table 4.2.
Experiments were made with the PHILIPS Inca 311 camera with a resolution of 1280x1024
pixels (see chapter 6).
Mu(r):   a1 = 0.9960154318228    a2 = -0.0000198040083    a3 = 0.0000000522360    a4 = 0.0000000000500
Mv(r):   b1 = 1.0061756365458    b2 = -0.0000942254838    b3 = 0.0000002431012    b4 = -0.0000000000912
uc = 641.7754    vc = 509.0369
Table 4.2 Values of the parameters of the radial distortion models and the coordinates of the center of
radial distortion, calculated using the pattern grid reconstruction algorithm
In pattern geometry reconstruction the idea is to bring the rows and columns of pattern
points into straight lines. The biggest advantage of pattern geometry reconstruction is that it
does not need any data other than the coordinates of the points of the calibration grid. It uses
only the simple fact that in reality the calibration pattern points in every row and column
create perfect straight lines.
Figure 4.3 The plot presents the positions of the points before the radial distortion removal (red dots)
and after it (green dots), and the central point of radial distortion marked with a black cross
Finally, having all parameters of our model of radial distortions, using equations 4.1 to
4.5 we can restore the image as it should look without any disturbances.
Figure 4.4 Result of the radial distortion removal algorithm: a) original distorted picture, b) picture after
recalculating the positions of all pixels, c) picture after restoration of its original resolution, d) picture after
median filtering
- Homography correction
There are many distortions in the image which we cannot measure or model in any
manner. Some error is created during the calculation of the coordinates of our calibration
grid, the mirror itself is not perfectly flat, the reflection of the object can be disturbed in some
way, and the CCD sensor of the camera can also be a source of error. Altogether they produce
a bigger or smaller effect on the accuracy of our procedure.
But using the fact that we know the geometry of the calibration pattern perfectly, with the
2D homography we can to some extent neglect all of these nonlinearities without knowing
the mechanism of their creation.

The calculation of the homography matrix 'H' is the same as described in chapter 3;
to calculate it we use the coordinates obtained from the image. Having this matrix, we
recalculate the coordinates of all points (u, v, 1) using the points (up, vp, 1) of the planar
geometrical model.

\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix}   (4.16)
The homography matrix should be calculated separately for the object and the mirror
reflection. The results of the 2D homography correction are presented in Table 4.4.

Figure 4.5 Dots present the points of the calibration object and its mirror reflection before the 2D
homography correction; crosses present the points after the correction. The change is not significant, but it
restores the geometry of the calibration pattern
The following table shows exemplary data for one of our calibration pictures: the
homography matrix 'H' for the object plane, and the 'u' and 'v' coordinates before and after
the 2D homography correction.

H = \begin{bmatrix} -0.3916 & 0.1272 & 0.0006 \\ -0.2408 & -0.5991 & -0.0029 \\ 0.0575 & -0.0301 & -0.6396 \end{bmatrix}
Before 2D homography correction After 2D homography correction
u                v                u                v
753.2659 25.5276 752.8638 26.3765
828.7765 60.8509 828.0358 60.7551
910.1132 98.3507 910.3024 98.3782
1001.5088 139.8043 1000.7175 139.7279
1101.2258 184.1600 1100.5551 185.3868
730.8510 148.0549 730.8856 147.8688
803.5616 187.7725 803.3536 186.8014
881.8567 230.1468 882.4957 229.3197
968.9695 276.8071 969.2783 275.9427
1063.9518 327.0922 1064.8641 327.2951
709.6388 264.1000 709.8589 264.1011
779.8578 307.7382 779.7860 307.1559
855.5207 354.4177 856.0011 354.0823
939.0063 405.5429 939.3925 405.4272
1030.0988 460.9489 1031.0238 461.8454
689.3885 374.9755 689.7232 375.4079
757.2719 422.3277 757.2591 422.1954
830.6126 473.1347 830.7280 473.0934
910.9229 528.5447 910.9477 528.6682
998.6135 588.8842 998.8937 589.5955
670.0147 481.1723 670.4232 482.0956
735.8371 532.0111 735.7055 532.2646
807.1965 587.0008 806.5937 586.7417
884.6000 646.5927 883.8422 646.1067
969.2026 711.6704 968.3474 711.0483
Table 4.4 Values of the coordinates of the object plane points before and after the 2D homography correction
- Mirror pole optimization
Recalling from chapter 2 (see equation 2.9) that having two points n1, n2 and their
mirror reflections n'1, n'2 in the image, which are the projections of points N1, N2 in space
and of their mirror reflections N'1, N'2, we can easily calculate the mirror pole e(ue, ve).
Unfortunately, due to the noise in the image we cannot determine its coordinates precisely.

Figure 4.6 Lines for every possible pair of points 'n' and 'n''. We can easily observe that it is impossible to
determine the mirror pole 'e' precisely

To solve this problem we will again use nonlinear optimization, manipulating the coordinates
of the mirror pole 'e' and of the pairs of points 'n', 'n'' of the object and its mirror reflection.
We do this by finding the line 'l' going through the mirror pole which minimizes the distance
to the object point 'n' and its reflection 'n''; optimizing the coordinates of the mirror pole,
the object and its reflection, we minimize the sum of these distances over all possible pairs of
points.
When we have the calculated coordinates of the mirror pole 'e', we have to calculate
the coefficients of the line 'l' (see Figure 4.8). To simplify the equations we move the origin
to the mirror pole 'e', recalculating the coordinates of the points:

(u_1, v_1) = (u_n - u_e,\ v_n - v_e)   (4.17)
(u_2, v_2) = (u_{n'} - u_e,\ v_{n'} - v_e)   (4.18)

The equation of the line 'l' in the translated coordinates then has the simple form:

v = a\,u   (4.19)
- Mirror pole correction
The idea from the mirror pole optimization can also be used to improve the
calculations of our measuring procedure: when we have finally calculated precise coordinates
of the mirror pole 'e', why not use them to correct the coordinates of the object and its
reflection.

Figure 4.8 Mirror pole correction uses the fixed position of the mirror pole 'e' to correct the coordinates of
the object point 'n' and its mirror reflection 'n''

Having now a fixed position of the mirror pole 'e', we again calculate the coefficient of the
optimal line 'l' (using equations 4.17 to 4.21) which minimizes the distance of the pair of
points 'n' and 'n'' to it, and we project those points onto this line, recalculating their
coordinates with the following equations, derived from a simple linear system of two
equations: 'l' and the line perpendicular to 'l' passing through the point 'n' or 'n'' {Ref. 11}:
u_{ci} = \frac{u_i + a_i\,v_i}{a_i^2 + 1}   (4.23)

v_{ci} = a_i\,\frac{u_i + a_i\,v_i}{a_i^2 + 1}   (4.24)
To obtain the best precision, the mirror pole correction is performed after the radial
distortion correction.
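A short sketch of this projection step (coordinates are assumed to be already translated to the
mirror pole, and 'a' is the slope of the optimal line found above):

    % Sketch: projecting a point onto the optimal line 'l', eqs. (4.23)-(4.24)
    t  = (u + a*v) / (a^2 + 1);   % position along the line direction (1, a)
    uc = t;                       % eq. (4.23)
    vc = a * t;                   % eq. (4.24)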
5. Calibration of the camera
- Mirror pole
The calculation of the mirror pole 'e' is the first step in the calculation of the intrinsic
parameter matrix, and the stability of the whole calibration procedure depends on the
stability of the calculation of the mirror pole and the vanishing mirror line.
To calculate the mirror pole 'e' we use the fact that all parallel lines of a real scene intersect
in one point in the image. On our calibration object, each pair consisting of an object point
n(ui, vi) and its mirror reflection n'(u'i, v'i) creates such a line. Having the equations of every
line connecting the pairs of points, we use least squares approximation to get the optimal
solution.
Figure 5.1 Plot presenting every line connecting a pair of points from the object and its mirror reflection;
the black circle presents the solution of the least squares approximation for equation 5.3
At this point the theory is clear, but the practical implementation shows that some
solutions are better than others.
At the beginning stage of the research we used the following equations for obtaining the
coordinates of the mirror pole:
u_e - \frac{u_2 - u_1}{v_2 - v_1}\,v_e = u_2 - \frac{u_2 - u_1}{v_2 - v_1}\,v_2   (5.1)

\frac{v_2 - v_1}{u_2 - u_1}\,u_e - v_e = \frac{v_2 - v_1}{u_2 - u_1}\,u_2 - v_2   (5.2)
Because the least squares approximation minimizes the distance in one direction only, we use
the first equation (5.1) to calculate the 'ue' coordinate and the second equation (5.2) to
calculate the 've' coordinate of the mirror pole 'e'.
During the experiments, better stability was obtained using the following
normalized equation:

\frac{(v_1 - v_2)\,u_e + (u_2 - u_1)\,v_e}{\sqrt{(u_2 - u_1)^2 + (v_2 - v_1)^2}} = \frac{u_2 v_1 - u_1 v_2}{\sqrt{(u_2 - u_1)^2 + (v_2 - v_1)^2}}   (5.3)
                          Results using equations 5.1 and 5.2    Results using equation 5.3
                          Mean         Std                       Mean         Std
Mirror pole   ue          -1824.89     31.15                     -1831.40     14.35
              ve          496.96       5.22                      496.96       5.22
Table 5.1 Mirror pole coordinates for the two calculation methods (without horizontal correction)
When searching for other algorithms for calculating the mirror pole with the biggest
stability, we performed tests with the mirror pole optimization of the coordinates of the object
and its reflection (see chapter 4 – Mirror pole optimization). Using such an approach we
obtained a similar stability of the calculation of the mirror pole as in the case of equation 5.3,
but of course the advantage of this method is the coordinate correction of the calibration
object and its reflection points. The only drawback is that the optimization process itself is
time consuming.
- Vanishing mirror line
The vanishing mirror line is the line of intersection of the plane 'MO' ('MO' is the
plane parallel to the mirror plane 'M' and passing through the camera center 'C') and the
retinal plane 'R' (see Figure 2.9). Based on it and the mirror pole 'e' we will calculate the
coordinates of the central point 'c' and the mirror angle 'φ'. To calculate the vanishing mirror
line we first need a projection of object points on the mirror plane.
• Projection of object on the mirror
To obtain the orthographic mirror projection in the image we use two object points
'n1' and 'n2' and their mirror reflections 'n'1' and 'n'2'. For homogeneous coordinates, using
equations 5.4 to 5.8, we calculate the projection points 'n''1' and 'n''2'. The meaning of
these equations is visualized in Figure 5.2.
The first step is the calculation of the coordinates of the points 't1' and 't2':

t_1 = n_1 n_2 \wedge n'_1 n'_2   (5.4)
t_2 = n_1 n'_2 \wedge n_2 n'_1   (5.5)

Next we can observe that the points 'n''1' and 'n''2' also belong to the line t1t2, so we can write:

t_1 t_2 = n''_1 n''_2   (5.6)

Finally we calculate the coordinates of the mirror projections 'n''1' and 'n''2':

n''_1 = n_1 n'_1 \wedge t_1 t_2   (5.7)
n''_2 = n_2 n'_2 \wedge t_1 t_2   (5.8)
Figure 5.2 Orthographic projection on the mirror plane with the use of the pair of points 'n1' and 'n2' and
their mirror reflection points 'n'1' and 'n'2'
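Since joining two points and intersecting two lines are both cross products in homogeneous
coordinates, equations (5.4) to (5.8) translate directly into Matlab; a sketch, with the pairing
in (5.5) following the reconstruction above:

    % Sketch: mirror projection by homogeneous cross products, eqs. (5.4)-(5.8)
    % n1, n2, n1p, n2p : points (u, v, 1)' as 3 x 1 column vectors (assumed given)
    t1   = cross(cross(n1, n2),  cross(n1p, n2p));    % eq. (5.4)
    t2   = cross(cross(n1, n2p), cross(n2,  n1p));    % eq. (5.5)
    n1pp = cross(cross(n1, n1p), cross(t1,  t2));     % eq. (5.7)
    n2pp = cross(cross(n2, n2p), cross(t1,  t2));     % eq. (5.8)
    n1pp = n1pp / n1pp(3);   n2pp = n2pp / n2pp(3);   % back to (u, v, 1)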
Working in situations with a lot of noise, like ours, it is better to perform the
calculations on the biggest possible set of data. During our experiments we calculated the
projection for every possible pair of points. In this way we obtain, instead of one solution, a
set of points, which proves that we cannot trust only one of them. So we neglect the points
which stand out with the biggest error and calculate the mean value from the remaining ones.
Figure 5.3 Projection points calculated for every possible pair of points of a calibration object (blue
color represents the projection points, red circles represent the median for each set of these points)
• Calculation of vanishing mirror line
To calculate the vanishing mirror line we use the two points at infinity of
perpendicular directions and the homography matrix 'H' calculated for the mirror plane
(see chapter 3 for how to calculate the homography matrix).

H \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} h_1 \\ h_4 \\ h_7 \end{bmatrix} \quad \text{and} \quad H \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} h_2 \\ h_5 \\ h_8 \end{bmatrix}   (5.9)
In this way we obtain the coordinates of the vanishing points, which lie on the vanishing
mirror line. It can be easily observed that those coordinates are in fact two columns of the
homography matrix.
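This makes the computation of the vanishing mirror line a one-liner; a sketch:

    % Sketch: vanishing mirror line from the homography of the mirror plane
    L = cross(H(:,1), H(:,2));   % line through the two vanishing points, eq. (5.9)
    L = L / norm(L(1:2));        % so that L(1)*u + L(2)*v + L(3) is a distance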
• Horizontal correction
Having the mirror pole 'e' and the vanishing mirror line 'L' we can now determine the
camera angle and use it to calculate horizontal pixel coordinates. The horizon is the line
going through the mirror pole 'e' perpendicularly to the vanishing mirror line 'L'; the camera
angle 'θ' is the angle between the 'u' axis of the image and the horizon.
Figure 5.4 Determining the horizon, L – vanishing mirror line, e – mirror pole, θ – camera angle
To calculate the horizontal pixel coordinates nhi(uh, vh, 1) of any image point
ni(u, v, 1), we multiply its pixel coordinates by the inverse of the rotation matrix:

\begin{bmatrix} u_h \\ v_h \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}   (5.10)
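A sketch of the horizontal correction, assuming the vanishing mirror line 'L' is given by its
line coefficients (the way 'θ' is extracted here is one possible choice, not quoted from the
appendix code):

    % Sketch: camera angle and horizontal correction, eq. (5.10)
    % L : vanishing mirror line as coefficients [a; b; c] of a*u + b*v + c = 0;
    % the horizon is perpendicular to 'L', so its direction is the normal (a, b)
    theta = atan2(L(2), L(1));   % camera angle between 'u' axis and horizon
    Rti = [cos(theta), -sin(theta), 0; sin(theta), cos(theta), 0; 0 0 1];
    nh  = Rti * [u; v; 1];       % horizontal pixel coordinates of a point (u, v)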
- Mirror angle and principal point
Before the calculation of the angle 'φ' between the mirror plane 'M' and the retinal
plane 'R', we start with a partial calibration of the image plane. This means that after the
horizontal correction, when the horizon is parallel to the 'u' axis of the image, we perform a
vertical translation to make the 'u' axis equal to the 'x' axis, though with different origins.
After that we can assume coordinates of the mirror pole 'e' equal to (ue, 0), coordinates
(uL, 0) for the intersection of the horizon with the vanishing mirror line, and (uc, 0) for the
still unknown central point 'c'.
Figure 5.5 Model of the camera with all the parameters calculated
For this data we get the following relations:

u_c - u_e = f\,\tan\varphi
u_L - u_c = \frac{f}{\tan\varphi}   (5.11)

But having three unknowns, 'uc', 'f' and 'φ', these two equations are not enough.
At this point homography again helps us to solve the problem, introducing a new parameter:

\omega = u_c^2 + f^2   (5.12)
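One step can be sketched here as an observation following directly from the relations above
(a hedged completion, not quoted from the original derivation): multiplying the two relations
of (5.11) gives

(u_c - u_e)(u_L - u_c) = f^2 = \omega - u_c^2

which is linear in u_c once \omega is known, so that

u_c = \frac{\omega + u_e u_L}{u_e + u_L}, \qquad f = \sqrt{\omega - u_c^2}, \qquad \tan\varphi = \frac{u_c - u_e}{f}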
- 3D reconstruction and scale factor
Having all intrinsic and extrinsic parameters, we convert object points from the pixel
coordinate frame to the horizontal standard coordinate frame by multiplying them by
(K R_\theta)^{-1}. At this moment we consider the distance 'd' between the camera center 'C' and
the mirror plane 'M' as an unknown and as a global scale factor of the real 3D coordinates,
and we are able to reconstruct objects in the camera referenced 3D coordinate frame with the
focal distance 'f' as unit. To do this we will translate the intersection method used in stereo
vision to our mirror setting {Ref. 7}.
Figure 5.6 The intersection method for a camera-mirror setting
The point 'N' is a point in 3D space and 'N'' is its mirror reflection; 'n' and 'n'' are
their direct images on the retinal plane 'R', given by the horizontal standard coordinates
(x, y, 1) and (x', y', 1). To derive the camera referenced real world coordinates of 'N' a
simple observation should be made:

if n^* = S(n') then N = Cn \wedge C'n^*   (5.17)

Next, the line Cn is directed by (x, y, 1)^T and the line C'n^* by:

n_o^* = S_o(n')   (5.18)
The coordinates of no* can be computed as:

S_o\,(x', y', 1)^T = (x'\cos 2\varphi + \sin 2\varphi,\ y',\ x'\sin 2\varphi - \cos 2\varphi)^T   (5.19)

So if we use:

N = (k \cdot x,\ k \cdot y,\ k)   (5.20)

then 'N' can be computed by solving the following system of linear equations with 'k' and 'l'
as unknowns:

k\,x = (x'\cos 2\varphi + \sin 2\varphi)\,l - 2d\sin\varphi
k\,y = y'\,l
k = (x'\sin 2\varphi - \cos 2\varphi)\,l + 2d\cos\varphi   (5.21)
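A Matlab sketch of the reconstruction of a single point, treating (5.21) as an overdetermined
linear system in 'k' and 'l' (variable names are assumptions):

    % Sketch: solving eq. (5.21) and reconstructing 'N', eq. (5.20)
    % x, y   : horizontal standard coordinates of the image point 'n'
    % xp, yp : horizontal standard coordinates of the reflection 'n''
    % phi, d : mirror angle and camera-mirror distance (all assumed given)
    A  = [x, -(xp*cos(2*phi) + sin(2*phi));
          y, -yp;
          1, -(xp*sin(2*phi) - cos(2*phi))];
    b  = [-2*d*sin(phi); 0; 2*d*cos(phi)];
    kl = A \ b;                 % least squares solution for [k; l]
    N  = kl(1) * [x; y; 1];     % reconstructed point, eq. (5.20)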
To scale the reconstructed points to the true distances from the camera center, that is,
to scale from the camera referenced 3D coordinate frame to the camera referenced real world
frame, we also need to calculate the global scale factor 'd'. To do this we need information
about the real dimensions of the calibration object. It is very important that these dimensions
are determined very accurately, because the error of this measurement propagates to all
reconstructed real world point coordinates and causes errors in the calculated dimensions.

To calculate the scaling factor 'd' we take the true length 'dim' between two points
'N1' and 'N2' of the calibration pattern and divide it by the distance between 'n1' and 'n2'
calculated after 3D reconstruction in the camera referenced 3D coordinate frame.
d = \frac{\mathrm{dim}}{\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}}   (5.22)
It is better to use more such pairs from the object and its mirror reflection and take the mean
value. For reconstruction in the camera referenced 3D coordinate frame we substitute 'd'
equal to one.
6. Calibration and measuring algorithm
In a few words I should mention what equipment I performed all my experiments on and,
in consequence, how it influenced the results of my research. All the images were captured
using the Philips Inca 311 camera, which is designed as a compact vision solution for
industrial applications {Ref. 12}. It can be used for quality assurance, alignment, pattern
verification, object tracking and all kinds of measurement applications.
Figure 6.1 Philips Inca 311 camera
The camera is equipped with the monochrome PC2112-LM sensor from Zoran. It is a high
performance CMOS imaging sensor with an extremely uniform pixel array and low fixed-
pattern noise thanks to its Distributed Pixel Amplifier architecture. At the output it gives
images with a maximum resolution of 1280x1024 pixels with a 10 bit grey level scale.
- Calibration procedure and setting the scene
Before we start the calibration of our measuring system we have to think about how to
construct a calibration pattern which fulfills several requirements. It should be relatively
easy to calculate the coordinates of the pattern objects, so we do not need a sophisticated
automated system to deal with it. It should be geometrically easy to build a mathematical
model of it, give the possibility to determine certain wanted properties, and be symmetric.
And it should consist of about twenty to thirty objects, so that it supplies a large enough
amount of data for the calculations to neglect the noise, and not too many, so that the time of
the calculations remains relatively small.
During the research, generally three types of calibration patterns were used: blobs
uniformly distributed on a circle, a grid of lines and a grid of squares (see Figure 6.2).
At the beginning, a calibration pattern with blobs uniformly placed on a circle, usually with
twelve or twenty four blobs, was used for several reasons. It is very easy to model
mathematically, and it allows determining lines with a specific wanted angle between them,
which was crucial at that stage of my research. But it has one significant drawback: it has to
be printed very accurately to keep its symmetry. When printed on a common printer the
pattern was scaled in one direction, so the distances between opposite points were different
and the calibration procedure was corrupted.
Figure 6.2 Different calibration patterns: a) blobs uniformly placed on the circle, b) grid of lines,
c) grid of squares
At a later stage, when the requirements changed, I started to use a grid of five vertical and five horizontal lines. It gives twenty five calibration points, which is enough to minimize the noise, for example in the 2D homography calculations, and it ensures quite fast calculations, especially for the mirror pole optimization, where too big a number of coordinates can enlarge the execution time. When it is printed on commonly used printers and happens to be scaled in one direction, it is not a problem to correct this during the calibration procedure. But it also has a very important drawback: because the coordinates of the intersection points are used in the calculations, it is difficult to build an accurate system to determine them.
The best calibration pattern for the algorithm described in my work is the grid of squares. It has all the advantages of the grid of lines and it is very simple to calculate the coordinates of its objects. It is also the pattern commonly used in many camera calibration procedures found in the literature, for example by Zhengyou Zhang {Ref. 8}.
Setting the scene of calibration, in which we will perform the measurements, correctly is also of big importance. It is obvious that the camera should capture the whole calibration pattern and its mirror reflection in the image.
The angle 'φ' between the mirror plane 'M' and the retinal plane 'R' is of particular importance. When the angle 'φ' increases, the distance of the mirror pole 'e' to the central point 'c' also increases and the distance of the vanishing mirror line 'L' to the central point 'c' decreases. And reversely: when the angle 'φ' decreases, the distance of the mirror pole 'e' to the central point 'c' also decreases and the distance of the vanishing mirror line 'L' to the central point 'c' increases (see Figure 2.7). When the angle 'φ' is bigger than 45°, the accuracy of calculation of the mirror pole 'e' decreases significantly compared to the accuracy of calculation of the vanishing mirror line 'L', and when it is smaller we have the opposite situation. But we have to keep in mind that the accuracy of calculation of those two values has a direct impact on the accuracy of calculation of all intrinsic and extrinsic parameters and of the whole calibration procedure. It seems that the best situation is obtained when the angle between the retinal plane 'R' and the mirror plane 'M' equals 45°; then the distance of the vanishing mirror line 'L' to the central point 'c' and the distance of the mirror pole 'e' to the central point 'c' are equal and the accuracy of calculation of both of them is similar.
So, setting up the measuring scene, we have to remember that the positioning of the mirror has a big impact on the accuracy of the measuring results.
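As a rough numerical check of this trade-off, the pole appears to lie at a distance of about f·tan φ from the central point and the vanishing mirror line at about f / tan φ. This relation is an observation read off from the calibration numbers reported in chapters 6 and 7 (for instance f = 2236.48 and φ = 45.76° give |e - c| ≈ 2296.7 pixels), not a formula stated in the text:

% Illustration of the pole / vanishing-line trade-off; the relations
% |e-c| = f*tan(phi) and |L-c| = f/tan(phi) are an assumption consistent
% with the reported calibration results, not taken from the text.
f = 2236.48;                       % focal distance [pixels] from the example
for phi = [30 45 60] * (pi/180)
    dist_e = f * tan(phi);         % distance of mirror pole 'e' to 'c'
    dist_L = f / tan(phi);         % distance of vanishing line 'L' to 'c'
    fprintf('phi = %2.0f deg: |e-c| = %7.1f px, |L-c| = %7.1f px\n', ...
            180/pi*phi, dist_e, dist_L);
end

At φ = 45° both distances equal f, which matches the statement above that this is the balanced setting.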
Preparing for the calibration we also have to determine the dimensions of the objects which will be measured in the future. Any change in the settings of the camera, focus or zoom, can change the intrinsic calibration parameters, so after calibration we should not change any camera settings any more. In consequence, to obtain good parameters of the image, the measured object should be at the same distance from the camera as the calibration pattern and, moreover, have similar dimensions to it.
Now, having all the important information, I will describe the proposed camera calibration algorithm, which was developed during the research.
1. Perform radial distortion correction using the model obtained for the given camera.
2. Perform mirror pole optimization.
It is best to calculate the mirror pole 'e' at the beginning (using equation 5.3) and use it as a starting point for the optimization.
3. Perform homography correction.
4. Calculate the mirror pole 'e'.
5. Compute the projection of the calibration pattern on the mirror plane.
6. Calculate the vanishing mirror line 'L'.
7. Calculate the camera angle 'θ'.
8. Translate all the calculated parameters and object points to the horizontal pixel coordinate frame.
9. Calculate the mirror angle 'φ'.
10. Calculate the intrinsic parameters of the camera: the coordinates of the central point 'c' and the focal distance 'f'.
11. Translate the object points to the horizontal standard coordinate frame.
12. Use the triangulation method to translate the calibration pattern points to the camera referenced 3D frame.
13. Using information about the dimensions of the known calibration pattern, calculate the scale factor 'd'.
To see the implementation in Matlab 6.5 please refer to appendix.
Performing all these calibration steps we calculate all the parameters needed for 3D reconstruction for measuring purposes. We will use all the intrinsic parameters, the focal distance 'f' and the central point 'c', and one extrinsic parameter, the camera angle 'θ'. To perform mirror pole correction we will also use the mirror pole 'e'.
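In the implementation the whole procedure is wrapped by one main function (see Table 8.1 in chapter 8 and the appendix). A typical invocation could look as follows; the meaning of the flags op and hm (switching the mirror pole optimization and the homography correction on) is my assumption based on the interface description in chapter 8:

% A sketch of invoking the calibration pipeline; function names are taken
% from Table 8.1 and the appendix, the flag semantics are assumed.
name  = 'labview.BMP';     % image with the pattern and its mirror reflection
type  = 'grid';            % type of the calibration pattern
dimen = 106.5;             % known pattern dimension [mm] (the case study uses 60)
op = 1; hm = 1;            % assumed flags: pole optimization, homography
rdistortionremoval(name);  % step 1: radial distortion correction
[enh,e,c,f,skale,L,alfa,psi,Rot] = main(name,type,dimen,op,hm); % steps 2-13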
The biggest source of errors in our image is the lens. Usually it is the lens which distorts the image the most, depending on its quality. Commonly used simple cameras are fitted with very cheap lenses which, through several optical phenomena, change and deflect the measured object on the image almost irreversibly.
The first phenomenon is chromatic aberration. Chromatic aberration arises from dispersion, the property that the refractive index of glass differs with wavelength (see Figure 2.3). There are two types of chromatic aberration: longitudinal aberration and lateral aberration.
Figure 2.3 Chromatic aberration
- Longitudinal chromatic aberration causes different wavelengths to focus on different image
planes.
- Lateral chromatic aberration is the color fringing that occurs because the magnification of
the image differs with wavelength.
There are several ways of removing chromatic aberration. For production, very exotic glasses with very low dispersion can be used, like the "Hi-UD" glass produced by Canon {Ref. 3}. One can use a lens with a very big focal distance, so the light does not have to be refracted so much, or use a system of two or three lenses with different types of glass, so the aberration of one lens is corrected by another. But all those solutions are very expensive and usually we will have to neglect this by preprocessing {Ref. 2}.
Most photographic lenses are composed of elements with spherical surfaces. Such
elements are relatively easy to manufacture, but their shape is not ideal for the formation of a
sharp image. Spherical aberration is an image imperfection that is due to the spherical lens
shape; Figure 2.4 illustrates the aberration for a single, positive element. Light that hits the lens close to the optical axis is focused at position 'c'. The light that traverses the margins of the lens comes to a focus at a position 'a' closer to the lens.
- Example on a real data
Using the equations from the previous chapters and the algorithm described in my work, let us follow one example on real data. We assume that we know the radial distortion model and will concentrate on the calibration computations and measurements. For calibration purposes we take a picture of a grid of lines (see Figure 6.3) and afterwards we will measure the dimensions of a cube (see Figure 6.4).
The pixel coordinates of the intersections of the grid of lines before any corrections are presented below for the object Od = (u, v) and its reflection Rd = (u, v).
Od = { (750.83, 36.22) ( 824.59, 70.40) (903.91, 107.15) (992.57, 148.24) (1088.29, 192.59)
(729.50, 153.63) (801.00, 192.51) (877.88, 234.32) (962.99, 280.61) (1054.91, 330.60)
(708.96, 266.72) (778.35, 309.78) (852.96, 356.07) (934.78, 406.85) (1023.20, 461.72)
(689.13, 375.84) (756.50, 422.87) (828.93, 473.43) (907.62, 528.35) (992.73, 587.75)
(669.98, 481.22) (735.44, 531.92) (805.81, 586.42) (881.54, 645.08) (963.51, 708.57) }
Rd = { (389.42, 101.66) (310.03, 154.82) (234.67, 205.29) (162.50, 253.62) (93.62, 299.75)
(430.02, 193.78) (348.88, 245.18) (272.19, 293.75) (198.63, 340.34) (128.58, 384.71)
(471.70, 288.37) (388.78, 337.96) (310.73, 384.63) (235.76, 429.46) (164.52, 472.06)
(513.98, 384.30) (429.27, 432.14) (349.89, 476.97) (273.52, 520.11) (201.11, 561.01)
(557.13, 482.22) (470.60, 528.25) (389.85, 571.20) (312.04, 612.59) (238.44, 651.74) }
Figure 6.3 Image of the grid of lines calibration pattern (intersection points marked with red dots, their reflections marked with green dots and the projection of the calibration pattern on the mirror plane marked with blue dots)
After removing radial distortions, mirror pole optimization and homography correction, we obtain the following coordinates of the calibration pattern 'Oc' and its reflection 'Rc', and we can start the calculation of the camera calibration parameters.
Oc = { (752.38, 26.03) (827.78, 60.61) (910.22, 98.41) (1000.71, 139.91) (1100.50, 185.67)
(730.67, 147.65) (803.38, 186.71) (882.71, 229.33) (969.59, 276.01) (1065.17, 327.37)
(709.90, 263.98) (780.08, 307.11) (856.50, 354.07) (940.02, 405.39) (1031.68, 461.71)
(690.01, 375.37) (757.82, 422.17) (831.50, 473.04) (911.87, 528.51) (999.88, 589.26)
(670.95, 482.12) (736.51, 532.26) (807.63, 586.64) (885.05, 645.84) (969.64, 710.52) }
Rc = { (382.64, 93.08) (300.80, 146.05) (222.30, 196.86) (146.95, 245.63) (74.56, 292.49)
(425.77, 188.93) (342.63, 240.15) (262.91, 289.26) (186.39, 336.40) (112.90, 381.68)
(469.32, 285.73) (384.87, 335.15) (303.90, 382.54) (226.20, 428.01) (151.58, 471.68)
(513.30, 383.48) (427.51, 431.07) (345.27, 476.70) (266.38, 520.46) (190.62, 562.49)
(557.72, 482.20) (470.57, 527.93) (387.04, 571.75) (306.93, 613.78) (230.01, 654.14) }
For the corrected pixel coordinates of the calibration pattern and its reflection, the described algorithm yields the following parameters.
Mirror pole e = ( -1833.50, 511.82 ).
Central point c = ( 463.21, 511.82 ).
Focal distance f = 2236.48.
Camera angle θ = 0.59 º.
Mirror angle φ = 45.76 º.
Now, by means of the intersection method, we reconstruct the coordinates of the calibration object and calculate its dimension in the vertical direction (between points 1-21, 2-22, 3-23, 4-24, 5-25 for the object and between points 1'-21', 2'-22', 3'-23', 4'-24', 5'-25' for the reflection) up to a global scaling factor.
D = { 0.3093 0.3091 0.3089 0.3086 0.3084 }
D’ = { 0.3088 0.3089 0.3091 0.3092 0.3093 }
Knowing the real dimension of the calibration pattern, which equals dim = 106.5 mm, we calculate the scaling factor using the mean value of all dimensions in the vertical direction. The unit in which we express the dimension of the calibration object is important, because it will also determine the units in which the coordinates of the camera referenced real world frame will be expressed.
Scaling factor d = 344.65.
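The scaling factor can be checked directly from the values printed above; the small difference against 344.65 comes from the rounding of the listed distances (a sketch with hypothetical variable names):

% Recomputing the scale factor (5.22) from the example values.
D   = [0.3093 0.3091 0.3089 0.3086 0.3084];   % object, vertical direction
Dp  = [0.3088 0.3089 0.3091 0.3092 0.3093];   % mirror reflection
dim = 106.5;                                  % true pattern length [mm]
d   = dim / mean([D Dp])                      % gives approximately 344.7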
After the calculation of all camera calibration parameters, using the same settings, we can start our measurements. As an example we will calculate the dimensions of a cube.
Figure 6.4 Image of the measured object (marked with red dots) and its mirror reflection (marked with green dots)
From the measuring image we obtain the following pixel coordinates of four corners of the cube, Od (u, v), and its reflection, Rd (u, v).
Od = { (999, 690) (877, 767) (994, 773) (937, 923) }
Rd = { (477, 656) (409, 720) (284, 704) (432, 846) }
After removing radial distortions and mirror pole correction we obtain the following pixel coordinates.
Oc = { (1012.77, 683.48) (888.63, 761.97) (1009.29, 768.77) (954.03, 923.39) }
Rc = { (481.60, 651.45) (412.36, 718.20) (283.32, 703.15) (436.16, 846.93) }
Now, using the intersection method, we calculate the coordinates of the object RO (x, y, z) and its reflection RR (x, y, z) in camera referenced real world coordinates. These coordinates refer to the same unit which was used in the calculation of the scaling factor, so they are expressed in [mm].
RO = { (123.44, 38.56, 502.37) (91.90, 54.03, 483.13) (109.50, 51.52, 448.46)
(107.44, 90.09, 489.59) }
RR = { (5.07, 38.56, 617.64) (-13.31, 54.03, 585.59) (-48.44, 51.52, 602.26)
(-7.27, 90.09, 601.30) }
Having the coordinates of the object points we can easily calculate the dimensions of the cube by calculating the distances between its corners. We also have the coordinates of the mirror reflection, so we can calculate the dimensions of the reflection, which should be exactly the same.
Dimensions of the cube:
dim2-1 = 40.05 mm
dim2-3 = 38.96 mm
dim2-4 = 39.79 mm
Dimensions of reflection of the cube:
dim2’-1’ = 40.05 mm
dim2’-3’ = 38.96 mm
dim2’-4’ = 39.79 mm
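These values follow directly from the reconstructed coordinates; a minimal Matlab check (with a hypothetical variable RO holding the four reconstructed corners):

% Cube edge lengths as distances between reconstructed corners [mm].
RO = [ 123.44  38.56  502.37;
        91.90  54.03  483.13;
       109.50  51.52  448.46;
       107.44  90.09  489.59 ];
dim21 = norm(RO(2,:) - RO(1,:))    % approximately 40.05
dim23 = norm(RO(2,:) - RO(3,:))    % approximately 38.96
dim24 = norm(RO(2,:) - RO(4,:))    % approximately 39.79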
The real dimension of the edge of the cube was 40 mm. The accuracy of the measurements which we perform will be discussed in the next chapter.
7. Accuracy measures
All error calculations are based on an experimental approach using multiple data for calculating the mean value and standard deviation. It is very important to work on precise data, so we have to know very precisely the dimensions of the calibration objects and carefully calculate the coordinates of the objects on the image.
- Description of the experimental accuracy measures and procedures
As the calibration pattern we use a grid of five thin horizontal and five thin vertical lines (see Figure 7.1 a)). Using twelve images of such calibration patterns we estimate the error of each step of the calculation and the stability of the whole calibration procedure.
To determine the influence of the position of the measured object on the image and of the distance to the camera, we use sixty pictures of a rod (see Figure 7.1 b)) positioned in different places on the image, at different angles and at different distances to the camera, relative to the distance of the camera to the calibration pattern.
To estimate the absolute error of the method and the influence of the size of the calibration pattern relative to the size of the measured object, we use twelve pictures of a ruler (see Figure 7.1 c)).
Figure 7.1 On the figure we see: a) the calibration pattern – used to calculate all the calibration parameters of the camera, b) the image of the rod – used to estimate the influence of the position of the object on the accuracy, c) the image of the ruler – by measuring different distances on it we can estimate how the accuracy is related to the size of the calibration pattern
- Accuracy of calibration procedure
Because there is no single common factor that evaluates the efficiency of each method, we estimate it by the stability of the calculations of the calibration procedure. The following table shows the results calculated for twelve calibration patterns, from which we calculate the mean value and standard deviation of each parameter. The best results were obtained after the pattern geometry reconstruction based radial distortion correction with mirror pole optimization and 2D homography correction.
Parameter        Procedure 1           Procedure 2           Procedure 3
                 Mean       Std        Mean       Std        Mean       Std
e: u          -1764.82     27.04    -1831.77     13.24    -1827.80     14.03
   v            498.47     93.64      509.24     18.76      510.03      7.20
c: u            -46.77    153.33      441.54     71.70      476.01     27.78
   v            498.47     93.64      509.24     18.76      510.03      7.20
uL             3970.12    453.01     2701.82    151.93     2641.55     54.04
d               435.36     30.16      349.42     13.94      341.76      6.59
f              2614.35    110.65     2262.12     76.21     2232.96     26.35
φ                33.30      2.59       45.14      1.92       45.89      0.71
θ                 0.02      3.02        0.37      0.64        0.40      0.30
Table 7.1 Comparison of stability of calculation for three different calibration algorithms
Procedure 1 – without any distortion correction.
Procedure 2 – with pattern grid reconstruction based radial distortion correction.
Procedure 3 – with pattern geometry reconstruction based radial distortion correction, mirror pole optimization and 2D homography correction of the object and its mirror reflection.
Parameters:
'e' (u, v) – horizontal coordinates of the mirror pole
'c' (u, v) – horizontal coordinates of the central point
'uL' – 'u' coordinate of the vanishing mirror line
'd' – scale factor based on the known dimension of the calibration object
'f' – focal length
'φ' – angle between the mirror plane and the camera retinal plane
'θ' – angle of horizontal correction
As previously mentioned, the biggest error is produced by the radial distortions, so it is obvious that they will also cause the biggest error when we do not remove them. The following table shows the results of measuring the rod, whose true length is 57.5 [mm], and one cm on the ruler (10 [mm]), with and without correction of radial distortions.
                                 Without radial          With radial
                                 distortion correction   distortion correction
Rod     Mean value [mm]                53.09                   57.05
        Standard deviation [mm]         2.12                    1.10
Ruler   Mean value [mm]                 9.61                    9.83
        Standard deviation [mm]         0.19                    0.13
Table 7.2 Comparison of the stability of the calculated dimensions of the measured object.
The calculations were performed on the sixty images of the rod and the twelve images of the ruler, placed in different positions and at different distances to the camera.
From the above table it is clear that the removal of radial distortions is crucial to this procedure and that the error without removing radial distortions is unacceptable for measuring purposes.
- Influence of position of measured object
It is also worth mentioning that we cannot remove one hundred percent of all nonlinear distortions. The biggest part of the nonlinear error is the one which results from radial distortions. Because the model of the radial distortions is a function of the radius, the smallest error is at its center and the error increases the farther we are from it. The following table shows how the accuracy changes when taking measurements of an object in the middle of the image and at its borders.
                                 Object in the middle   Object at the borders
                                 of the image           of the image
Rod     Mean value [mm]                57.14                  57.09
        Standard deviation [mm]         0.91                   1.33
Table 7.3 The influence of the position of object on the image on the accuracy of the measurements
It is also important to perform the calibration of the camera in circumstances similar to those in which the camera will afterwards make measurements. The following table shows how the accuracy changes when changing the distance of the measured object to the camera relative to the distance of the calibration pattern. First the measurement of the rod was performed at the same distance as the calibration pattern; we then increased and decreased the distance to the camera.
                                 Same distance as    Increased distance   Decreased distance
                                 the calibration     to the camera        to the camera
                                 pattern
Rod     Mean value [mm]                56.86               56.45                56.85
        Standard deviation [mm]         0.95                0.99                 1.29
Table 7.4 The influence of the distance of the object to the camera on the accuracy of the measurements, relative to the position of the calibration pattern.
- Accuracy of distance measuring
When it comes to the practical use of the camera calibration procedure, the first assumption should be made about the dimensions of the measured object, and the calibration pattern should be prepared accordingly. It should be clear that it is impossible to measure objects which are smaller on the image than one pixel. Also, after setting up the camera, the object should be well focused, so it should be at approximately the same distance from the camera as the calibration object was, and it should fit the view, so it should not be too big.
To determine how big the measured object should be relative to the calibration pattern, we perform the calibration of the camera on a grid with dimensions 106.6x106.5 [mm] and measure objects with dimensions in the range of 10 – 140 [mm]. As presented in Figure 7.2, the accuracy stabilizes and reaches its maximum when approaching the diameter of the calibration pattern. The absolute error value tends to 0.91 % and, for the calibration object diameter, the accuracy equals 0.97 [mm].
Figure 2.4 Spherical aberration
In this manner the focus position depends on the zone of the lens that is considered. When the marginal focus is closer to the lens than the axial focus, as exhibited by the positive element (see Figure 2.4), we observe undercorrected spherical aberration. Conversely, when the marginal focus is located beyond the axial focus, the lens suffers from overcorrected spherical aberration.
Ideally, a photographic lens images the world in a plane, where it is recorded by a
sensor. Typically, the sensor is either an approximately flat film or a strictly flat digital array.
Departures from a flat image surface are associated with astigmatism and field curvature, and
lead to a spatial mismatch between the image and the sensor.
As a result, the sensor samples a part of space in front of or behind the sharp image, and its
representation of the image will thus be blurred. Owing to the closely connected natures of
astigmatism and field curvature, it is convenient to treat these Seidel aberrations together.
In the absence of spherical aberration and coma, a lens that is additionally free of
astigmatism offers stigmatic imaging, i.e. points in object space are imaged as true points
somewhere in image space. (Strictly speaking this is correct for one color of light only, since
chromatic aberrations lead to blurring too.) A lens that suffers from astigmatism, however,
does not offer stigmatic imaging. In the presence of astigmatism the rendering of an object
detail depends on the orientation of that detail. For instance, a (short) line oriented towards the
image center is called a sagittal (radial) detail, whereas a detail perpendicular to the radial
direction is called a tangential detail. The astigmatic lens may be focused to yield a sharp
image of either the sagittal or the tangential detail, but not simultaneously.
With a real lens, the sagittal and tangential focal surfaces are in fact curved (see Figure 2.5). This figure displays the astigmatism of a simple lens. Here, the sagittal 'S' and tangential 'T' images are paraboloids which curve inward to the lens.
52
measured object when changing the resolution from 1024:1280 pixels on 512:640 pixels the
error of determined coordinates doubles. The smallest the resolution the error is increasing. In
Table 7.5 we can see how the accuracy of measured rod decrease when decrease the
resolution of captured images.
Building an automated process, we will also deal with the error of the segmentation or edge detection of the image used to obtain the coordinates of the object and its mirror reflection. Assuming that the error due to this preprocessing equals one pixel, we perform the following experiment. For the image of the rod from Figure 7.1 b) we add the value of one pixel to the 'v' coordinate of one end: first only to the image of the object itself, then to its reflection, then to both, and we calculate the dimension for all the cases. Because the object on the image is placed vertically, we in fact enlarge it by one pixel.
Pixels added to the 'v' coordinate      Measured dimension   Difference from the
Object          Reflection of object    [mm]                 original image [mm]
+ 0             + 0                          56.5449               0.0
+ 1             + 0                          56.6582               0.1132
+ 0             + 1                          56.6885               0.1436
+ 1             + 1                          56.8018               0.2568
Table 7.6 The influence of a change of the pixel coordinates of the object or its reflection on the calculated dimension, for one image of the rod
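From these numbers we can also estimate the size of one pixel at the object distance (my own arithmetic derived from Table 7.6, not a computation from the text):

% One pixel on one endpoint changes the rod length by 0.11-0.14 mm, so at
% this distance one pixel corresponds to roughly 0.13 mm and the rod spans
% on the order of 440 pixels in the image.
mm_per_px = mean([56.6582 56.6885] - 56.5449)   % approximately 0.128 mm
rod_px    = 56.5449 / mm_per_px                 % approximately 440 pixels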
Errors introduced in all our calculations also come from the limitations of the digital image representation.
Summarizing: the measured object should be at the same distance from the camera as the calibration pattern, both should be of a similar size, and the object should be placed in the middle of the picture.
It is also important to note that the diameter of the calibration pattern has its own error. In our situation it was printed on a common inkjet printer. To verify its dimension we simply measured its diameters with an accuracy of up to 0.2 mm. This error also propagates to all our calculations.
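A quick estimate of how this propagates (my own arithmetic): the relative error of the pattern dimension enters the scale factor, and hence every measured length, directly.

% Relative error of the calibration pattern diameter carried over to all
% measured lengths through the scale factor 'd'.
rel_err = 0.2 / 106.5      % approximately 0.0019, i.e. about 0.19 percent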
8. Case study
Using the results of my research, the described algorithms and ideas, I prepared a simple complete system. The task of the system is to measure the length of bolts. As the measuring device I use a simple internet camera with a resolution of 320x240 pixels and poor lens quality; it was configured to capture black and white images with 256 grey levels. The camera was connected to the USB port of a PC class computer. The software is written in the LabView 7.1 application, which makes it possible to use the prepared Matlab scripts and to connect this application to an external system.
The user interface of the system is very simple. It displays the view from the camera and the calculated parameters of the camera calibration procedure, the system calibration procedure and finally the length of the measured object.
It allows choosing which corrections to the measurements should be performed: we can turn on or off radial distortion removal, object optimization or homography correction while calibrating our system. When measuring, we can turn on or off radial distortion removal and mirror pole correction.
Below the screen with the view from the camera we should enter the working path for storing temporary files during the execution of the program.
Figure 8.1 Calibration of the camera pattern with calculated parameters of radial distortions
To perform radial distortion correction, first we have to calculate its parameters for the used camera. To do this we use a pattern with a grid of squares which more or less fills the whole view of the camera.
After pressing the button "Calibrate camera", first the image stored in the temporary file "labview.bmp" is analyzed and the coordinates of the calibration pattern squares are stored in the file "distortedpatern.mat". Then the parameters for the camera are calculated and stored in the temporary file "distortionmodel.mat".
These parameters are later used when calibrating the system and/or during the measurements. This step can be omitted when we won't use radial distortion removal during the measuring procedure and the system calibration procedure.
Figure 8.2 Calibration of the system pattern with calculated parameters of the system
Before starting the measurements we have to set up the scene correctly: determine the position of the mirror and the camera which will later allow us to capture the measured object and its mirror reflection in the camera view. Then we can choose which corrections should be performed in the calculations and, after pressing the "Calibrate system" button, the image stored in the temporary file "labview.bmp" is analyzed and the coordinates of the calibration pattern squares and their mirror reflection are stored in the file "labview.mat".
After that, all parameters of the system are calculated, stored in the file "calibrationmatrix.mat" for later measurements and displayed on the screen.
Figure 8.3 Measuring the bolt
When all the calibration parameters of the camera and the system are known to us, we can start the measurements. The object should be placed more or less in the same position as the calibration pattern and in such a manner that it is visible to the camera together with its mirror reflection. After pressing the button "Measure object", the temporary image from the camera "labview.bmp" is analyzed and the coordinates of the far sides of the bolt are stored in the file "labview.mat". Then, according to the set parameters, the measuring procedure is performed and the result is displayed on the screen.
In the case presented in Figure 8.3 we measure a bolt with a true length of 45 mm; my system gives the result of 43 mm. The error of 4.5 % comes from the very poor quality of the used camera.
Capturing the images from the camera is done in four steps. The image from the camera is not streamed directly to the interface but captured every second, stored in a temporary file on the disk and displayed from it.
In the step with index zero a capture window for the camera is created and the window handle is passed on as its output. Step number one is a simple time delay; because the image capturing operation is time consuming, it is necessary to introduce it. In the next step, with index two, the image whose window handle was given in the first step is stored on the disc in the specified file and directory with the specified attributes. The last step, with index three, destroys the capture window and displays the image from the temporary file on the screen.
Figure 8.4 Capturing the image from the camera
The algorithm of the whole interface is quite simple. After pressing a button the appropriate Matlab script is executed. The appropriate interface options (such as the flags rd, rd2, op, hm and mp appearing in the scripts below) are transferred to the script and, after the script is executed, its outputs are displayed on the interface.
Calibrate camera script:

cd c:\webcam
name = 'labview.BMP';
paterncreation(name);
[ p_opt ] = distortionoptim;
du = p_opt(1);
dv = p_opt(2);
ax1 = p_opt(3);
ax2 = p_opt(4);
ax3 = p_opt(5);
ax4 = p_opt(6);
ay1 = p_opt(7);
ay2 = p_opt(8);
ay3 = p_opt(9);

Calibrate system script:

cd c:\webcam
name = 'labview.BMP';
type = 'grid';
dimen = 60;
getgrid(name);
if rd > 0
    rdistortionremoval(name);
end
[enh,e,c,f,skale,L,alfa,psi,Rot] = main(name,type,dimen,op,hm);
eu = e(1), ev = e(2);
cu = c(1), cv = c(2);

Measure object script:

cd c:\webcam
name = 'labview.BMP';
getextrema(name);
if rd2 > 0
    rdistortionremoval(name);
end
load('calibrationmatrix.mat');
[ Do , Dop , objectdim ] = measure(name,enh,Rot,skale,alfa,c,f,1,mp)
Table 8.1 Matlab scripts executed after choosing appropriate action
Figure 8.5 Algorithm diagram of application interface
The presented scripts execute only the main files and transfer the interface options to the functions. The contents of all functions used in the Matlab scripts are presented in the appendixes.
9. Summary
- Accomplished goals
Summarizing my master thesis: according to the goals designated at the beginning, I fulfilled them one hundred percent. I presented a ready algorithm for the calibration of a digital camera for 3D reconstruction and an algorithm for measuring purposes, performed experiments with different methods of calculating the camera parameters, implemented algorithms neglecting the nonlinear errors created in the lens system to enhance the accuracy of the system, and finally built a ready exemplary system based on those algorithms. My master thesis also gives a solid background for future papers in this area of study.
- Proposals for the future research
The biggest part of the experiments presented in my master thesis, and the data collected for them, were performed and gathered at the KDG university in Antwerp, Belgium, in the Industrial Vision Laboratory, as a part of the BOF project led by Luc Mertens and Rudi Penne. The whole practical part was made in the winter semester of the academic year 2004/2005 and, because of lack of time, some ideas and theses remain unfinished and unimplemented. But for future research it is worth mentioning them.
From Table 7.1, in procedure 3, we can observe that the calculations of the mirror pole 'e' and the central point 'c' are much more stable and precise for the 'v' coordinate than for the 'u' coordinate. This is caused by the fact that we calculate the mirror pole 'e' for one direction, which is convergent with the 'u' coordinate. To obtain similar stability for the second coordinate it is possible to use a second mirror and calculate a second mirror pole for the perpendicular direction. Then we can repeat the calculation procedure for the second direction and will obtain better stability for the 'u' coordinate. Finally, for measurements, we combine the results, using the 'v' coordinate of the central point 'c' calculated for the first mirror pole and the 'u' coordinate calculated for the second mirror pole.
In the paper of Christian Brauer-Burchardt and Klaus Voss {Ref. 13} we can read about the vanishing point triangle used to calculate the central point 'c', which is applicable to our system and can be tested as another method of central point calculation.
Having multiple calibration images and some exemplary measurement images, we could perform an optimization of the calculated parameters and, by constructing a minimization function, improve the accuracy of the measurements.
- Future and industrial applications
The possibility of non-contact length measurement has a wide field of applications, for example in industry in many quality measurement systems, where it can drastically increase the speed of such a system. Nowadays such automated systems are delivered for example by KEYENCE and SIEMENS: hardware implemented systems with the possibility of connecting multiple cameras and with a simple programming environment. The drawback of those systems is that the measured object should be precisely placed in a position parallel to the camera. Its dimensions are then calculated in pixels and multiplied by a user given factor which scales the measured distance from pixels to the wanted length unit. This method limits the range of possible applications and can be used for measurements in two dimensions only.
My master thesis can also be the starting point for further, more sophisticated applications. 3D reconstruction gives a wide range of applications: for example 3D scanners, where the accuracy of reconstruction of course influences the correctness of the representation of the scanned object; architectural applications, for the virtual reconstruction of buildings used in monument renovation works; or vision guided robot systems.
Figure 2.5 Simple lens with undercorrected astigmatism. T - tangential surface; S - sagittal surface;
P - Petzval surface
As a consequence, when the image center is in focus the image corners are out of
focus, with tangential details blurred to a greater extent than sagittal details. Although off-axis
stigmatic imaging is not possible in this case, there is a surface lying between the ‘S’ and ‘T’
surfaces that can be considered to define the positions of best focus.
The surface 'P' (see Figure 2.5) is the Petzval surface, named after the mathematician Joseph Miksa Petzval. It is a surface that is defined for any lens, but it does not relate directly to the image quality – unless astigmatism is completely absent. In the presence of astigmatism the image is always curved (whether it concerns S, T, or both), even if P is flat.
All these phenomena together cause quite big distortions in our image, which result in radial distortions (see Figure 2.6) and finally in an error of our measurement. The most commonly observed are pillow or barrel distortions, which are easy and almost completely possible to remove, but sometimes the distortions are more complex and we observe wave distortions. Because the way they are created is well known to us, we can easily model and remove them by recalculating the positions of all pixels in the image.
Figure 2.6 Different kinds of radial distortions a) barrel, b) pillow, c) wave
But the problem of obtaining very good sharpness on the object and its mirror reflection, which usually is almost impossible, makes the object edges blurred. This causes uncertainties in the coordinates of the points which create the calibration object and of the points which determine the edges of the object which we are going to measure. In the case when we are calibrating the camera we can choose such an object that this error is minimized almost to zero, but in the case of measured objects we have to use some edge detection techniques to get better precision of measurement. Due to the mechanical construction of the camera we have to remember that the calibration parameters of the camera change with every change of the camera settings {Ref. 2,3,4}.
Figure 8.2 Calibration of the system pattern with calculated parameters of the system .. 54
Figure 8.3 Measuring the bolt …………………………………………………………... 55
Figure 8.4 Capturing the image from the camera ………………………………………. 56
Figure 8.5 Algorithm diagram of application interface ………………………………… 57
- Index of tables
Table 4.1 Mean value of the distance of centers of objects to the line passing through
them and the standard deviation for it, for a grid of 11x11 squares and a grid of
19x19 spots ………………………………………………………………….. 19
Table 4.2 Value of parameters of radial distortion models and coordinates of center of
radial distortions calculated using pattern grid reconstruction algorithm …... 21
Table 4.3 Value of parameters of radial distortion models and coordinates of center of
radial distortions calculated using pattern geometry reconstruction algorithm 22
Table 4.4 Value of coordinates of points before and after 2D homography correction of
object plane ………………………………………………………………….. 25
Table 5.1 Mirror pole coordinates for two calculation methods ………………………. 30
Table 7.1 Comparison of stability of calculation for three different calibration algorithms
Table 7.2 Comparison of the stability of calculation dimensions of measuring object .. 49
Table 7.3 The influence of the position of object on the image on the accuracy of the
measurements ……………………………………………………………….. 49
Table 7.4 The influence of the distance of object to the camera on the accuracy of the
measurements relatively to the position of the calibration pattern ………….. 50
Table 7.5 The influence of the resolution of the image on the accuracy of the
measurements ……………………………………………………………….. 51
Table 8.1 Matlab scripts executed after choosing appropriate action …………………. 56
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree
dissertation master degree

More Related Content

What's hot

Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...
Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...
Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...aziznitham
 
MasterThesis_LinZhu
MasterThesis_LinZhuMasterThesis_LinZhu
MasterThesis_LinZhuLin Zhu
 
Final Report 9505482 5845742
Final Report 9505482 5845742Final Report 9505482 5845742
Final Report 9505482 5845742Bawantha Liyanage
 
ShawnQuinnCSS565FinalResearchProject
ShawnQuinnCSS565FinalResearchProjectShawnQuinnCSS565FinalResearchProject
ShawnQuinnCSS565FinalResearchProjectShawn Quinn
 
Design and Analysis CMOS Image Sensor
Design and Analysis CMOS Image SensorDesign and Analysis CMOS Image Sensor
Design and Analysis CMOS Image Sensorinventionjournals
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile GraphicsJiri Danihelka
 
Neural Network Toolbox MATLAB
Neural Network Toolbox MATLABNeural Network Toolbox MATLAB
Neural Network Toolbox MATLABESCOM
 
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...toukaigi
 
Vision system for robotics and servo controller
Vision system for robotics and servo controllerVision system for robotics and servo controller
Vision system for robotics and servo controllerGowsick Subramaniam
 
T.O.M 3.0 (Final PRINT)
T.O.M 3.0 (Final PRINT)T.O.M 3.0 (Final PRINT)
T.O.M 3.0 (Final PRINT)Amit Bhakta
 
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET Journal
 
An Application of Stereo Image Reprojection from Multi-Angle Images fo...
An  Application  of  Stereo  Image  Reprojection  from  Multi-Angle Images fo...An  Application  of  Stereo  Image  Reprojection  from  Multi-Angle Images fo...
An Application of Stereo Image Reprojection from Multi-Angle Images fo...Tatsuro Matsubara
 
Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAPArjun Aravind
 

What's hot (16)

Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...
Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...
Dissertation or Thesis on Efficient Clustering Scheme in Cognitive Radio Wire...
 
MasterThesis_LinZhu
MasterThesis_LinZhuMasterThesis_LinZhu
MasterThesis_LinZhu
 
Final Report 9505482 5845742
Final Report 9505482 5845742Final Report 9505482 5845742
Final Report 9505482 5845742
 
ShawnQuinnCSS565FinalResearchProject
ShawnQuinnCSS565FinalResearchProjectShawnQuinnCSS565FinalResearchProject
ShawnQuinnCSS565FinalResearchProject
 
Raida ii
Raida iiRaida ii
Raida ii
 
Design and Analysis CMOS Image Sensor
Design and Analysis CMOS Image SensorDesign and Analysis CMOS Image Sensor
Design and Analysis CMOS Image Sensor
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile Graphics
 
Neural Network Toolbox MATLAB
Neural Network Toolbox MATLABNeural Network Toolbox MATLAB
Neural Network Toolbox MATLAB
 
1886 1892
1886 18921886 1892
1886 1892
 
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill...
 
Vision system for robotics and servo controller
Vision system for robotics and servo controllerVision system for robotics and servo controller
Vision system for robotics and servo controller
 
T.O.M 3.0 (Final PRINT)
T.O.M 3.0 (Final PRINT)T.O.M 3.0 (Final PRINT)
T.O.M 3.0 (Final PRINT)
 
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving CarsIRJET - Steering Wheel Angle Prediction for Self-Driving Cars
IRJET - Steering Wheel Angle Prediction for Self-Driving Cars
 
final (1)
final (1)final (1)
final (1)
 
An Application of Stereo Image Reprojection from Multi-Angle Images fo...
An  Application  of  Stereo  Image  Reprojection  from  Multi-Angle Images fo...An  Application  of  Stereo  Image  Reprojection  from  Multi-Angle Images fo...
An Application of Stereo Image Reprojection from Multi-Angle Images fo...
 
Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAP
 

Viewers also liked

Drager Evita 4, Intensive Care Ventilator.
Drager Evita 4, Intensive Care Ventilator.Drager Evita 4, Intensive Care Ventilator.
Drager Evita 4, Intensive Care Ventilator.ceswyn
 
Biomedical equipment technician skill standards
Biomedical equipment technician skill standardsBiomedical equipment technician skill standards
Biomedical equipment technician skill standardsLệnh Xung
 
Electrocardiogram 2554
Electrocardiogram 2554Electrocardiogram 2554
Electrocardiogram 2554Mew Tadsawiya
 
Therapeutic Ultrasound for Physiotherapy students
Therapeutic Ultrasound for Physiotherapy studentsTherapeutic Ultrasound for Physiotherapy students
Therapeutic Ultrasound for Physiotherapy studentsSaurab Sharma
 
Types of centrifuges
Types of centrifugesTypes of centrifuges
Types of centrifugesShilpa Bhat
 
Tutorial in Basic ECG for Medical Students
Tutorial in Basic ECG for Medical StudentsTutorial in Basic ECG for Medical Students
Tutorial in Basic ECG for Medical StudentsChew Keng Sheng
 
Measurement & calibration of medical equipments
Measurement & calibration of medical equipmentsMeasurement & calibration of medical equipments
Measurement & calibration of medical equipmentsJumaan AlAmri
 
Centrifugation principle and types by Dr. Anurag Yadav
Centrifugation principle and types by Dr. Anurag YadavCentrifugation principle and types by Dr. Anurag Yadav
Centrifugation principle and types by Dr. Anurag YadavDr Anurag Yadav
 
Maintanance & operation of biomedical equipements i
Maintanance & operation of biomedical equipements iMaintanance & operation of biomedical equipements i
Maintanance & operation of biomedical equipements iRolando Perez
 
Lecture 01: Bio medical Equipment Technology
Lecture 01: Bio medical Equipment Technology Lecture 01: Bio medical Equipment Technology
Lecture 01: Bio medical Equipment Technology Asanka Lakmal Morawaka
 
Defibrillator (ppt)
Defibrillator (ppt)Defibrillator (ppt)
Defibrillator (ppt)Nitesh Kumar
 
Biomedical instrumentation PPT
Biomedical instrumentation PPTBiomedical instrumentation PPT
Biomedical instrumentation PPTabhi1802verma
 
centrifuge principle and application
centrifuge principle and applicationcentrifuge principle and application
centrifuge principle and applicationPrakash Mishra
 

Viewers also liked (20)

Calibration
CalibrationCalibration
Calibration
 
Drager Evita 4, Intensive Care Ventilator.
Drager Evita 4, Intensive Care Ventilator.Drager Evita 4, Intensive Care Ventilator.
Drager Evita 4, Intensive Care Ventilator.
 
Biomedical equipment technician skill standards
Biomedical equipment technician skill standardsBiomedical equipment technician skill standards
Biomedical equipment technician skill standards
 
Electrocardiogram 2554
Electrocardiogram 2554Electrocardiogram 2554
Electrocardiogram 2554
 
Profibus PA device calibration and maintenance - Andy Verwer
Profibus PA device calibration and maintenance -  Andy VerwerProfibus PA device calibration and maintenance -  Andy Verwer
Profibus PA device calibration and maintenance - Andy Verwer
 
Therapeutic Ultrasound for Physiotherapy students
Therapeutic Ultrasound for Physiotherapy studentsTherapeutic Ultrasound for Physiotherapy students
Therapeutic Ultrasound for Physiotherapy students
 
Therapeutic ultra sound in physiotherapy
Therapeutic ultra sound in physiotherapyTherapeutic ultra sound in physiotherapy
Therapeutic ultra sound in physiotherapy
 
Types of centrifuges
Types of centrifugesTypes of centrifuges
Types of centrifuges
 
Centrifugation
CentrifugationCentrifugation
Centrifugation
 
Tutorial in Basic ECG for Medical Students
Tutorial in Basic ECG for Medical StudentsTutorial in Basic ECG for Medical Students
Tutorial in Basic ECG for Medical Students
 
Defibrillators
DefibrillatorsDefibrillators
Defibrillators
 
Measurement & calibration of medical equipments
Measurement & calibration of medical equipmentsMeasurement & calibration of medical equipments
Measurement & calibration of medical equipments
 
Centrifugation principle and types by Dr. Anurag Yadav
Centrifugation principle and types by Dr. Anurag YadavCentrifugation principle and types by Dr. Anurag Yadav
Centrifugation principle and types by Dr. Anurag Yadav
 
Maintanance & operation of biomedical equipements i
Maintanance & operation of biomedical equipements iMaintanance & operation of biomedical equipements i
Maintanance & operation of biomedical equipements i
 
Ecg
EcgEcg
Ecg
 
Lecture 01: Bio medical Equipment Technology
Lecture 01: Bio medical Equipment Technology Lecture 01: Bio medical Equipment Technology
Lecture 01: Bio medical Equipment Technology
 
Biochemical analysis instruments
Biochemical analysis instrumentsBiochemical analysis instruments
Biochemical analysis instruments
 
Defibrillator (ppt)
Defibrillator (ppt)Defibrillator (ppt)
Defibrillator (ppt)
 
Biomedical instrumentation PPT
Biomedical instrumentation PPTBiomedical instrumentation PPT
Biomedical instrumentation PPT
 
centrifuge principle and application
centrifuge principle and applicationcentrifuge principle and application
centrifuge principle and application
 

Similar to dissertation master degree

mechatronics lecture notes.pdf
mechatronics lecture notes.pdfmechatronics lecture notes.pdf
mechatronics lecture notes.pdfTsegaye Getachew
 
mechatronics lecture notes.pdf
mechatronics lecture notes.pdfmechatronics lecture notes.pdf
mechatronics lecture notes.pdfLaggo Anelka
 
CCD (Charge Coupled Device)
CCD (Charge Coupled Device)CCD (Charge Coupled Device)
CCD (Charge Coupled Device)Sagar Reddy
 
Seminar report on image sensor
Seminar report on image sensorSeminar report on image sensor
Seminar report on image sensorJaydeepBhayani773
 
3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detectionBruno Sprícigo
 
Project Report Distance measurement system
Project Report Distance measurement systemProject Report Distance measurement system
Project Report Distance measurement systemkurkute1994
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 
Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.NandaVardhanThupalli
 
Thesis_Walter_PhD_final_updated
Thesis_Walter_PhD_final_updatedThesis_Walter_PhD_final_updated
Thesis_Walter_PhD_final_updatedWalter Rodrigues
 
Accident reporting system using mems
Accident reporting system using memsAccident reporting system using mems
Accident reporting system using memsRohit Sinha
 
mechantronics - assignment 1
mechantronics - assignment 1mechantronics - assignment 1
mechantronics - assignment 1Kerrie Noble
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Reza Pourramezan
 

Similar to dissertation master degree (20)

mechatronics lecture notes.pdf
mechatronics lecture notes.pdfmechatronics lecture notes.pdf
mechatronics lecture notes.pdf
 
mechatronics lecture notes.pdf
mechatronics lecture notes.pdfmechatronics lecture notes.pdf
mechatronics lecture notes.pdf
 
CCD (Charge Coupled Device)
CCD (Charge Coupled Device)CCD (Charge Coupled Device)
CCD (Charge Coupled Device)
 
Seminar report on image sensor
Seminar report on image sensorSeminar report on image sensor
Seminar report on image sensor
 
3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection
 
Project Report Distance measurement system
Project Report Distance measurement systemProject Report Distance measurement system
Project Report Distance measurement system
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
Hoifodt
HoifodtHoifodt
Hoifodt
 
Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.Bidirectional Visitor Counter for efficient electricity usage.
Bidirectional Visitor Counter for efficient electricity usage.
 
Intro photo
Intro photoIntro photo
Intro photo
 
Thesis_Walter_PhD_final_updated
Thesis_Walter_PhD_final_updatedThesis_Walter_PhD_final_updated
Thesis_Walter_PhD_final_updated
 
Accident reporting system using mems
Accident reporting system using memsAccident reporting system using mems
Accident reporting system using mems
 
Final_report
Final_reportFinal_report
Final_report
 
mechantronics - assignment 1
mechantronics - assignment 1mechantronics - assignment 1
mechantronics - assignment 1
 
Jung.Rapport
Jung.RapportJung.Rapport
Jung.Rapport
 
FINAL REPORT
FINAL REPORTFINAL REPORT
FINAL REPORT
 
Report
ReportReport
Report
 
Pid
PidPid
Pid
 
main
mainmain
main
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017
 

dissertation master degree

  • 1. SILESIAN UNIVERSITY OF TECHNOLOGY Faculty of Automatic Control, Electronics and Computer Science MASTER THESIS Research on accuracy of geometric reconstruction using digital cameras Author: Marek Kubica Supervisor:dr inż. Henryk Palus Gliwice 2005
  • 2. 2 1. Introduction ……………………………………………………………………….…….. 5 - Purpose of the work ………………………………………………………………….. 5 2. Model of the camera ……………………………………………………………….…… 6 - Construction of the common digital camera and lens defects ……………………….. 6 - Description of geometric model of the camera …………………………………...… 11 • Different coordinate frames …………………………………………………….. 12 • Calibration matrix ………………………………………………………………. 13 • Mirror matrix and mirror constraint …………………………………………….. 14 3. 2D homography ………………………………………………………………………... 16 4. Minimizing the distortions from the image ………………………………………….. 18 - Radial distortion correction ……………………………………………………….… 18 - Homography correction …………………………………………………………….. 24 - Mirror pole optimization ……………………………………………………………. 26 - Mirror pole correction ………………………………………………………………. 28 5. Calibration of the camera ……………………………………………………………... 29 - Mirror pole ………………………………………………………………………….. 29 - Vanishing mirror line ……………………………………………………………….. 31 • Projection of object on the mirror ………………………………………………. 31 • Calculation of vanishing mirror line ……………………………………………. 32 • Horizontal correction …………………………………………………………… 33 - Mirror angle and principal point ……………………………………………………. 34 - 3D reconstruction and scale factor ………………………………………………….. 36 6. Calibration and measuring algorithm ………………………………………………... 38 - Calibration procedure and setting the scene ………………………………………… 39 - Measuring procedure ………………………………………………………………... 42 - Example on a real data ……………………………………………………………… 43 7. Accuracy measures ……………………………………………………………………. 47 - Description of the experimental accuracy measures and procedures ………………. 47 - Accuracy of calibration procedure ………………………………………………….. 48 - Accuracy of distance measuring ……………………………………………………. 49 - Influence of position of measured object …………………………………………… 50 - Influence of resolution of the camera ……………………………………………….. 51 8. Case study ………………………………………………………………………………. 53 9. Summary ……………………………………………………………………………….. 58 - Accomplished goals ………………………………………………………………… 58 - Proposals for the future research ……………………………………………………. 58 - Futures and industrial application …………………………………………………... 59
  • 3. 3 10. Index of figures and tables ……………………………………………………………. 60 11. References ……………………………………………………………………………… 62 12. Appendixes ……………………………………………………………………………... 63 - Distortion minimizing source code …………………………………………………. 63 • Source code for radial distortion removal algorithm …………………………… 63 • Source code for homography correction ………………………………………... 64 • Source code for mirror pole optimization ………………………………………. 68 • Source code for mirror pole correction …………………………………………. 69 - Source code for calibration procedure ……………………………………………… 70 • Source code for mirror pole calculation ………………………………………… 70 • Source code for projection of object on the mirror plane ………………………. 71 • Source code for vanishing mirror line calculation ……………………………… 72 • Source code for mirror angle, focal distance and central point calculation …….. 74 • Source code for scale factor calculation ………………………………………… 77 • Source code for main file for camera calibration parameters …………………... 79 - Source code for measuring algorithm ………………………………………………. 82
  • 4. 4 Acknowledgments At the beginning I would like to thank all the kind people by who I get opportunity to work on the subject described in my master thesis. To international coordinator from my department Mrs. Joanna Polańska for creating me opportunity to study abroad as an Erasmus student on the Karel de Grote-Hogeschool in Antwerp. To Rudi Penne for a very good cooperation, comments and suggestions. To Luc Mertens who give me opportunity to work in the Industrial Vision Laboratory on the department of Industrial Science and Technology on KDG and Daniel Senft with who I can always discussed my problems and ideas.
  • 5. 5 1. Introduction - Purpose of the work The 3D reconstruction of real world scenes gives great opportunities and opens a wide range of applications in the robot or industrial vision. It is the most important sense for humans in the exploration of the universe and it should also be the basic and the primary sensing element of artificial intelligence. Aim of my work was to build complete calibration algorithm for a 3D reconstruction for a metric purpose and make research how to increase accuracy of such a system working with different models of disturbances created by the camera also work on different mathematical methods to neglect effects from all sources of nonlinearities which have a very strong impact on the linear model of the camera. I will describe the creation mechanism of a disturbances in the digital camera and what is the main source of it. Describe the method for removing the nonlinear errors from the image. For a 3D reconstruction system with a camera and a mirror plane I will describe this system precisely its properties and construction. Give a ready solution for stable calculation of an intrinsic and extrinsic calibration parameters. Introduce whole algorithm for a calibration procedure and measuring procedure with an examples on a real data using Matlab 6.5 software. Describe total accuracy of the system and specify which parts requires special care to maintain best precision.
2. Model of the camera

At the beginning we will look closer at our measuring instrument, the digital camera. We will track the whole measurement path, starting from the light reflected from the measured object, passing through the lens, and finally being captured by the image sensor and converted to a digital representation. I will explain which optical phenomena cause the biggest disturbances to the signal we are processing, and which construction details have to be taken into consideration in the calibration process and during measurements. I will describe the mathematical model of the camera used to calculate all parameters of our system and how, using simple linear algebra, we are going to reconstruct from a simple two dimensional picture the real three dimensional coordinates of the measured object with reference to the center of the camera.

- Construction of the common digital camera

During all the years of photography the general idea of taking pictures did not change much; only the capturing and storage of the information evolved significantly. We will look closer at the beginning of the process: because the preprocessing stage of the digital camera does not add any disturbances to the signal, we will pass over it and take a closer look at the part where the information is still transmitted as a signal in the form of light. This part of the process is very simple. The light is reflected from the object, passes through the lenses and is projected on the sensing element, which in a digital camera is a CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) sensor.

Figure 2.1 General idea of construction of the commonly used digital camera

A CCD is built of photo sites, typically arranged in an X-Y matrix of rows and columns. Each photo site, in turn, is built of a photodiode and an adjacent charge holding region, which is shielded from light. The photodiode converts light (photons) into charge (electrons); the number of electrons collected is proportional to the light intensity. Typically, light is collected over the entire sensor simultaneously and then transferred to the adjacent charge transfer cells within the columns. Next, the charge is read out: each row of data is moved to a separate horizontal charge transfer register. Charge packets for each row are read out serially and sensed by a charge-to-voltage converter and amplifier.
This architecture produces a low-noise, high-performance imager. That optimization, however, makes integrating other electronics onto the silicon impractical. In addition, operating the CCD requires the application of several clock signals, clock levels, and bias voltages, complicating system integration and increasing power consumption, overall system size, and cost.

A CMOS sensor is made with standard silicon processes in high-volume foundries. Peripheral electronics, such as digital logic, clock drivers, or analog-to-digital converters, can be readily integrated with the same fabrication process. CMOS sensors can also benefit from process and material improvements made in mainstream semiconductor technology. To achieve these benefits, the CMOS sensor architecture is arranged more like a memory cell or flat-panel display. Each photo site contains a photodiode that converts light to electrons, a charge-to-voltage conversion section, a reset and select transistor and an amplifier section. Overlaying the entire sensor is a grid of metal interconnects that apply timing and readout signals, and an array of column output signal interconnects. The column lines connect to a set of decode and readout (multiplexing) electronics that are arranged by column outside of the pixel array. This architecture allows the signals from the entire array, from subsections, or even from a single pixel to be read out by a simple X-Y addressing technique {Ref. 1}.

Figure 2.2 CCD and CMOS image capture sensors

Both techniques bring some strengths and weaknesses, but regardless of them all, for us the most important property is that the image coordinates are Euclidean coordinates with equal scales in both axial directions. In cameras with a CCD sensor there is the biggest possibility of having non-square pixels. If image coordinates are measured in pixels, as in our case, this has the extra effect of introducing an unequal scale factor in each direction.
The biggest source of errors in our image are the lenses. It is usually the lens which distorts the image most, depending on its quality. Commonly used simple cameras are fitted with very cheap lenses which, through several optical phenomena, change and deflect the image of the measured object almost irreversibly.

The first phenomenon is chromatic aberration. Chromatic aberration arises from dispersion, the property that the refractive index of glass differs with wavelength (see Figure 2.3). There are two types of chromatic aberration: longitudinal aberration and lateral aberration.

Figure 2.3 Chromatic aberration

- Longitudinal chromatic aberration causes different wavelengths to focus on different image planes.
- Lateral chromatic aberration is the color fringing that occurs because the magnification of the image differs with wavelength.

There are several ways of removing chromatic aberration: use very exotic glasses with very low dispersion, like the "Hi-UD" glass produced by Canon {Ref. 3}; use a lens with a very long focal distance, so the light does not have to be refracted so much; or use a system of two or three lenses with different types of glass, so that the aberration of one lens is corrected by another. But all those solutions are very expensive, and usually we will have to neglect this effect in preprocessing {Ref. 2}.

Most photographic lenses are composed of elements with spherical surfaces. Such elements are relatively easy to manufacture, but their shape is not ideal for the formation of a sharp image. Spherical aberration is an image imperfection that is due to the spherical lens shape; Figure 2.4 illustrates the aberration for a single, positive element. Light that hits the lens close to the optical axis is focused at position 'c'. The light that traverses the margins of the lens comes to a focus at a position 'a' closer to the lens.
Figure 2.5 Simple lens with undercorrected astigmatism. T - tangential surface; S - sagittal surface; P - Petzval surface

As a consequence, when the image center is in focus the image corners are out of focus, with tangential details blurred to a greater extent than sagittal details. Although off-axis stigmatic imaging is not possible in this case, there is a surface lying between the 'S' and 'T' surfaces that can be considered to define the positions of best focus. The surface P (see Figure 2.5) is the Petzval surface, named after the mathematician Joseph Miksa Petzval. It is a surface that is defined for any lens, but it does not relate directly to the image quality unless astigmatism is completely absent. In the presence of astigmatism the image is always curved (whether it concerns S, T, or both), even if P is flat.

All these phenomena together cause quite big distortions to our image, which result in radial distortions (see Figure 2.6) and finally in measurement error. The most commonly observed are pincushion or barrel distortions, which are easy to remove almost completely, but sometimes the distortions are more complex and we observe wave distortions. Because the way they are created is well known to us, we can easily model them and remove them by recalculating the position of all pixels in the image.

a) b) c)
Figure 2.6 Different kinds of radial distortions: a) barrel, b) pincushion, c) wave

The difficulty of obtaining very good sharpness on both the object and its mirror reflection, which usually is almost impossible, makes the object edges blurred. This causes uncertainties in the coordinates of the points which create the calibration object and of the points which determine the edges of the object we are going to measure. In the case of camera calibration we can choose the calibration object so that this error is minimized almost to zero, but for measured objects we have to use edge detection techniques to get better measurement precision. Due to the mechanical construction of the camera we have to remember that the calibration parameters change with every change of the camera settings {Ref. 2,3,4}.
- Description of geometric model of the camera

The camera is a simple mapping of 3D world objects onto a 2D image. We will now look closer at the central projection pinhole model, represented by matrices with specific properties which describe the mapping between the 3D world and the 2D image. The idea of 3D reconstruction uses the simple statement that an image of the object, together with a second image carrying the depth information, is enough to determine real space coordinates. We will describe a special case of stereo vision, mirror or catadioptric vision, where instead of using two cameras we extract the depth information from the reflection of the object in the mirror. This results in a series of simplifications. Both views are captured by the same camera with identical camera parameters. It also simplifies preprocessing, since we deal with only one image. Instead of two epipoles from two cameras we obtain only one mirror pole 'e'. In classical stereo vision the relative position of the two cameras is determined by six parameters; in our situation only three parameters are necessary {Ref. 10}.

The pinhole model of the camera is defined simply by the retinal plane 'R' and the center of the camera 'C'. In this model the image 'n' of a point 'N' in 3D real space is obtained by projection of 'N' onto the retinal plane 'R' from the camera center 'C' (see Figure 5.6). For our considerations we will use a frontal pinhole model, where the retinal plane is between the camera center and the object. Because the model is linear, all the nonlinearities like radial distortions in the image should be removed before any calculations. We also assume a camera model with square pixels, correcting any difference of scaling in the axial directions in preprocessing, together with the radial distortions.

a) Frontal view b) Top view
Figure 2.7 Two views of the frontal pinhole model of the camera: a) frontal view, b) top view. 'R' - retinal plane, 'M' - mirror plane; the optical axis is the line perpendicular to 'R' through 'C'; the hinge 'g' is the line of intersection of the mirror plane with the retinal plane; 'f' - focal distance between 'C' and 'c' measured in pixel units; the horizon 'h' is the line perpendicular to 'g' through 'e'; 'φ' - angle between mirror plane and retinal plane

Considering this model with all our assumptions, there are only three intrinsic parameters left to determine: the coordinates of the principal point 'c' (uc, vc), defined in pixel coordinates by the line perpendicular to the retinal plane 'R' from the camera center 'C', and the focal length 'f', measured in pixels and defined as the distance between the principal point 'c' and the camera center 'C'. And three extrinsic parameters: the mirror angle 'φ' between retinal plane and mirror plane, the shortest distance 'd' from the center of the camera 'C' to the mirror plane, and the camera angle 'θ', defined as the angle between the horizon 'h' and the 'u' axis of the image (see Figure 2.8). The mirror is represented as a mirror plane 'M'; the cross-section of the mirror plane and the retinal plane creates the hinge 'g'.
The horizontal plane 'H', perpendicular to the hinge 'g' and passing through the camera center 'C', defines the horizon of the image 'h'. The line perpendicular to the mirror plane and passing through the camera center, together with the horizon, defines the mirror pole, also called the vanishing point. We should prevent the situation when the mirror angle φ = 0, because then the mirror pole and the principal point coincide, e = c.

Figure 2.8 Axes of the horizontal pixel coordinates U and V and the horizontal standard coordinates X, Y, Z

• Different coordinate frames

For our calculations we use four coordinate frames:
- pixel coordinate frame: referenced to the image captured from the digital camera, usually with the origin in the upper left corner and units in pixels. Coordinates are denoted by (u, v).
- horizontal pixel coordinate frame: the pixel coordinate frame rotated so that the U axis is parallel to the X axis, covering the horizon. Coordinates are denoted by (uh, vh).
- horizontal standard coordinate frame: referenced to the principal point 'c', where the origin is placed, with the focal distance as the unit (pixel coordinates are divided by the focal distance 'f' to simplify further calculations). Coordinates are denoted by (x, y, 1). The X axis is parallel to the horizon and should be oriented in such a way that the mirror pole lies on its negative part.
- camera referenced 3D frame: referenced to the camera center 'C', with the focal distance as the unit. Coordinates are denoted by (x, y, z); the retinal plane is defined as the plane z = 1.

In the following text, points on the image plane will be denoted by 'n' for points of the object, 'n'' for points of the object reflection and 'n''' for points of the mirror projection. Upper case letters will be used for points in the real space 3D coordinate frame: 'N' for object points and 'N'' for object reflection points.
• Calibration matrix

The intrinsic calibration matrix 'K', as follows from our assumptions, is defined by three calibration parameters: the focal distance 'f' and the coordinates of the principal point 'c' (uc, vc). To convert the pixel coordinate frame to the horizontal pixel coordinate frame we introduce the rotation matrix 'R_θ' {Ref. 5,6}:

K = [ f  0  u_c ; 0  f  v_c ; 0  0  1 ]   (2.1),   R_θ = [ cos θ  sin θ  0 ; −sin θ  cos θ  0 ; 0  0  1 ]   (2.2)

To convert the coordinates of a point from the pixel frame to the horizontal standard frame we simply multiply it by (K R_θ)⁻¹:

( x ; y ; 1 ) = (K R_θ)⁻¹ ( u ; v ; 1 )   (2.3),   ( x ; y ; 1 ) = [ f cos θ  f sin θ  u_c ; −f sin θ  f cos θ  v_c ; 0  0  1 ]⁻¹ ( u ; v ; 1 )   (2.4)

From the equation above follows an interesting observation. Because in the camera referenced 3D frame (to which the 3D reconstruction will be referenced, with the origin placed in the camera center 'C') the retinal plane 'R' is described by the equation z = 1, every image point 'n' with horizontal standard coordinates (x, y, 1) has the same coordinates (x, y, 1) in the camera referenced 3D frame. From our model assumptions, the point 'N' in 3D real world space lies on the ray Cn and has coordinates k(x, y, 1). Taking the problem reversely, if the coordinates (X, Y, Z) are given in the camera referenced real world frame, they can be considered as homogeneous coordinates for the image 'n' of 'N'; if Z ≠ 0, the point 'n' is a finite point of the retinal plane 'R' with horizontal standard coordinates (X/Z, Y/Z, 1).
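To make the conversion concrete, below is a minimal Matlab sketch of equation (2.3); the numeric intrinsics, camera angle and image point are hypothetical values, not calibration results from this work:

% Minimal sketch of equation (2.3); all numeric values are hypothetical.
f = 1500; uc = 640; vc = 512;        % assumed intrinsics [pixels]
theta = 0.02;                        % assumed camera angle [rad]
K  = [f 0 uc; 0 f vc; 0 0 1];                                      % (2.1)
Rt = [cos(theta) sin(theta) 0; -sin(theta) cos(theta) 0; 0 0 1];   % (2.2)
n  = [753.27; 25.53; 1];             % image point in pixel coordinates
ns = (K*Rt) \ n;                     % horizontal standard coordinates (x, y, 1)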
• Mirror matrix and mirror constraint

Because the mirror, like the camera, is an object of our interest, let us define the matrix representing the reflection with respect to the mirror plane 'M'. We also define an additional mirror plane 'Mo', parallel to 'M' and passing through the camera center 'C', which will help in this derivation.

Figure 2.9 View of the frontal pinhole model of the camera from the top, with the reflection 'C'' of the camera center 'C' with respect to the mirror plane 'M'

The reflection can be decomposed into a translation and a linear part. The linear part corresponds to the reflection with respect to the plane 'Mo', and in the camera referenced real world frame it can be described by multiplication by the matrix 'So':

S_o = [ cos 2φ  0  sin 2φ ; 0  1  0 ; sin 2φ  0  −cos 2φ ]   (2.5)

Following this idea, if 'C'' is the reflection of 'C' with respect to 'M', then:

C' = ( −2d sin φ, 0, 2d cos φ )   (2.6)

Finally, since the reflection with respect to the mirror plane 'M' can be expressed as the reflection with respect to 'Mo' followed by the translation by the vector 'C'', we can join both operations together in one so-called mirror matrix 'S':
3. 2D homography

The direct linear transformation (DLT) algorithm for 2D homography computes the projective transformation of one 2D plane to a different 2D plane. This algorithm is usually used to bring planes of interest in the image to a frontal view, or the reverse; in our research we will use it to calculate calibration parameters and also to neglect some nonlinearities in the image.

We assume a set of four points 'ni' in a plane 'n', no three of them collinear, and assume that they are visible to us, so we can determine their coordinates ni (ui, vi, 1). These points also form a pattern known to us, represented by points 'n'i' on a plane 'n''. Since the points of the projective plane 'n' are in correspondence with the other projective plane 'n'' by a projective mapping, algebraically this means that the homogeneous coordinates of points 'ni' transform to the homogeneous coordinates of points 'n'i' by a homography matrix 'H'. This equation holds only for homogeneous vectors with the same direction, which may differ in magnitude by a nonzero scalar 'λ', so we have to write:

λ n'_i = H n_i,  where  H = [ h_1  h_2  h_3 ; h_4  h_5  h_6 ; h_7  h_8  h_9 ]   (3.1)

If we substitute n_i = (u_i, v_i, 1) and n'_i = (u'_i, v'_i, 1) and apply the cross product with n'_i = (u'_i, v'_i, 1)ᵀ to both sides, 'λ' is eliminated and we obtain three linear homogeneous equations in the unknowns h_j (for j = 1,...,9), where the third equation is linearly dependent on the first and second one:

[ 0  0  0   −u_i  −v_i  −1   v'_i u_i  v'_i v_i  v'_i ;
  u_i  v_i  1   0  0  0   −u'_i u_i  −u'_i v_i  −u'_i ;
  −v'_i u_i  −v'_i v_i  −v'_i   u'_i u_i  u'_i v_i  u'_i   0  0  0 ] ( h_1 ; ... ; h_9 ) = ( 0 ; 0 ; 0 )   (3.2)

So the four points of the model plane n'_i (u'_i, v'_i, 1) and the four points from the image n_i (u_i, v_i, 1) give us a system of eight linear homogeneous equations in the unknowns h_j. Since no three of the 'n'i' are collinear, this system of equations has rank eight and is sufficient to determine the matrix 'H' up to a global factor. Because of the noise in the image it is recommended to use more than four correspondences, as many as possible; in our situation we will use all points of our calibration pattern. With 'k' correspondences we have a coefficient matrix 'M' of size 2k by 9 for the system of equations in the unknowns h_j {Ref. 5}:

M = [ 0  0  0   −u_1  −v_1  −1   v'_1 u_1  v'_1 v_1  v'_1 ;
      u_1  v_1  1   0  0  0   −u'_1 u_1  −u'_1 v_1  −u'_1 ;
      ...
      0  0  0   −u_k  −v_k  −1   v'_k u_k  v'_k v_k  v'_k ;
      u_k  v_k  1   0  0  0   −u'_k u_k  −u'_k v_k  −u'_k ]   (3.3)
Solving the system using singular value decomposition, we obtain a non-zero solution in the form of a vector with the coefficients of the searched homography matrix 'H':

M = U · D · Vᵀ   (3.4)

When we represent the singular value decomposition as in the equation above, the solution for the matrix 'H' is the last column of the matrix 'V', where 'U' is a unitary matrix and 'D' is a diagonal matrix of the same dimension as 'M', with nonnegative diagonal elements in decreasing order. It is recommended to normalize the image plane, so that the midpoints of the image plane and of the model plane coincide with the origin and the biggest distance of all points to this origin is less than the square root of two.

Below we can see an example of a homography matrix 'H' computed for a plane of object points, the plane itself, its frontal view and the model used to calculate it (see Figure 3.1):

H = [ −0.3916   0.1272   0.0006 ;
      −0.2408  −0.5991  −0.0029 ;
       0.0575  −0.0301  −0.6396 ]

Figure 3.1 Plane of object points (blue dots) and its frontal view (green dots) obtained using the homography matrix calculated for it. The model plane which was used to calculate the homography matrix is plotted with black crosses
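As an illustration, here is a short Matlab sketch of the DLT estimation described above. It assumes column vectors u, v with the image points and up, vp with the corresponding model plane points; these variable names are only illustrative:

% Sketch of the DLT estimate of H, assuming u, v (image points) and
% up, vp (model plane points) are column vectors of equal length k.
k = length(u);
M = zeros(2*k, 9);
for i = 1:k
  M(2*i-1,:) = [ 0 0 0   -u(i) -v(i) -1    vp(i)*u(i)  vp(i)*v(i)  vp(i) ];
  M(2*i  ,:) = [ u(i) v(i) 1   0 0 0     -up(i)*u(i) -up(i)*v(i) -up(i) ];
end
[U, D, V] = svd(M);          % equation (3.4)
h = V(:, end);               % solution: last column of V
H = reshape(h, 3, 3)';       % homography matrix, defined up to scale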
4. Minimizing the distortions from the image

The accuracy of the whole procedure depends mostly on the quality of the picture and on how precisely we process the image to fit the pinhole model of the camera. The mathematical pinhole model is only the projection of a point in space onto the retinal plane from the center of the camera, so it is a linear model. This means that before we make any calculations we should get rid of all nonlinear disturbances in the image. Radial distortions are responsible for the biggest part of these disturbances, and the causes which create them are well known to us. So to avoid them it is enough to model them and use some optimization technique to choose the coefficients such that the errors they produce are minimal. There are also many sources of error which are very hard to estimate and impossible to model, like the disturbances of the used mirror, or simply the numerical error in the calculation of the coordinates of the points of the calibration pattern. Fortunately they all produce a very small error, and we will use the 2D homography correction method and the mirror pole optimization to avoid them or to minimize their influence.

- Radial distortion correction

Radial distortion, created by the different magnification of the image at different distances from the axis of the lenses and by astigmatism, disturbs the image strongly, so it is very important to correct this error very carefully. The idea is simple: to obtain the undistorted image we multiply the coordinates of each pixel by a modeled function of the radius 'r' from the center of the radial distortion (see equations 4.1 to 4.5). To obtain better results we use different models for the 'u' and 'v' coordinates.

pd (ud, vd) - coordinates of the distorted pixel
p (u, v) - coordinates of the undistorted pixel
rdc (uc, vc) - coordinates of the center of radial distortion
Mu(r) - model of distortion for the u coordinate
Mv(r) - model of distortion for the v coordinate

u = u_c + Mu(r) · (u_d − u_c)   (4.1)
v = v_c + Mv(r) · (v_d − v_c)   (4.2)
r = sqrt( (u_d − u_c)² + (v_d − v_c)² )   (4.3)

For the models Mu(r) and Mv(r) we use a Taylor series expansion with one significant change, M(0) ≈ 1, and limit it to the third power, because further terms have a very small influence on the accuracy of the algorithm:

Mu(r) = a_1 + a_2·r + a_3·r² + a_4·r³   (4.4)
Mv(r) = b_1 + b_2·r + b_3·r² + b_4·r³   (4.5)

Doing this we obtain new coordinates of the pixel without distortions. After the computation for all pixels we will notice that the image is stretched; the picture becomes bigger, so to keep the original size of the image we have to cut the borders.
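The per-pixel correction of equations (4.1)-(4.5) can be written as in the following Matlab sketch; the coefficient values, the distortion center and the example pixel are placeholders, not fitted results:

% Sketch of equations (4.1)-(4.5) for one distorted pixel (ud, vd).
a = [1 0 0 0];  b = [1 0 0 0];       % hypothetical model coefficients
uc = 641.78;  vc = 509.04;           % assumed center of radial distortion
ud = 100;  vd = 200;                 % example distorted pixel

r  = sqrt((ud - uc)^2 + (vd - vc)^2);              % (4.3)
Mu = a(1) + a(2)*r + a(3)*r^2 + a(4)*r^3;          % (4.4)
Mv = b(1) + b(2)*r + b(3)*r^2 + b(4)*r^3;          % (4.5)
u  = uc + Mu*(ud - uc);                            % (4.1)
v  = vc + Mv*(vd - vc);                            % (4.2)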
Because the image is a discrete function and the model function is a continuous one, some pixels after correction and rounding can occupy the same place, so in this way we lose a small part of the information. Our image will also contain some blank spaces; to fill them we calculate the median value of the neighborhood (see Figure 4.4).

To create the best model of the radial distortions we follow the given procedure. First we take an image of a calibration grid and obtain the coordinates of the centers of gravity of the objects which create it. Special care should be taken to place the calibration object parallel to the camera, so that the image of the object is not in perspective view; then the calibration objects cover more or less the whole surface of the image and the radial distortion model parameters are calculated precisely for the whole area of the image.

a) Grid of 11x11 squares. b) Grid of 19x19 blobs.
Figure 4.1 Two images of calibration grids: a) grid created from 11x11 square objects and b) grid of 19x19 circular objects. Both pictures are taken with the same camera settings to compare which objects are better for the radial distortion model calculations

For further calculations we advise using the calibration pattern created from squares, for several reasons. It is easier to detect edges and automatically calculate centers for a square shape than for a circular one, and the calculations are more stable, which can be proved by a simple computation: we separate the middle vertical column of the grid, calculate the coefficients of the line which best fits the centers, and for every point calculate the distance to this line. Calculating the mean value of the distance to this line and its standard deviation, we can estimate which object is better (see Table 4.1).

                            Grid of squares   Grid of blobs
Mean value [pixel]          0.1522            0.3020
Standard deviation [pixel]  0.1256            0.2388

Table 4.1 Mean value of the distance of the centers of the objects to the line passing through them, and its standard deviation, for a grid of 11x11 squares and a grid of 19x19 spots

During the research process, while trying to get better stability of the calibration algorithm, two methods of finding the parameters of the radial distortion model were invented and tested: the first one was pattern grid reconstruction, the second pattern geometry reconstruction.
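The stability check behind Table 4.1 can be reproduced with a few lines of Matlab; here ucol and vcol are assumed to hold the center coordinates of the middle grid column (illustrative variable names):

% Sketch of the line-fit check: fit u = p(1)*v + p(2) to the near-vertical
% middle column and evaluate the point-to-line distances.
p = polyfit(vcol, ucol, 1);
d = abs(p(1)*vcol - ucol + p(2)) / sqrt(p(1)^2 + 1);   % distances to the line
fprintf('mean %.4f  std %.4f [pixel]\n', mean(d), std(d));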
The idea of pattern grid reconstruction is to reconstruct the grid as it should look without radial distortions. Because the distortions are smallest in the middle of the picture, we take the distance between the two middle points of the middle column of objects as a reference. Also, using the points of the middle column, we calculate the angle by which the calibration grid is rotated with reference to the image axis 'u'. Using these two pieces of information we reconstruct the grid of the calibration pattern as it should look without radial distortions. It is now enough to find such parameters of the model that the difference between the reconstructed pattern grid and the calculated pattern is smallest.

Figure 4.2 Plot of the position of the points before the radial distortion removal (red dots), after it (green dots), the reconstructed calibration grid (blue crosses) and the central point of radial distortions marked with a black cross

When we have the coordinates of 'n' calibration objects calculated from the image, pi (upi, vpi), and those which we reconstructed, ri (uri, vri), we can define the function which we will minimize to find the optimal parameters of the model:

u_i = u_c + Mu(r_i) · (up_i − u_c)   (4.6)
v_i = v_c + Mv(r_i) · (vp_i − v_c)   (4.7)
r_i = sqrt( (up_i − u_c)² + (vp_i − v_c)² )   (4.8)

In the equations above (4.6, 4.7) we substitute equations 4.4 and 4.5 for the functions Mu(r) and Mv(r) respectively.
F = Σ_{i=1}^{n} ( (u_i − ur_i)² + (v_i − vr_i)² )   (4.9)

Now, minimizing the function 'F', we look for the optimal values of the following variables: the center of the radial distortions (u_c, v_c) (with the starting point in the center of the image) and the coefficients [a_1 a_2 a_3 a_4] and [b_1 b_2 b_3 b_4] (for both with starting points [1 0 0 0]). For the pattern grid reconstruction method we obtain the results shown in Table 4.2. The experiments were made with the PHILIPS Inca 311 camera with a resolution of 1280x1024 pixels (see chapter 6).

Mu(r):  a_1 = 0.9960154318228   a_2 = -0.0000198040083   a_3 = 0.0000000522360   a_4 = 0.0000000000500
Mv(r):  b_1 = 1.0061756365458   b_2 = -0.0000942254838   b_3 = 0.0000002431012   b_4 = -0.0000000000912
u_c = 641.7754   v_c = 509.0369

Table 4.2 Values of the parameters of the radial distortion models and the coordinates of the center of radial distortions calculated using the pattern grid reconstruction algorithm

In pattern geometry reconstruction the idea is to bring the rows and columns of pattern points to create straight lines. The biggest advantage of pattern geometry reconstruction is that it does not need any other data than the coordinates of the points of the calibration grid. It uses only the simple fact that in reality the calibration pattern points in every row and column create perfect straight lines.

Figure 4.3 Plot of the position of the points before the radial distortion removal (red dots) and after it (green dots), and the central point of radial distortions marked with a black cross
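A sketch of the pattern grid reconstruction fit is given below, written in present-day Matlab syntax rather than the Matlab 6.5 of the appendix. P is assumed to hold the measured centers and R the reconstructed ideal grid, both as n x 2 matrices; the starting values follow the description above:

% Sketch: fit the radial distortion model by minimizing (4.9).
% Parameter vector q = [uc vc a1 a2 a3 a4 b1 b2 b3 b4].
q0 = [640 512 1 0 0 0 1 0 0 0];              % image center and [1 0 0 0]
q  = fminsearch(@(q) gridcost(q, P, R), q0);

function F = gridcost(q, P, R)
  r  = sqrt((P(:,1)-q(1)).^2 + (P(:,2)-q(2)).^2);   % (4.8)
  Mu = q(3) + q(4)*r + q(5)*r.^2 + q(6)*r.^3;       % (4.4)
  Mv = q(7) + q(8)*r + q(9)*r.^2 + q(10)*r.^3;      % (4.5)
  u  = q(1) + Mu.*(P(:,1)-q(1));                    % (4.6)
  v  = q(2) + Mv.*(P(:,2)-q(2));                    % (4.7)
  F  = sum((u - R(:,1)).^2 + (v - R(:,2)).^2);      % (4.9)
end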
Finally, having all the parameters of our model of radial distortions, using equations 4.1 to 4.5 we can restore the image as it should look without any disturbances.

a) b) c) d)
Figure 4.4 Result of the radial distortion removal algorithm: a) original distorted picture, b) picture after recalculating the position of all pixels, c) picture after restoration of its original resolution, d) picture after median filtering
- Homography correction

There are many distortions in the image which we cannot measure or model in any manner. There is some error created during the calculation of the coordinates of our calibration grid, the mirror itself is not perfectly flat so the reflection of the object can be disturbed in some way, and the CCD sensor of the camera could also be a source of error. Altogether they produce a bigger or smaller effect on the accuracy of our procedure. But using the fact that we know the geometry of the calibration pattern perfectly, with 2D homography we can in some part neglect all of these nonlinearities without knowing the mechanism of their creation.

The calculation of the homography matrix 'H' is the same as described in chapter 3; to calculate it we use the coordinates obtained from the image. Having this matrix, we recalculate the coordinates of all points (u, v, 1) using the points from the planar geometrical model (up, vp, 1):

( u ; v ; 1 ) = H ( u_p ; v_p ; 1 )   (4.16)

The homography matrix should be calculated separately for the object and for the mirror reflection. The results of the 2D homography correction are presented in Table 4.4.

Figure 4.5 Dots present the points of the calibration object and its mirror reflection before the 2D homography correction; crosses present the points after correction. The change is not significant but it restores the geometry of the calibration pattern
The following table shows exemplary data for one of our calibration pictures: the homography matrix 'H' for the object plane, and the 'u' and 'v' coordinates before and after the 2D homography correction:

H = [ −0.3916   0.1272   0.0006 ;
      −0.2408  −0.5991  −0.0029 ;
       0.0575  −0.0301  −0.6396 ]

Before 2D homography correction   After 2D homography correction
u           v                     u           v
753.2659    25.5276               752.8638    26.3765
828.7765    60.8509               828.0358    60.7551
910.1132    98.3507               910.3024    98.3782
1001.5088   139.8043              1000.7175   139.7279
1101.2258   184.1600              1100.5551   185.3868
730.8510    148.0549              730.8856    147.8688
803.5616    187.7725              803.3536    186.8014
881.8567    230.1468              882.4957    229.3197
968.9695    276.8071              969.2783    275.9427
1063.9518   327.0922              1064.8641   327.2951
709.6388    264.1000              709.8589    264.1011
779.8578    307.7382              779.7860    307.1559
855.5207    354.4177              856.0011    354.0823
939.0063    405.5429              939.3925    405.4272
1030.0988   460.9489              1031.0238   461.8454
689.3885    374.9755              689.7232    375.4079
757.2719    422.3277              757.2591    422.1954
830.6126    473.1347              830.7280    473.0934
910.9229    528.5447              910.9477    528.6682
998.6135    588.8842              998.8937    589.5955
670.0147    481.1723              670.4232    482.0956
735.8371    532.0111              735.7055    532.2646
807.1965    587.0008              806.5937    586.7417
884.6000    646.5927              883.8422    646.1067
969.2026    711.6704              968.3474    711.0483

Table 4.4 Values of the coordinates of the points of the object plane before and after the 2D homography correction
- Mirror pole optimization

Recall from chapter 2 (see equation 2.9) that, having two points n1, n2 and their mirror reflections n'1, n'2 on the image, which are projections of the points N1, N2 in space and their mirror reflections N'1, N'2, we can easily calculate the mirror pole e(ue, ve). Unfortunately, due to the noise in the image we cannot precisely determine its coordinates.

Figure 4.6 Lines for every possible pair of points 'n' and 'n''. We can easily observe that it is impossible to determine the mirror pole 'e' precisely

To solve this problem we will again use nonlinear optimization to manipulate the coordinates of the mirror pole 'e' and the pairs of points 'n', 'n'' of the object and its mirror reflection. We do this by finding the line 'l' going through the mirror pole which minimizes the distance to the object point 'n' and its reflection 'n''; optimizing the coordinates of the mirror pole, the object and its reflection, we minimize the sum of these distances over all possible pairs of points. When we have calculated the coordinates of the mirror pole 'e', we have to calculate the coefficients of the line 'l' (see Figure 4.8). To simplify the equation we move the origin to the mirror pole 'e', recalculating the coordinates of the points by:

(u_1, v_1) = n_1 − (u_e, v_e)   (4.17)
(u_2, v_2) = n'_2 − (u_e, v_e)   (4.18)

The equation of the line 'l' has the simple form:

u − u_e = a (v − v_e)   (4.19)
- Mirror pole correction

The idea from the mirror pole optimization can also be used to improve the calculations of our measuring procedure. When we have finally calculated precise coordinates of the mirror pole 'e', why not use them to correct the coordinates of the object and its reflection.

Figure 4.8 Mirror pole correction uses the fixed position of the mirror pole 'e' to correct the coordinates of the object point 'n' and its mirror reflection 'n''

Having now a fixed position of the mirror pole 'e', we again calculate the coefficient of the optimal line 'l' (using equations 4.17 to 4.21), which minimizes the distance of the pair of points 'n' and 'n'' to it, and we project those points onto this line. The corrected coordinates follow from the simple linear system of two equations: the line 'l' and the line perpendicular to 'l' passing through the point 'n' or 'n'' {Ref. 11}:

u_ci = a_i · (a_i·u_i + v_i) / (a_i² + 1)   (4.23)
v_ci = (a_i·u_i + v_i) / (a_i² + 1)   (4.24)

To obtain the best precision we perform the mirror pole correction after the radial distortion correction.
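A Matlab sketch of the correction for a single pair: n is the object point, nr its reflection (both as [u v]), and the pole (ue, ve) is already fixed; the variable names are illustrative:

% Sketch of the mirror pole correction for one pair of points.
u = [n(1) nr(1)] - ue;  v = [n(2) nr(2)] - ve;   % shift the origin to e (4.17, 4.18)
a = (v*u') / (v*v');                             % slope of l: u = a*v, least squares
t = (a*u + v) / (a^2 + 1);                       % common factor of (4.23) and (4.24)
u_corr = a*t + ue;                               % corrected u coordinates (4.23)
v_corr = t + ve;                                 % corrected v coordinates (4.24)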
5. Calibration of the camera

- Mirror pole

Calculation of the mirror pole 'e' is the first step towards the calculation of the intrinsic parameter matrix, and the stability of the whole calibration procedure depends on the stability of the calculation of the mirror pole and the vanishing mirror line. To calculate the mirror pole 'e' we use the fact that all parallel lines of a real scene intersect in one point on the image. On our calibration object, each pair consisting of an object point n(ui, vi) and its mirror reflection n'(u'i, v'i) creates such a line. Having the equations of every line connecting such pairs of points, we use least squares approximation to get the optimal solution.

Figure 5.1 Plot of every line connecting pairs of points from the object and its mirror reflection; the black circle presents the solution of the least squares approximation for equation 5.3

At this point the theory is clear, but the practical implementation shows that some solutions are better than others. At the beginning stage of the research we used the following equations for obtaining the coordinates of the mirror pole:

u_e + v_e · (u_1 − u_2)/(v_2 − v_1) = u_2 + v_2 · (u_1 − u_2)/(v_2 − v_1)   (5.1)
u_e · (v_2 − v_1)/(u_1 − u_2) + v_e = u_2 · (v_2 − v_1)/(u_1 − u_2) + v_2   (5.2)

Because the least squares approximation minimizes the distance in one direction only, we use the first equation (5.1) to calculate the 'ue' coordinate and the second equation (5.2) to calculate the 've' coordinate of the mirror pole 'e'.
During the experiments, better stability was obtained using the following normalized equation:

u_e · (v_1 − v_2)/sqrt((u_1 − u_2)² + (v_1 − v_2)²) + v_e · (u_2 − u_1)/sqrt((u_1 − u_2)² + (v_1 − v_2)²) = (u_2·v_1 − u_1·v_2)/sqrt((u_1 − u_2)² + (v_1 − v_2)²)   (5.3)

                 Results using equations 5.1 and 5.2   Results using equation 5.3
                 Mean        Std                       Mean        Std
Mirror pole u_e  -1824.89    31.15                     -1831.40    14.35
            v_e  496.96      5.22                      496.96      5.22

Table 5.1 Mirror pole coordinates for the two calculation methods (without horizontal correction)

When searching for other algorithms for calculating the mirror pole with the biggest stability, we performed tests with mirror pole optimization of the coordinates of the object and its reflection (see chapter 4 - Mirror pole optimization). Using such an approach we obtained similar stability of the calculation of the mirror pole as in the case of equation 5.3; of course, the advantage of this method is the correction of the coordinates of the calibration object and its reflection points. The only drawback is that the optimization process itself is time consuming.
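The least squares solution of equation (5.3) over all point pairs translates directly into Matlab; O and R are assumed to be n x 2 matrices holding the object points and their reflections:

% Sketch: mirror pole from all pairs using the normalized form (5.3).
n = size(O, 1);
A = zeros(n, 2);  b = zeros(n, 1);
for i = 1:n
  du = O(i,1) - R(i,1);  dv = O(i,2) - R(i,2);
  s  = sqrt(du^2 + dv^2);                        % normalization of the line
  A(i,:) = [dv, -du] / s;                        % coefficients of ue and ve
  b(i) = (R(i,1)*O(i,2) - O(i,1)*R(i,2)) / s;    % right-hand side of (5.3)
end
e = A \ b;                                       % least squares pole [ue; ve]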
- Vanishing mirror line

The vanishing mirror line is the line of intersection of the plane 'MO' ('MO' is the plane parallel to the mirror plane 'M' and passing through the camera center 'C') and the retinal plane 'R' (see Figure 2.9). Based on it and the mirror pole 'e' we will calculate the coordinates of the central point 'c' and the camera angle 'φ'. To calculate the vanishing mirror line we first need a projection of the object points onto the mirror plane.

• Projection of object on the mirror

To obtain the orthographic mirror projection on the image we use two object points 'n1' and 'n2' and their mirror reflections 'n'1' and 'n'2'. For homogeneous coordinates, using equations 5.4 to 5.8, we calculate the projection points 'n''1' and 'n''2'. The meaning of these equations is visualized in Figure 5.2. The first step is the calculation of the coordinates of the points 't1' and 't2':

t_1 = n_1n_2 ∧ n'_1n'_2   (5.4)
t_2 = n_1n'_2 ∧ n'_1n_2   (5.5)

Next we observe that the points 'n''1' and 'n''2' also belong to the line t_1t_2, so we can write:

t_1t_2 = n''_1n''_2   (5.6)

Finally we calculate the coordinates of the mirror projections 'n''1' and 'n''2':

n''_1 = n_1n'_1 ∧ t_1t_2   (5.7)
n''_2 = n_2n'_2 ∧ t_1t_2   (5.8)

Figure 5.2 Orthographic projection on the mirror plane with use of the pair of points 'n1' and 'n2' and their mirror reflection points 'n'1' and 'n'2'
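In homogeneous coordinates the joins and intersections of equations (5.4)-(5.8) are all cross products, so a Matlab sketch is short; n1, n2 are object points and r1, r2 their reflections, all assumed given as [u; v; 1] column vectors:

% Sketch of (5.4)-(5.8): join of two points and meet of two lines are
% both cross products of homogeneous 3-vectors.
t1 = cross(cross(n1, n2), cross(r1, r2));   % (5.4)
t2 = cross(cross(n1, r2), cross(r1, n2));   % (5.5)
lt = cross(t1, t2);                         % (5.6) line t1t2
p1 = cross(cross(n1, r1), lt);              % (5.7) projection n''1
p2 = cross(cross(n2, r2), lt);              % (5.8) projection n''2
p1 = p1 / p1(3);  p2 = p2 / p2(3);          % back to the form (u, v, 1)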
Working in situations with a lot of noise like ours, it is better to perform the calculations on the biggest possible set of data. During our experiments we calculated the projection for every possible pair of points. In this way, instead of one solution we obtain a set of points, which proves that we cannot trust only one of them. So we neglect the points with the biggest error and calculate the mean value from the remaining ones.

Figure 5.3 Projection points calculated for every possible pair of points of the calibration object (blue color represents projection points, red circles represent the median for each set of these points)

• Calculation of vanishing mirror line

To calculate the vanishing mirror line we use two points at infinity of perpendicular directions and the homography matrix 'H' calculated for the mirror plane (see chapter 3 for the calculation of the homography matrix):

H ( 1 ; 0 ; 0 ) = ( h_1 ; h_4 ; h_7 )   and   H ( 0 ; 1 ; 0 ) = ( h_2 ; h_5 ; h_8 )   (5.9)

In this way we obtain the coordinates of two vanishing points which lie on the vanishing mirror line. It can be easily observed that those coordinates are in fact two columns of the homography matrix.
• Horizontal correction

Having the mirror pole 'e' and the vanishing mirror line 'L', we can now determine the camera angle and use it to calculate the horizontal pixel coordinates. The horizon is the line going through the mirror pole 'e' perpendicularly to the vanishing mirror line 'L'; the camera angle 'θ' is the angle between the 'u' axis of the image and the horizon.

Figure 5.4 Determining the horizon: L - vanishing mirror line, e - mirror pole, θ - camera angle

To calculate the horizontal pixel coordinates nhi(uh, vh, 1) of any image point ni(u, v, 1) we multiply its pixel coordinates by the inverse of the rotation matrix:

( u_h ; v_h ; 1 ) = [ cos θ  sin θ  0 ; −sin θ  cos θ  0 ; 0  0  1 ]⁻¹ ( u ; v ; 1 )   (5.10)
- Mirror angle and principal point

Before the calculation of the angle 'φ' between the mirror plane 'M' and the retinal plane 'R', we start with a partial calibration of the image plane. This means that after the horizontal correction, when the horizon is parallel to the 'u' axis of the image, we perform a vertical translation so that the 'u' axis coincides with the 'x' axis, though with a different origin. After that we can assume coordinates of the mirror pole 'e' equal to (u_e, 0), of the intersection of the horizon with the vanishing mirror line equal to (u_L, 0), and of the still unknown central point 'c' equal to (u_c, 0).

Figure 5.5 Model of the camera with all the parameters calculated

For these data we get the following relations:

u_c − u_e = f · tan φ
u_L − u_c = f / tan φ   (5.11)

But having three unknowns, 'u_c', 'f' and 'φ', these two equations are not enough. At this point homography again helps us to solve this problem. We introduce a new parameter:

ω = u_c² + f²   (5.12)
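Once 'ω' is available, equations (5.11) and (5.12) determine the remaining unknowns in closed form: multiplying the two relations of (5.11) gives (u_c − u_e)(u_L − u_c) = f² = ω − u_c², from which u_c = (ω + u_e·u_L)/(u_e + u_L). A Matlab sketch with hypothetical input values:

% Sketch: closed-form solution of (5.11) with (5.12).
ue = -1831.40;  uL = 1490.7;  w = 2.5121e6;    % hypothetical ue, uL, omega
uc  = (w + ue*uL) / (ue + uL);                 % from (uc-ue)*(uL-uc) = w - uc^2
f   = sqrt(w - uc^2);                          % focal distance [pixels]
phi = atan((uc - ue) / f);                     % mirror angle from (5.11)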
- 3D reconstruction and scale factor

Having all intrinsic and extrinsic parameters, we convert the object points from the pixel coordinate frame to the horizontal standard coordinate frame by multiplying them by (K R_θ)⁻¹. At that moment we consider the distance 'd' between the camera center 'C' and the mirror plane 'M' as an unknown global scale factor of the real 3D coordinates, and we are able to reconstruct objects in the camera referenced 3D coordinate frame with the focal distance 'f' as the unit. To do this we translate the intersection method used in stereo vision to our mirror setting {Ref. 7}.

Figure 5.6 The intersection method for a camera-mirror setting

The point 'N' is a point in 3D space and 'N'' is its mirror reflection; 'n' and 'n'' are their direct images on the retinal plane 'R', given by the horizontal standard coordinates (x, y, 1) and (x', y', 1). To derive the camera referenced real world coordinates of 'N' a simple observation should be made:

if n* = S_M(n')  then  N = Cn ∧ C'n*   (5.17)

Next, the line Cn is directed by (x, y, 1)ᵀ and the line C'n* by:

n_o* = S_Mo(n')   (5.18)
The coordinates of n_o* can be computed by:

S_o (x', y', 1)ᵀ   or   ( x'·cos 2φ + sin 2φ,  y',  x'·sin 2φ − cos 2φ )ᵀ   (5.19)

So if we use:

N = ( k·x, k·y, k )   (5.20)

then 'N' can be computed by solving the following system of linear equations with 'k' and 'l' as unknowns:

k·x = l·(x'·cos 2φ + sin 2φ) − 2d·sin φ
k·y = l·y'
k = l·(x'·sin 2φ − cos 2φ) + 2d·cos φ   (5.21)

To scale the reconstructed points to the true distances from the camera center, that is, to scale from the camera referenced 3D coordinate frame to the camera referenced real world frame, we also need to calculate the global scale factor 'd'. To do this we need information about the real dimensions of the calibration object. It is very important that these dimensions are determined very accurately, because the error of this measurement will propagate to all reconstructed real world point coordinates and cause an error in the calculated dimensions. To calculate the scaling factor 'd' we take the true length 'dim' between two points 'N1' and 'N2' of the calibration pattern and divide it by the distance between 'n1' and 'n2' calculated after the 3D reconstruction to the camera referenced 3D coordinate frame:

d = dim / sqrt( (x_1 − x_2)² + (y_1 − y_2)² + (z_1 − z_2)² )   (5.22)

It is better to use more such pairs from the object and its mirror reflection and take their mean value. For the reconstruction to the camera referenced 3D coordinate frame we substitute 'd' equal to one.
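A sketch of one reconstruction with the intersection method: the coordinates of n and n', the mirror angle and the distance d are hypothetical inputs, and the overdetermined system (5.21) is solved in the least squares sense:

% Sketch: solve (5.21) for k and l, then reconstruct N (5.20).
x = 0.12;  y = -0.05;  xp = -0.31;  yp = -0.05;   % example standard coords
phi = 0.79;  d = 1;                 % mirror angle [rad]; d = 1 for the 3D frame

A = [ x, -(xp*cos(2*phi) + sin(2*phi));
      y, -yp;
      1, -(xp*sin(2*phi) - cos(2*phi)) ];
b = [ -2*d*sin(phi); 0; 2*d*cos(phi) ];
kl = A \ b;                         % least squares solution for [k; l]
N  = kl(1) * [x; y; 1];             % camera referenced 3D coordinates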
6. Calibration and measuring algorithm

In a few words I should mention on what equipment I performed all my experiments and, in consequence, how it influenced the results of my research. All the images were captured using the Philips Inca 311 camera, which is designed as a compact vision solution for industrial applications {Ref. 12}. It can be used for quality assurance, alignment, pattern verification, object tracking and all kinds of measurement applications.

Figure 6.1 Philips Inca 311 camera

The camera is equipped with the monochrome PC2112-LM sensor from Zoran. It is a high performance CMOS imaging sensor with an extremely uniform pixel array and low fixed-pattern noise thanks to its Distributed Pixel Amplifier architecture. At the output it gives images with a maximum resolution of 1280x1024 pixels and a 10 bit grey level scale.
- Calibration procedure and setting the scene

Before we start the calibration of our measuring system we have to think about how to construct the calibration pattern, which has to fulfill several assumptions. It should be relatively easy to calculate the coordinates of the pattern objects, so that we do not need a sophisticated automated system to deal with it. It should be geometrically easy to build a mathematical model of it, give the possibility to determine certain wanted properties, and be symmetric. And it should consist of about twenty to thirty objects: enough data for the calculations to neglect the noise, but not too much, so that the time of the calculations stays relatively small.

During the research, generally three types of calibration patterns were used: blobs uniformly distributed on a circle, a grid of lines and a grid of squares (see Figure 6.2). At the beginning, a calibration pattern with blobs uniformly placed on a circle, usually with twelve or twenty four blobs, was used for several reasons. It is very easy to model mathematically, and it allows determining lines with a specific wanted angle between them, which was crucial at that stage of my research. But it has one significant drawback: it has to be printed very accurately to keep its symmetry. When printed on a common printer the pattern was scaled in one direction, so the distances between opposite points were different and the calibration procedure was corrupted.

a) b) c)
Figure 6.2 Different calibration patterns: a) blobs uniformly placed on the circle, b) grid of lines, c) grid of squares
In the later stage, when the requirements changed, I started to use a grid of five vertical and five horizontal lines. It gives twenty five calibration points, which is enough to minimize the noise, for example in the 2D homography calculations, and it ensures quite fast calculations, especially for the mirror pole optimization, where too big a number of coordinates can enlarge the execution time. When it is printed on a commonly used printer and happens to be scaled in one direction, it is not a problem to correct this during the calibration procedure. But it also has a very important drawback: because the coordinates of the intersection points are used in the calculations, it is difficult to build an accurate system to determine them.

The best calibration pattern for the algorithm described in my work is the grid of squares. It has all the advantages of the grid of lines, and it is very simple to calculate the coordinates of its objects. It is also the pattern commonly used in many camera calibration procedures found in the literature, for example by Zhengyou Zhang {Ref. 8}.

Setting the scene of the calibration, in which we will perform the measurements, is also of big importance. It is obvious that the camera should capture in the image the whole calibration object and its mirror reflection. Of big importance is the angle 'φ' between the mirror plane 'M' and the retinal plane 'R'. When the angle 'φ' increases, the distance of the mirror pole 'e' to the central point 'c' also increases and the distance of the vanishing mirror line 'L' to the central point 'c' decreases. Reversely, when the angle 'φ' decreases, the distance of the mirror pole 'e' to the central point 'c' also decreases and the distance of the vanishing mirror line 'L' to the central point 'c' increases (see Figure 2.7). In the situation when the angle 'φ' is bigger than 45° the accuracy of the calculation of the mirror pole 'e' decreases significantly compared with the accuracy of the calculation of the vanishing mirror line 'L', and for smaller angles we have the opposite situation. We have to keep in mind that the accuracy of the calculation of those two values has a direct impact on the accuracy of the calculation of all intrinsic and extrinsic parameters and of the whole calibration procedure. It seems that the best situation is obtained when the angle between the retinal plane 'R' and the mirror plane 'M' equals 45°; then the distance of the vanishing mirror line 'L' to the central point 'c' and the distance of the mirror pole 'e' to the central point 'c' are equal, and the accuracy of calculation of both of them is similar. So, setting the measuring scene, we have to remember that the positioning of the mirror has a big impact on the accuracy of the measuring results.

Preparing for the calibration we also have to determine the dimensions of the objects which will be measured in the future. Because any change in the settings of the camera, focus or zoom, can change the intrinsic calibration parameters, after calibration we should not change any camera settings any more. In consequence, to obtain good parameters of the image, the measured object should be at the same distance to the camera as the calibration pattern and have more or less similar dimensions to it.
Now, having all the important information, I will describe the proposed camera calibration algorithm which was developed during the research:

1. Perform radial distortion correction using the model obtained for the given camera.
2. Perform mirror pole optimization. It is best to calculate the mirror pole 'e' at the beginning (using equation 5.3) and use it as a starting point for the optimization.
3. Perform homography correction.
4. Calculate the mirror pole 'e'.
5. Compute the projection of the calibration pattern on the mirror plane.
6. Calculate the vanishing mirror line 'L'.
7. Calculate the camera angle 'θ'.
8. Translate all the calculated parameters and object points to the horizontal pixel coordinate frame.
9. Calculate the mirror angle 'φ'.
10. Calculate the intrinsic parameters of the camera: the coordinates of the central point 'c' and the focal distance 'f'.
11. Translate the object points to the horizontal standard coordinate frame.
12. Use the triangulation method to translate the calibration pattern points to the camera referenced 3D frame.
13. Using the information about the dimensions of the known calibration pattern, calculate the scale factor 'd'.

For the implementation in Matlab 6.5 please refer to the appendix. Performing all these calibration steps we calculate all the parameters needed for 3D reconstruction for a measuring purpose. We will use all the intrinsic parameters, the focal distance 'f' and the central point 'c', and the extrinsic one, the camera angle 'θ'. To perform the mirror pole correction we will also use the mirror pole 'e'.
• 42. 8 The biggest source of errors in our image is the lens. It is usually the lens which distorts the image the most, depending on its quality. Commonly used simple cameras are equipped with very cheap lenses which, through several optical phenomena, change and deflect the image of the measured object in an almost irreversible way. The first phenomenon is chromatic aberration. Chromatic aberration arises from dispersion, the property that the refractive index of glass differs with wavelength (see Figure 2.3). There are two types of chromatic aberration: longitudinal aberration and lateral aberration.
Figure 2.3 Chromatic aberration
- Longitudinal chromatic aberration causes different wavelengths to focus on different image planes.
- Lateral chromatic aberration is the color fringing that occurs because the magnification of the image differs with wavelength.
There are several ways of removing chromatic aberration. In production, very exotic glasses with very low dispersion are used, like the "Hi-UD" glass produced by Canon {Ref. 3}. One can also use a lens with a very big focal distance, so that the light does not have to be refracted so much, or a system of two or three lenses with different types of glass, so that the aberration of one lens is corrected by another. But all those solutions are very expensive, and usually we have to compensate for this by preprocessing {Ref. 2}. Most photographic lenses are composed of elements with spherical surfaces. Such elements are relatively easy to manufacture, but their shape is not ideal for the formation of a sharp image. Spherical aberration is an image imperfection that is due to the spherical lens shape; Figure 2.4 illustrates the aberration for a single, positive element. Light that hits the lens close to the optical axis is focused at position ‘c’. The light that traverses the margins of the lens comes to a focus at a position ‘a’ closer to the lens.
• 43. 43 - Example on real data
Using the equations from the previous chapters and the algorithm described in my work, let us follow one example on real data. We assume that we know the radial distortion model and will concentrate on the calibration computations and measurements. For calibration purposes we take a picture of a grid of lines (see Figure 6.3) and afterwards we will measure the dimensions of a cube (see Figure 6.4). The pixel coordinates of the intersections of the grid of lines before any corrections are presented below for the object Od = (u, v) and its reflection Rd = (u, v).
Od = { (750.83, 36.22) (824.59, 70.40) (903.91, 107.15) (992.57, 148.24) (1088.29, 192.59)
       (729.50, 153.63) (801.00, 192.51) (877.88, 234.32) (962.99, 280.61) (1054.91, 330.60)
       (708.96, 266.72) (778.35, 309.78) (852.96, 356.07) (934.78, 406.85) (1023.20, 461.72)
       (689.13, 375.84) (756.50, 422.87) (828.93, 473.43) (907.62, 528.35) (992.73, 587.75)
       (669.98, 481.22) (735.44, 531.92) (805.81, 586.42) (881.54, 645.08) (963.51, 708.57) }
Rd = { (389.42, 101.66) (310.03, 154.82) (234.67, 205.29) (162.50, 253.62) (93.62, 299.75)
       (430.02, 193.78) (348.88, 245.18) (272.19, 293.75) (198.63, 340.34) (128.58, 384.71)
       (471.70, 288.37) (388.78, 337.96) (310.73, 384.63) (235.76, 429.46) (164.52, 472.06)
       (513.98, 384.30) (429.27, 432.14) (349.89, 476.97) (273.52, 520.11) (201.11, 561.01)
       (557.13, 482.22) (470.60, 528.25) (389.85, 571.20) (312.04, 612.59) (238.44, 651.74) }
Figure 6.3 Image of the grid of lines calibration pattern (intersection points marked with red dots, their reflections marked with green dots and the projection of the calibration pattern on the mirror plane marked with blue dots)
After removing radial distortions, mirror pole optimization and homography correction we obtain the following coordinates of the calibration pattern ‘Oc’ and its reflection ‘Rc’, and we can start the calculation of the camera calibration parameters.
• 44. 44 Oc = { (752.38, 26.03) (827.78, 60.61) (910.22, 98.41) (1000.71, 139.91) (1100.50, 185.67)
       (730.67, 147.65) (803.38, 186.71) (882.71, 229.33) (969.59, 276.01) (1065.17, 327.37)
       (709.90, 263.98) (780.08, 307.11) (856.50, 354.07) (940.02, 405.39) (1031.68, 461.71)
       (690.01, 375.37) (757.82, 422.17) (831.50, 473.04) (911.87, 528.51) (999.88, 589.26)
       (670.95, 482.12) (736.51, 532.26) (807.63, 586.64) (885.05, 645.84) (969.64, 710.52) }
Rc = { (382.64, 93.08) (300.80, 146.05) (222.30, 196.86) (146.95, 245.63) (74.56, 292.49)
       (425.77, 188.93) (342.63, 240.15) (262.91, 289.26) (186.39, 336.40) (112.90, 381.68)
       (469.32, 285.73) (384.87, 335.15) (303.90, 382.54) (226.20, 428.01) (151.58, 471.68)
       (513.30, 383.48) (427.51, 431.07) (345.27, 476.70) (266.38, 520.46) (190.62, 562.49)
       (557.72, 482.20) (470.57, 527.93) (387.04, 571.75) (306.93, 613.78) (230.01, 654.14) }
For the corrected pixel coordinates of the calibration pattern and its reflection, the described algorithm gives the following parameters.
Mirror pole e = ( -1833.50, 511.82 ). Central point c = ( 463.21, 511.82 ). Focal distance f = 2236.48. Camera angle θ = 0.59 º. Mirror angle φ = 45.76 º.
Now, by means of the intersection method, we reconstruct the coordinates of the calibration object and calculate its dimension in the vertical direction (between points 1-21, 2-22, 3-23, 4-24, 5-25 for the object, and between points 1’-21’, 2’-22’, 3’-23’, 4’-24’, 5’-25’ for the reflection), up to a global scaling factor.
D = { 0.3093 0.3091 0.3089 0.3086 0.3084 }
D’ = { 0.3088 0.3089 0.3091 0.3092 0.3093 }
Knowing that the real dimension of the calibration pattern equals dim = 106.5 mm, we calculate the scaling factor using the mean value of all the dimensions for the vertical direction (a minimal numeric check is given below). The unit in which we express the dimension of the calibration object is important, because it will also determine the units in which the coordinates of the camera-referenced real world frame will be expressed. Scaling factor d = 344.65.
After the calculation of all camera calibration parameters, using the same settings we can start our measurements. As an example we will calculate the dimensions of a cube.
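A minimal numeric check of the scale factor step (Matlab), using the values listed above; averaging over both D and D’ is an assumption of this sketch:

D  = [0.3093 0.3091 0.3089 0.3086 0.3084];   % vertical dimensions of the object
Dp = [0.3088 0.3089 0.3091 0.3092 0.3093];   % the same for the reflection
dim = 106.5;                                 % known pattern dimension [mm]
d = dim / mean([D Dp])                       % about 344.7, close to the reported
                                             % d = 344.65; the small difference comes
                                             % from the rounding of D and D' above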
• 45. 45 Figure 6.4 Image of the measured object (the object marked with red dots and its mirror reflection with green dots)
From the measuring image we obtain the following pixel coordinates of the four corners of the cube, Od (u, v), and of its reflection, Rd (u, v).
Od = { (999, 690) (877, 767) (994, 773) (937, 923) }
Rd = { (477, 656) (409, 720) (284, 704) (432, 846) }
After removing radial distortions and performing the mirror pole correction we obtain the following pixel coordinates.
Oc = { (1012.77, 683.48) (888.63, 761.97) (1009.29, 768.77) (954.03, 923.39) }
Rc = { (481.60, 651.45) (412.36, 718.20) (283.32, 703.15) (436.16, 846.93) }
Now, using the intersection method, we calculate the coordinates of the object, RO (x, y, z), and of its reflection, RR (x, y, z), in the camera-referenced real world coordinates. These coordinates refer to the same unit that was used in the calculation of the scaling factor, so they are expressed in [mm].
RO = { (123.44, 38.56, 502.37) (91.90, 54.03, 483.13) (109.50, 51.52, 448.46) (107.44, 90.09, 489.59) }
RR = { (5.07, 38.56, 617.64) (-13.31, 54.03, 585.59) (-48.44, 51.52, 602.26) (-7.27, 90.09, 601.30) }
• 46. 46 Having the coordinates of the object points we can easily calculate the dimensions of the cube by calculating the distances between its corners. We also have the coordinates of the mirror reflection, so we can calculate the dimensions of the reflection as well, which should be exactly the same.
Dimensions of the cube: dim2-1 = 40.05 mm dim2-3 = 38.96 mm dim2-4 = 39.79 mm
Dimensions of the reflection of the cube: dim2’-1’ = 40.05 mm dim2’-3’ = 38.96 mm dim2’-4’ = 39.79 mm
The real dimension of the edge of the cube was 40 mm (a minimal check of these distances is given below). The accuracy of the measurements which we perform will be discussed in the next chapter.
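The distance computation itself is elementary. A minimal check in Matlab on the reconstructed corner coordinates ‘RO’ listed on the previous page (values in mm):

RO = [ 123.44  38.56 502.37      % corner 1
        91.90  54.03 483.13      % corner 2
       109.50  51.52 448.46      % corner 3
       107.44  90.09 489.59 ];   % corner 4
norm(RO(2,:) - RO(1,:))          % dim2-1 -> 40.05 mm
norm(RO(2,:) - RO(3,:))          % dim2-3 -> 38.96 mm
norm(RO(2,:) - RO(4,:))          % dim2-4 -> 39.79 mm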
• 47. 47 7. Accuracy measures
All error calculations are based on the experimental approach, using multiple data sets for calculating the mean value and standard deviation. It is very important to work on precise data, so we have to know the dimensions of the calibration objects very precisely and carefully calculate the coordinates of the objects on the image.
- Description of the experimental accuracy measures and procedures
As the calibration pattern we use a grid of five thin horizontal and five thin vertical lines (see Figure 7.1 a). Using twelve images of such a calibration pattern we estimate the error of each step of calculation and the stability of the whole calibration procedure. To determine the influence of the position of the measured object on the image and of its distance to the camera, we use sixty pictures of a rod (see Figure 7.1 b) positioned in different places in the image, at different angles and at different distances to the camera, relative to the distance of the camera to the calibration pattern. To estimate the absolute error of the method and the influence of the size of the calibration pattern relative to the size of the measured object, we use twelve pictures of a ruler (see Figure 7.1 c).
a) Calibration image. b) Image of the rod. c) Image of the ruler.
Figure 7.1 On the figure we see: a) calibration pattern – used to calculate all the calibration parameters of the camera, b) image of the rod – used to estimate the influence of the position of the object on the accuracy, c) image of the ruler – by measuring different distances on it we can estimate how the accuracy is related to the size of the calibration pattern
• 48. 48 - Accuracy of calibration procedure
Because no single common factor exists which evaluates the efficiency of each method, we estimate it by the stability of the calculations of the calibration procedure. The following table shows the results calculated for twelve calibration patterns, from which we calculate the mean value and standard deviation of each parameter. The best results were obtained after the pattern geometry reconstruction based radial distortion correction with mirror pole optimization and 2D homography correction.

Parameters      Procedure 1          Procedure 2          Procedure 3
                Mean       Std       Mean       Std       Mean       Std
e   u        -1764.82     27.04   -1831.77     13.24   -1827.80     14.03
    v          498.47     93.64     509.24     18.76     510.03      7.20
c   u          -46.77    153.33     441.54     71.70     476.01     27.78
    v          498.47     93.64     509.24     18.76     510.03      7.20
uL            3970.12    453.01    2701.82    151.93    2641.55     54.04
d              435.36     30.16     349.42     13.94     341.76      6.59
f             2614.35    110.65    2262.12     76.21    2232.96     26.35
φ               33.30      2.59      45.14      1.92      45.89      0.71
θ                0.02      3.02       0.37      0.64       0.40      0.30

Table 7.1 Comparison of the stability of calculation for three different calibration algorithms
Procedure 1 – without any distortion correction.
Procedure 2 – with pattern grid reconstruction based radial distortion correction.
Procedure 3 – with pattern geometry reconstruction based radial distortion correction, mirror pole optimization and 2D homography correction of the object and its mirror reflection.
Parameters:
‘e’ (u, v) – horizontal coordinates of the mirror pole
‘c’ (u, v) – horizontal coordinates of the central point
‘uL’ – ‘u’ coordinate of the vanishing mirror line
‘d’ – scale factor based on the known dimension of the calibration object
‘f’ – focal length
φ – angle between the mirror plane and the camera retinal plane [º]
θ – angle of horizontal correction [º]
• 49. 49 As was previously mentioned, the biggest error is produced by the radial distortions, so it is obvious that they will also cause the biggest error when we do not remove them. The following table shows the results of measuring the rod, whose true length is 57.5 [mm], and of one centimetre on the ruler (10 [mm]), with and without the correction of radial distortions.

                                  Without radial          With radial
                                  distortion correction   distortion correction
Rod    Mean value [mm]            53.09                   57.05
       Standard deviation [mm]     2.12                    1.10
Ruler  Mean value [mm]             9.61                    9.83
       Standard deviation [mm]     0.19                    0.13

Table 7.2 Comparison of the stability of the calculated dimensions of the measured object. The calculations were performed on sixty images of the rod and twelve images of the ruler, placed in different positions and at different distances to the camera.

From the table above it is clear that the removal of radial distortions is crucial to this procedure, and that the error without removing radial distortions is unacceptable for measuring purposes.
- Influence of position of measured object
It is also worth mentioning that we cannot remove one hundred percent of all nonlinear distortions. The biggest part of the nonlinear error is the one which results from radial distortions. Because the model of the radial distortions is a function of the radius, the smallest error is at its center, and the error increases the farther we are from it. The following table shows how the accuracy changes when taking measurements of an object in the middle of the image and at its borders.

                                  Object in the middle    Object at the borders
                                  of the image            of the image
Rod    Mean value [mm]            57.14                   57.09
       Standard deviation [mm]     0.91                    1.33

Table 7.3 The influence of the position of the object on the image on the accuracy of the measurements
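Returning for a moment to Table 7.2, the relative errors behind the above conclusion can be made explicit with plain arithmetic on the table values (Matlab):

(57.5 - 53.09) / 57.5 * 100   % without correction: about 7.7 % mean error
(57.5 - 57.05) / 57.5 * 100   % with correction: about 0.8 % mean error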
• 50. 50 It is also important to perform the calibration of the camera in circumstances similar to those in which the camera will afterwards make measurements. The following table shows how the accuracy changes when the distance of the measured object to the camera changes relative to the distance of the calibration pattern. First the measurement of the rod was performed at the same distance as the calibration pattern; we then increased and decreased the distance to the camera.

                                  Same distance as the    Increased distance    Decreased distance
                                  calibration pattern     to the camera         to the camera
Rod    Mean value [mm]            56.86                   56.45                 56.85
       Standard deviation [mm]     0.95                    0.99                  1.29

Table 7.4 The influence of the distance of the object to the camera on the accuracy of the measurements, relative to the position of the calibration pattern.

- Accuracy of distance measuring
When it comes to the practical use of the camera calibration procedure, the first assumption should be made about the dimensions of the measured object, and the calibration pattern should be prepared accordingly. It should be clear that it is impossible to measure objects which are smaller on the image than one pixel; also, after setting the camera, the object should be well focused, so it should be at approximately the same distance to the camera as the calibration object was. The object should fit the view, so it should not be too big. To determine how big the measured object should be relative to the calibration pattern, we perform the calibration of the camera on the grid with dimensions 106.6x106.5 [mm] and measure objects with dimensions in the range of 10 – 140 [mm]. As presented on Figure 7.2, the accuracy stabilizes and reaches its maximum when approaching the diameter of the calibration pattern. The relative error tends to 0.91 %, and for an object of the calibration pattern's diameter the accuracy equals 0.97 [mm].
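The two accuracy figures quoted above are consistent with each other, as a one-line check shows (Matlab):

0.97 / 106.5 * 100   % -> 0.91 %, the relative error at the calibration
                     %    pattern diameter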
• 51. 9 Figure 2.4 Spherical aberration
In this manner the focus position depends on the zone of the lens that is considered. When the marginal focus is closer to the lens than the axial focus, as exhibited by the positive element in Figure 2.4, we observe undercorrected spherical aberration. Conversely, when the marginal focus is located beyond the axial focus, the lens suffers from overcorrected spherical aberration. Ideally, a photographic lens images the world in a plane, where it is recorded by a sensor. Typically, the sensor is either an approximately flat film or a strictly flat digital array. Departures from a flat image surface are associated with astigmatism and field curvature, and lead to a spatial mismatch between the image and the sensor. As a result, the sensor samples a part of space in front of or behind the sharp image, and its representation of the image will thus be blurred. Owing to the closely connected natures of astigmatism and field curvature, it is convenient to treat these Seidel aberrations together. In the absence of spherical aberration and coma, a lens that is additionally free of astigmatism offers stigmatic imaging, i.e. points in object space are imaged as true points somewhere in image space. (Strictly speaking this is correct for one color of light only, since chromatic aberrations lead to blurring too.) A lens that suffers from astigmatism, however, does not offer stigmatic imaging. In the presence of astigmatism the rendering of an object detail depends on the orientation of that detail. For instance, a (short) line oriented towards the image center is called a sagittal (radial) detail, whereas a detail perpendicular to the radial direction is called a tangential detail. The astigmatic lens may be focused to yield a sharp image of either the sagittal or the tangential detail, but not both simultaneously. With a real lens, the sagittal and tangential focal surfaces are in fact curved (see Figure 2.5). This figure displays the astigmatism of a simple lens. Here, the sagittal ‘S’ and tangential ‘T’ images are paraboloids which curve inward to the lens.
• 52. 52 measured object: when changing the resolution from 1024x1280 pixels to 512x640 pixels, the error of the determined coordinates doubles. The smaller the resolution, the larger the error. In Table 7.5 we can see how the accuracy of the measured rod decreases when the resolution of the captured images decreases. Building an automated process, we will also deal with the error of the segmentation or edge detection performed on the image to obtain the coordinates of the object and its mirror reflection. Assuming that the error due to this preprocessing equals one pixel, we perform the following experiment. For the image of the rod from Figure 7.1 b we add the value of one pixel to the ‘v’ coordinate of one end: first only for the image of the object itself, then for its reflection, then for both, and we calculate the dimension for all the cases. Because the object on the image is placed vertically, we in fact enlarge it by one pixel.

Value of pixel added to the v coordinate    Measured          Difference of dimension between
Object        Reflection of object          dimension [mm]    original image and with changed
                                                              coordinates [mm]
+ 0           + 0                           56.5449           0.0
+ 1           + 0                           56.6582           0.1132
+ 0           + 1                           56.6885           0.1436
+ 1           + 1                           56.8018           0.2568

Table 7.6 The influence of a change of the pixel coordinates of the object or its reflection on the resulting change of the calculated dimension, on one image of the rod

Such errors are introduced into all our calculations and come from the limitations of the digital image representation. Summarizing: the measured object should be at the same distance to the camera as the calibration pattern, both should be of similar size, and the object should be placed in the middle of the picture. It is also important to note that the diameter of the calibration pattern has its own error. In our situation it was printed on a common inkjet printer. To verify its dimension we simply measured its diameters with an accuracy of up to 0.2 mm. This error also propagates to all our calculations.
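A small side observation on Table 7.6 (Matlab, using the difference column of the table): the effects of the two one-pixel perturbations add up exactly, so for such small coordinate errors the measured dimension responds linearly.

d_obj  = 0.1132;    % dimension change when the object end moves by one pixel [mm]
d_refl = 0.1436;    % dimension change when the reflection end moves by one pixel [mm]
d_both = 0.2568;    % dimension change when both ends move [mm]
d_obj + d_refl      % -> 0.2568, equal to d_both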
• 53. 53 8. Case study
Using the results of my research, the described algorithms and ideas, I prepared a simple complete system. The task of the system is to measure the length of bolts. As the measuring device I used a simple internet camera with a resolution of 320x240 pixels and poor lens quality; it was configured to capture black and white images with 256 grey levels. The camera was connected to the USB port of a PC class computer. The software was written in the LabVIEW 7.1 application, which makes it possible to use the prepared Matlab scripts and to connect this application to an external system. The user interface of the system is very simple. It displays the view from the camera and the calculated parameters of the camera calibration procedure, of the system calibration procedure and, finally, the length of the measured object. It allows the user to choose which corrections to the measurements should be performed: while calibrating our system we can turn on or off the radial distortion removal, the object optimization or the homography correction; when measuring, we can turn on or off the radial distortion removal and the mirror pole correction. Below the screen with the view from the camera we should enter the working path for storing temporary files during the execution of the program.
Figure 8.1 Calibration of the camera pattern with calculated parameters of radial distortions
• 54. 54 To perform the radial distortion correction, first we have to calculate its parameters for the used camera. To do this we use a pattern with a grid of squares which more or less fills the whole view of the camera. After pressing the button “Calibrate camera”, first the image stored in the temporary file “labview.bmp” is analyzed and the coordinates of the calibration pattern squares are stored in the file “distortedpatern.mat”. Then the parameters for the camera are calculated and stored in the temporary file “distortionmodel.mat”. These parameters are later used when calibrating the system and/or during the measurements. This step can be omitted when the radial distortion removal will not be used during the system calibration and measuring procedures.
Figure 8.2 Calibration of the system pattern with calculated parameters of the system
Before starting the measurements we have to set up the scene correctly: determine the position of the mirror and of the camera which will later allow us to capture in the camera view the measured object and its mirror reflection. Then we can choose which corrections should be performed in the calculations, and after pressing the “Calibrate system” button the image stored in the temporary file “labview.bmp” is analyzed and the coordinates of the calibration pattern squares and of their mirror reflections are stored in the file “labview.mat”. After that, all parameters of the system are calculated, stored in the file “calibrationmatrix.mat” for later measurements and displayed on the screen.
• 55. 55 Figure 8.3 Measuring the bolt
When all calibration parameters of the camera and of the system are known, we can start the measurements. The object should be placed more or less in the same position as the calibration pattern, and in such a manner that it is visible to the camera together with its mirror reflection. After pressing the button “Measure object”, the temporary image from the camera, “labview.bmp”, is analyzed and the coordinates of the far ends of the bolt are stored in the file “labview.mat”. Then, according to the set parameters, the measuring procedure is performed and the result is displayed on the screen. In the case presented on Figure 8.3 we measured a bolt with a true length of 45 mm; my system gave the result of 43 mm. The error of 4.5 % comes from the very poor quality of the used camera. Capturing the images from the camera is done in four steps. The image from the camera is not streamed directly to the interface but captured every second, stored in a temporary file on the disk and displayed from it. In the step with index zero the capture window from the camera is created, and the window handle is passed on as an output. Step number one is a simple time delay; because the image capturing operation is time consuming, it is necessary to introduce it. In the next step, with index two, the image whose window handle was obtained in the first step is stored on the disc in the specified file and directory with the specified attributes. The last step, with index three, destroys the capture window and displays the image from the temporary file on the screen.
• 56. 56 Figure 8.4 Capturing the image from the camera
The algorithm of the whole interface is quite simple. After pressing a button the appropriate Matlab script is executed. The appropriate interface options are transferred to the script and, after the script is executed, its outputs are displayed on the interface.

Calibrate camera script:
cd c:\webcam
name = 'labview.BMP';
paterncreation(name);
[ p_opt ] = distortionoptim;
du = p_opt(1); dv = p_opt(2);
ax1 = p_opt(3); ax2 = p_opt(4); ax3 = p_opt(5); ax4 = p_opt(6);
ay1 = p_opt(7); ay2 = p_opt(8); ay3 = p_opt(9);

Calibrate system script:
cd c:\webcam
name = 'labview.BMP';
type = 'grid';
dimen = 60;
getgrid(name);
if rd > 0
    rdistortionremoval(name);
end
[enh,e,c,f,skale,L,alfa,psi,Rot] = main(name,type,dimen,op,hm);
eu = e(1), ev = e(2);
cu = c(1), cv = c(2);

Measure object script:
cd c:\webcam
name = 'labview.BMP';
getextrema(name);
if rd2 > 0
    rdistortionremoval(name);
end
load('calibrationmatrix.mat');
[ Do , Dop , objectdim ] = measure(name,enh,Rot,skale,alfa,c,f,1,mp)

Table 8.1 Matlab scripts executed after choosing the appropriate action
• 57. 57 Figure 8.5 Algorithm diagram of the application interface
The presented scripts execute only the main files and transfer the interface options to the functions. The contents of all the functions used in the Matlab scripts are presented in the appendixes.
• 58. 58 9. Summary
- Accomplished goals
Summarizing my master thesis: I fulfilled the goals designated at the beginning in full. I presented a ready algorithm for the calibration of a digital camera for 3D reconstruction and an algorithm for measuring purposes. I performed experiments with different methods of calculating the camera parameters. I implemented algorithms removing the nonlinear errors created in the lens system, enhancing the accuracy of the system. And finally I built a ready exemplary system based on those algorithms. My master thesis also gives a solid background for future papers in this area of study.
- Proposals for future research
The biggest part of the experiments presented in my master thesis, and the data collected for it, were performed and gathered at the KDG university in Antwerp, Belgium, in the Industrial Vision Laboratory, as a part of the BOF project led by Luc Mertens and Rudi Penne. The whole practical part was made in the winter semester of the academic year 2004/2005 and, because of the lack of time, some ideas remain unfinished and unimplemented. They are worth mentioning here for future research.
From Table 7.1, procedure 3, we can observe that the calculations of the mirror pole ‘e’ and the central point ‘c’ are much more stable and precise for the ‘v’ coordinate than for the ‘u’ coordinate. This is because we calculate the mirror pole ‘e’ for the one direction which is aligned with the ‘u’ coordinate. To obtain similar stability for the second coordinate it is possible to use a second mirror and calculate a second mirror pole for the perpendicular direction. Then we can repeat the calculation procedure for the second direction and obtain better stability for the ‘u’ coordinate. Finally, for the measurements, we combine the results, using the ‘v’ coordinate of the central point ‘c’ from the calculations for the first mirror pole and the ‘u’ coordinate from the calculations for the second mirror pole.
In the paper of Christian Brauer-Burchardt and Klaus Voss {Ref. 13} we can read about the vanishing point triangle used to calculate the central point ‘c’, which is applicable to our system and can be tested as another method of central point calculation.
Having multiple calibration images and some exemplary measurement images, we can perform an optimization of the calculated parameters and, by constructing a suitable minimization function, improve the accuracy of the measurements.
• 59. 59 - Future and industrial applications
The possibility of non-contact length measurement has a wide field of applications, for example in industry, in many quality measurement systems, where it can drastically increase the speed of such a system. Nowadays such automated systems are delivered, for example, by KEYENCE and SIEMENS: hardware-implemented systems with the possibility of connecting multiple cameras and with a simple programming environment. The drawback of those systems is that the measured object should be precisely placed parallel to the camera. Its dimensions are then calculated in pixels and multiplied by a user-given factor which scales the measured distance from pixels to the wanted length unit. This method limits the range of possible applications and can be used for measurements in two dimensions only.
My master thesis can also be the starting point for further, more sophisticated applications. 3D reconstruction gives a wide range of applications: for example 3D scanners, where the accuracy of reconstruction of course influences the correctness of the representation of the scanned object; architectural applications for the virtual reconstruction of buildings, used in monument renovation works; and vision guided robot systems.
• 60. 10 Figure 2.5 Simple lens with undercorrected astigmatism. T - tangential surface; S - sagittal surface; P - Petzval surface
As a consequence, when the image center is in focus the image corners are out of focus, with tangential details blurred to a greater extent than sagittal details. Although off-axis stigmatic imaging is not possible in this case, there is a surface lying between the ‘S’ and ‘T’ surfaces that can be considered to define the positions of best focus. The surface P (see Figure 2.5) is the Petzval surface, named after the mathematician Joseph Miksa Petzval. It is a surface that is defined for any lens, but it does not relate directly to the image quality - unless astigmatism is completely absent. In the presence of astigmatism the image is always curved (whether it concerns S, T, or both) even if P is flat. All these phenomena together cause quite big distortions to our image, which result in radial distortions (see Figure 2.6) on the image and finally in errors of our measurement. The most commonly observed are pillow (pincushion) or barrel distortions, which are easy and almost completely possible to remove, but sometimes the distortions are more complex and we will observe wave distortions. Because the way they are created is well known to us, we can easily model and remove them by recalculating the positions of all pixels on the image, as sketched below.
a) b) c) Figure 2.6 Different kinds of radial distortions: a) barrel, b) pillow, c) wave
The problem of obtaining very good sharpness on both the object and its mirror reflection, which usually is almost impossible, makes the object edges blurred. This causes uncertainties in the coordinates of the points which make up the calibration object and of the points which determine the edges of the object we are going to measure. When calibrating the camera we can choose such an object that this error is minimized almost to zero, but in the case of measured objects we have to use some edge detection techniques to get better precision of measurement. Due to the mechanical construction of the camera we have to remember that the calibration parameters of the camera change with every change of the camera settings {Ref. 2,3,4}.
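For illustration, below is a minimal sketch in Matlab of the commonly used polynomial radial distortion model. Note that this is only a generic example: the model actually used in this thesis (with the parameters du, dv, ax1..ax4, ay1..ay3, see chapter 4 and the appendix) is parameterized differently, and the coefficients k1, k2 here are hypothetical.

% undistort.m - generic polynomial radial distortion correction (not the
% thesis's own model; k1 and k2 are hypothetical coefficients).
function [uc, vc] = undistort(ud, vd, cu, cv, k1, k2)
% (ud, vd) distorted pixel coordinates, (cu, cv) center of radial distortion
r2 = (ud - cu).^2 + (vd - cv).^2;    % squared radius from the center
s  = 1 + k1*r2 + k2*r2.^2;           % radial scaling factor
uc = cu + (ud - cu).*s;              % pixels are moved along the radius
vc = cv + (vd - cv).*s;              % away from or towards the center
end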
• 61. 61 Figure 8.2 Calibration of the system pattern with calculated parameters of the system .. 54
Figure 8.3 Measuring the bolt …………………………………………………………... 55
Figure 8.4 Capturing the image from the camera ………………………………………. 56
Figure 8.5 Algorithm diagram of application interface ………………………………… 57
- Index of tables
Table 4.1 Mean value of the distance of centers of objects to the line passing through them and the standard deviation for it, for a grid of 11x11 squares and a grid of 19x19 spots ………………………………………………………………….. 19
Table 4.2 Value of parameters of radial distortion models and coordinates of center of radial distortions calculated using pattern grid reconstruction algorithm …... 21
Table 4.3 Value of parameters of radial distortion models and coordinates of center of radial distortions calculated using pattern geometry reconstruction algorithm 22
Table 4.4 Value of coordinates of points before and after 2D homography correction of object plane ………………………………………………………………….. 25
Table 5.1 Mirror pole coordinates for two calculation methods ………………………. 30
Table 7.1 Comparison of stability of calculation for three different calibration algorithms …………………………………………………………………… 48
Table 7.2 Comparison of the stability of calculation dimensions of measuring object .. 49
Table 7.3 The influence of the position of object on the image on the accuracy of the measurements ……………………………………………………………….. 49
Table 7.4 The influence of the distance of object to the camera on the accuracy of the measurements relatively to the position of the calibration pattern ………….. 50
Table 7.5 The influence of the resolution of the image on the accuracy of the measurements ……………………………………………………………….. 51
Table 8.1 Matlab scripts executed after choosing appropriate action …………………. 56