3. Introduction (I)
The term "full frame" is used by users of digital single-lens reflex cameras (DSLRs) as shorthand for an image sensor format that is the same size as 35 mm format (36 mm × 24 mm) film.
Figure 1. Frame camera sensor size formats
4. Introduction (II)
Panoramic imagery is created either by digitally stitching together multiple images taken from the same position (left/right, up/down) or by rotating a camera with conventional optics and an area or line sensor.
Figure 2. Circular images
Figure 3. Panoramic imagery
5. Geometry of Digital Frame Camera
The geometry of image formation can be modeled as a
perspective projection, which describes the relationship
between the camera lens system and the image plane.
Figure 4. Geometry of Digital Frame Camera
Figure 5. Perspective projection
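As a minimal sketch of the perspective projection described above (assuming a simple pinhole model; the focal length and example point values here are illustrative, not from the project):

```python
def project_perspective(X, Y, Z, f, cx=0.0, cy=0.0):
    """Central perspective projection of a 3D point onto the image plane.
    f is the focal length, (cx, cy) the principal point offset."""
    # Pinhole model: image coordinates scale with focal length over depth.
    x = f * X / Z + cx
    y = f * Y / Z + cy
    return x, y

# Example: a point 10 m in front of a camera with a 35 mm lens.
x, y = project_perspective(1.0, 0.5, 10.0, f=35.0)
```

The same relationship underlies the collinearity condition used in frame-camera photogrammetry: image coordinates are proportional to object coordinates divided by depth.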
6. Geometry of Panoramic Camera
Panoramic photogrammetry is based on a cylindrical
imaging model, as generated by numerous analogue and
digital panoramic cameras or by a computational fusion
of individual central perspective images. Assuming the
camera rotation corresponds to a horizontal scan, the
resulting panoramic image has central perspective
imaging properties in the vertical direction only.
Figure 6. Coordinate systems defining a cylindrical panorama
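A minimal sketch of this cylindrical imaging model, assuming a horizontal scan about the vertical axis and a cylinder radius r (the function name and parameterization are illustrative):

```python
import math

def project_cylindrical(X, Y, Z, r):
    """Project a 3D point onto a cylindrical panorama.
    The horizontal coordinate follows the rotation angle, while the
    vertical coordinate keeps central perspective properties."""
    theta = math.atan2(Y, X)   # horizontal scan angle around the axis
    d = math.hypot(X, Y)       # horizontal distance to the rotation axis
    u = r * theta              # arc length along the cylinder
    v = r * Z / d              # perspective scaling, vertical direction only
    return u, v

u, v = project_cylindrical(1.0, 1.0, 0.5, r=1.0)
```

This makes explicit why the panorama is perspective only in the vertical direction: u depends linearly on the rotation angle, while v follows a central projection.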
7. Objective
Build point clouds and textured meshes from both cameras in indoor environments. Furthermore, the same imaging resources can be used for accuracy evaluation.
8. Methodology
The camera acquires a pair of fisheye images that are then stitched either (a) on the mobile phone or (b) with the ActionDirector desktop application. In addition, the fisheye images were stitched with software for panoramic photography: PTGui (c) and Autopano Giga (d) (Fig. 7).
Figure 7. The different options for the
generation of equirectangular projections
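All four options produce an equirectangular projection, which maps a viewing direction to pixel coordinates via longitude and latitude; a minimal sketch (the image dimensions here are illustrative parameters, not the Gear 360 output size):

```python
import math

def dir_to_equirect(X, Y, Z, W, H):
    """Map a viewing direction to equirectangular pixel coordinates.
    Longitude spans the full image width, latitude the full height."""
    lon = math.atan2(Y, X)                 # longitude in (-pi, pi]
    lat = math.atan2(Z, math.hypot(X, Y))  # latitude in [-pi/2, pi/2]
    u = (lon / math.pi + 1.0) * 0.5 * W    # 0 .. W across the panorama
    v = (0.5 - lat / math.pi) * H          # 0 .. H, top row = zenith
    return u, v

# A direction on the horizon maps to the vertical middle of the image.
u, v = dir_to_equirect(1.0, 0.0, 0.0, W=3840, H=1920)
```

The stitching software inverts this mapping per output pixel, sampling the corresponding ray from one of the two calibrated fisheye images.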
PTGui and Autopano Giga required a preliminary calibration project for the estimation of distortion coefficients and the relative orientation between the fisheye image pair; the other options used default solutions.
9. Methodology
Metric accuracy of single fisheye images
Relative orientation of front- and rear-facing images
The images have the typical configuration for camera calibration, i.e. several convergent images with roll variations.
The calibration procedure aims at determining the
relative position and attitude of front- and rear-facing
images, as well as their distortion parameters.
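The relative orientation estimated by this procedure can be represented as a rotation matrix and a translation vector mapping points between the two camera frames; a minimal sketch (the rotation and baseline values below are illustrative, not the actual calibration result):

```python
def apply_relative_orientation(point, R, t):
    """Transform a 3D point with a 3x3 rotation matrix R and translation t."""
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)
    )

# Illustrative relative orientation: ~180 deg rotation about the vertical
# axis (the rear camera looks backwards) plus a small baseline offset.
R = [[-1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, -1.0]]
t = (0.0, 0.0, 0.03)
p = apply_relative_orientation((1.0, 2.0, 3.0), R, t)
```

Once R and t are fixed by calibration, the two fisheye images can be oriented as a rigid pair in every subsequent project.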
3D Modeling
10. 3D modeling software
The spherical camera model is also available in some software for 3D modeling from images.
11. Results
Metric accuracy of single fisheye images
Table 1. Accuracy achieved with the front-facing camera
Such results confirm the good metric quality of the Samsung Gear 360 when the original fisheye images are used for photogrammetric applications.
The used software is ContextCapture,
which allows one to process fisheye images
with a mathematical formulation based on
the asymmetric camera model.
12. Results
Relative orientation of front- and rear-facing images
Figure 8. The special calibration tool used to estimate the relative orientation between front- and rear-facing images: the PTGui project with all images, and the final result in which only a pair of front- and rear-facing images is used for generating the equirectangular projection.
The RMS of pixel coordinates achieved with PTGui was about ±8.5 pixels, which is not an optimal result. Results with Autopano Giga were instead better: the achieved RMS of image coordinates was ±3.3 pixels.
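RMS figures of this kind summarize the reprojection residuals of the stitching adjustment; a minimal sketch with made-up residual values (the numbers below are illustrative, not the project data):

```python
import math

def rms(residuals):
    """Root mean square of image-coordinate residuals (in pixels)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical residuals in pixels, for illustration only.
value = rms([3.0, -4.0, 3.0, -4.0])
```

A lower RMS means the stitched rays agree more closely with the measured tie points, which is why the Autopano Giga solution is preferable here.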
13. Results
Evaluation Of Metric Accuracy With
Equirectangular Projections
Table 1. The statistics for control points and check
points with the equirectangular projections
generated with different solutions.
Figure 9. One of the equirectangular projections
and the 8 targets used as control and check
points.
14. Results
HDR 360° photo capturing
SLR photo capturing
Coloured point cloud (11,989,171 points, medium quality settings) and textured mesh were obtained by using 60 frame images (18 Mpx). Photo capturing took 2.5 hours.
Coloured point cloud (2,373,124 points, medium quality settings) and textured mesh were obtained by using 12 spherical images (50 Mpx).
Camera position
15. Results
Canon 550D SLR point cloud
HDR 360° point cloud
FARO Focus 3D X130 coloured point cloud
As shown in the results, the geometric definition is very good, very close to the one obtained by laser scanning. Nevertheless, some strong shadows can be seen on the vaults and near the floor.
Coloured point cloud (41,892,875 points after registration and subsampling, medium quality settings) was obtained by using 4 scan stations. Scan data capturing took 50 min.
Comparison of point clouds
16. Results
Canon 550D SLR textured mesh
HDR 360° textured mesh
Point cloud resolution obtained with the HDR 360° camera is clearly lower but sufficient for general geometry documentation. Furthermore, the dense point cloud shown was obtained using "medium" PhotoScan settings; a point cloud with higher density could therefore be obtained by selecting the "high" or "ultra-high" settings.
Comparison of textured meshes
17. Results
Table 2. Canon 550D SLR frames compared to NCTech iSTAR HDR spherical images
Comparison of frame camera methodology
against the spherical methodology
18. Results
At first sight, the point cloud obtained by using Canon 550D SLR data seems more uniform. The majority of the points showed a distance difference of 0.002 m, with the most common values under 0.025 m.
Canon 550D SLR point cloud deviation compared to FARO Focus 3D X130
19. Results
The results shown by the HDR 360° data comparison seem less uniform at first sight. Nevertheless, the distribution of values is quite similar: the majority of the points showed distance differences of 0.002 m, with the most common values under 0.025 m.
HDR 360° point cloud deviation compared to FARO Focus 3D X130
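Deviations of this kind are typically computed as nearest-neighbour distances from each photogrammetric point to the reference laser scan; a minimal brute-force sketch on toy coordinates (real projects would use a spatial index such as a KD-tree):

```python
def cloud_to_cloud_distances(test_points, ref_points):
    """For each test point, the distance to its nearest reference point."""
    dists = []
    for p in test_points:
        # Brute-force nearest neighbour over the reference cloud.
        best = min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in ref_points)
        dists.append(best ** 0.5)
    return dists

# Toy data: a flat reference grid and a slightly offset test cloud (metres).
ref = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
test = [(0.05, 0.0, 0.0), (0.2, 0.2, 0.002)]
d = cloud_to_cloud_distances(test, ref)
```

Histogramming these distances gives the distributions compared above, where most values fall around 0.002 m.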
20. Results
The reconstruction from front- and rear-facing fisheye images has better accuracy but is partially incomplete, especially in the area above the camera.
The achieved mesh has a better quality than that generated from the equirectangular image.
On the other hand, the reconstruction is partially incomplete, especially in the area of the vault, which was instead modeled in the case of equirectangular projections.
Single equirectangular projections or
pairs of front- and rear-facing images?
21. Conclusion
1. With accurate pre-calibrated HDR 360° cameras, image matching techniques have also become competitive in terms of colour quality and time consumption.
2. Point cloud resolution obtained from image matching techniques is sufficient for both general and detailed documentation.
3. Image matching techniques using accurate pre-calibrated HDR 360° cameras make the task easier and faster.
22. Future Work
1. Build 3D models from video, where image frames can be extracted from the video for reconstruction.
2. For 3D modeling, lighting could also be considered, i.e. comparing the intensity of light from inside and outside the room.
23. References
A. Pérez Ramos and G. Robleda Prieto, 2016. Only Image Based for the 3D Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras. ISPRS, Volume XLI-B5.
L. Barazzetti, M. Previtali, and F. Roncoroni, 2017. 3D Modelling with the Samsung Gear 360. ISPRS, Volume XLII-2/W3.