Name: Muhammad Irsyadi Firdaus
Student ID: P66067055
DIGITAL PHOTOGRAMMETRY
3D reconstruction by photogrammetry and 4D deformation measurement
1. Objective
The objective of this project is to generate a 3D reconstruction of an object and to perform a 4D deformation measurement.
2. Material and Methods
• Equipment
Several pieces of equipment were used in this project:
- Hardware
A Sony Alpha 6300 camera was used to capture the object images. The camera and its specifications are shown in Figure 1 and Table 1.
Figure 1. Sony Alpha 6300 Camera
Table 1. Sony Alpha 6300 Camera Specification
Specification Description
Focal Length 20 mm
Pixel Size 3.92 µm
Sensor Size 23.5 x 15.6 mm
Image Size 6000 x 4000 pixels
Sensor Type CMOS
Effective Pixels 24.2 megapixels
- Software
a. Agisoft PhotoScan Pro, used to generate the 3D models
b. CloudCompare, used to compare the point clouds before and after the deformation
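As a quick sanity check on the expected level of detail, the object-space footprint of one pixel can be estimated from the specifications in Table 1. The sketch below is a minimal Python calculation; the camera-to-object distance is an assumed value, since the shooting distance was not recorded in this report.

```python
# Rough object-space resolution from the Table 1 specifications.
# The camera-to-object distance is an assumed (hypothetical) value.
focal_length_mm = 20.0          # focal length from Table 1
pixel_size_mm = 3.92e-3         # 3.92 micron pixel pitch
assumed_distance_mm = 500.0     # assumed shooting distance of about 0.5 m

# Pinhole relation: footprint on the object = pixel size * distance / focal length
pixel_footprint_mm = pixel_size_mm * assumed_distance_mm / focal_length_mm
print(f"Approximate object-space pixel footprint: {pixel_footprint_mm:.3f} mm")
# At roughly 0.5 m this gives about 0.1 mm per pixel, i.e. sub-millimeter detail.
```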
• Material
The objects of this project are a deflated ball and a fully inflated (full) ball, photographed with the Sony Alpha 6300 camera (20 mm focal length, 6000 x 4000 pixel images, and 23.5 x 15.6 mm sensor).
To generate the 3D model of an object, I first took several images of the object. I took the pictures from many directions to cover the whole shape of the object. For detailed areas, such as curved or complex regions, I took more pictures than for other areas so that the object texture can be seen clearly. In total, 72 images of the deflated ball and 65 images of the full ball were used in this project. The objects can be seen in Figure 2.
For the accuracy analysis, coded targets need to be placed on the object at several locations and photographed together with the object, so the pictures used to generate the 3D model also contain the coded targets. The measured distance between the coded targets then needs to be entered into Agisoft PhotoScan to set the scale of the 3D model; a minimal sketch of this scaling step follows.
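The sketch below shows the idea behind the scale constraint, assuming the real distance between two coded targets and the corresponding distance in the unscaled model are known. The numbers are placeholders, not the values used in this project.

```python
import numpy as np

# Placeholder values: the real distance between two coded targets (measured on
# the object) and the same distance measured in the unscaled model.
real_distance_m = 0.100        # e.g. 10 cm between two coded targets
model_distance = 0.237         # same span in arbitrary model units

scale = real_distance_m / model_distance

# Apply the scale to an (N, 3) point cloud exported from the model.
points = np.random.rand(1000, 3)     # placeholder for the exported point cloud
points_scaled = points * scale       # coordinates are now in meters
print(f"Scale factor applied to the model: {scale:.4f}")
```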
In addition, to quantify the deformation between the full ball and the deflated ball, we can use CloudCompare to compute cloud-to-cloud distances. Before doing so, make sure the two point clouds have been aligned; this requires identifying the same conjugate points in each 3D model.
Figure 2. (a) full ball image, (b) deflated ball image
• Methods
To generate a 3D reconstruction of an object and perform the 4D deformation measurement, several steps need to be carried out following the project workflow shown in Figure 3.
The workflow is applied in parallel to the two image sets (deflated ball and full ball): Object Images → Align Photos → Camera Calibration and Optimization → Add Coded Targets and Distance → Re-Optimise → Build Dense Point Cloud → Build Mesh Model → Build Textured Model → 3D Model Analysis → Qualitative and Quantitative Accuracy Analysis → Conclusion.
Figure 3. Workflow for generating the 3D reconstruction of an object and the 4D deformation measurement
We have two objects: the deflated ball and the full ball. The steps to generate the 3D models are the same for both objects. The project workflow is detailed below; a scripted sketch of these steps follows the list.
1. For each set of object images, I performed the photo alignment process in Agisoft PhotoScan to create a sparse point cloud.
2. After the photos were aligned, camera calibration and optimization were carried out to improve the quality of the sparse point cloud. At this stage a bounding region box can be defined to select the specific object to be modelled.
3. The measured distance between the coded targets was then added to the model by creating four markers on different photos.
4. The dense point cloud was built from the sparse point cloud. After the dense point cloud was built, unwanted data such as outlier points can be removed.
5. The shaded, solid, and wireframe mesh models were built.
6. The textured model was built.
7. The textured 3D model was analyzed, for example by checking which parts of the object are not visible.
8. The point cloud of the model was exported to a .las file for the accuracy analysis.
9. A qualitative and quantitative accuracy analysis was performed between the point clouds of the 3D models.
10. Conclusions were drawn from the analysis.
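For reference, steps 1-8 can also be run from the PhotoScan Python console. The sketch below follows the list above; the function names come from the PhotoScan Pro 1.x Python API, exact arguments and defaults may differ between versions, and the folder and file paths are placeholders.

```python
# Sketch of steps 1-8 in the Agisoft PhotoScan Python console (PhotoScan 1.x).
# Paths are placeholders; API details may vary with the PhotoScan version.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Step 1: add the photos and align them to build the sparse point cloud.
chunk.addPhotos(glob.glob("deflated_ball_photos/*.JPG"))
chunk.matchPhotos()
chunk.alignCameras()

# Step 2: refine the camera calibration and orientation.
chunk.optimizeCameras()

# Step 3: detect the coded targets and add a scale bar between two of them
# (0.1 m is a placeholder for the measured target-to-target distance).
chunk.detectMarkers()
if len(chunk.markers) >= 2:
    scalebar = chunk.addScalebar(chunk.markers[0], chunk.markers[1])
    scalebar.reference.distance = 0.1
chunk.optimizeCameras()   # re-optimise after adding the scale constraint

# Steps 4-6: dense point cloud, mesh, and texture.
chunk.buildDenseCloud()
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

# Step 8: export the dense point cloud for the accuracy analysis
# (the LAS format enum name depends on the PhotoScan version).
chunk.exportPoints("deflated_ball.las", format=PhotoScan.PointsFormatLAS)

doc.save("deflated_ball_project.psx")
```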
3. Results and Analysis
a. Dense Point Cloud, Mesh and Textured 3D Model
The first main objective of this project is to generate the 3D models using the Agisoft PhotoScan workflow. Following the process in Agisoft PhotoScan, several results were obtained: the sparse point cloud, the dense point cloud, the mesh model, and the textured model. The results are shown in Figure 4 and Figure 5.
Figure 4. full ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Dense Cloud
Classes, (d) Shaded, (e) Solid, (f) Wireframe Mesh Object, and (g) Textured Object
Figure 5. deflated ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Shaded,
(d) Solid, (e) Wireframe Mesh Object, (f) Dense Cloud Classes, and (g) Textured
Object
From the results above, we can see that the 3D model of the ball is generated in great detail. The spherical shape and the color of the object can be seen clearly and look like the real appearance captured in the images. The 3D textured model and a real image are compared in Figure 6.
Figure 6. The Appearance of (a) 3D Textured Model and (b) Real Image of ball
From Figure 6, we can see that the spherical shape of the object is reconstructed clearly. Also, the model color looks similar to the real color in the image. We can also see that some light shadows were captured and reproduced in the model. This is a good result for generating a 3D model with a non-metric digital camera. In my assessment, the quality of the 3D model was influenced by four factors.
1. The lighting or environmental conditions around the object. To generate a 3D model, the object images are captured with a digital camera, which is a passive sensor that only receives the visible light reflected from the object in front of it. If the object images are captured in poor lighting conditions, the 3D result can have poor visibility or a very dark texture color.
2. The object itself. Some objects are difficult to reconstruct because of their material or shape. In this project I chose a rubber ball as the object, and the 3D model turned out well because the object does not move and the visible light reflects back to the camera easily. More experiments with different object materials are needed to understand the effect of the material itself.
3. The image acquisition technique, including the camera-to-object distance and the image overlap. When taking the pictures, we need to consider the distance to the object: if the pictures are taken too far from the object, the 3D model will be poor because some object details are lost or cannot be recognized. The other factor is the image overlap: to create a good 3D model, highly overlapping images are needed to avoid gaps between images, and more overlap produces a better-quality 3D model (see the sketch after this list). Since I used 72 images of the deflated ball and 65 images of the full ball, it makes sense that I obtained a very good 3D model.
4. The camera specification and settings. A high-resolution, stable camera produces a good 3D model. A camera that can capture small object details without blur provides very good images, which means the 3D model can be generated in more detail than with a low-resolution camera.
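As a rough illustration of the overlap factor, the horizontal field of view of the camera in Table 1 can be computed, and from it a crude estimate of how many camera stations one ring around the object needs for a given overlap. The 70% overlap target is an assumed value, and the station count is only a back-of-the-envelope heuristic, not the planning method actually used in this project.

```python
import math

# Horizontal field of view from the sensor width and focal length in Table 1.
sensor_width_mm = 23.5
focal_length_mm = 20.0
target_overlap = 0.70   # assumed overlap target, not from the report

h_fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Crude heuristic: advance by at most (1 - overlap) of the field of view per
# station to keep the desired overlap with the previous image.
step_deg = h_fov_deg * (1 - target_overlap)
stations_per_ring = math.ceil(360 / step_deg)

print(f"Horizontal field of view: {h_fov_deg:.1f} degrees")
print(f"Stations per ring for {target_overlap:.0%} overlap: about {stations_per_ring}")
```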
Besides these positive points, I found a case where the result shows a little distortion. This problem appears at the bottom of the 3D model, as shown in Figure 7.
Figure 7. Textured Object (a) full ball and (b) deflated ball
From Figure 7, we can see that some areas show dense point cloud distortion, and some areas of the object are missing and appear as holes. This problem is due to several factors. The first factor comes from the photo alignment process and the number of images used: the textured model is generated from the dense point cloud, which is produced by the photo alignment process and is theoretically based on the image matching concept. To see the relationship between the alignment process and the number of photos, we can inspect the dense point clouds of the objects shown in Figure 8.
Figure 8. Dense point cloud (a) full ball and (b) deflated ball
From Figure 8, we can see that some outlier points appear at the boundary of the object. I had already deleted some outlier points, but a few still remain. The dense point cloud is generated from the sparse point cloud produced by the photo alignment process, and that process is affected by how many images are used. Although I processed 72 images of the deflated ball and 65 images of the full ball, only a few of them cover the bottom of the object, which means only little information about this area is available. More images of this area are needed to obtain more information about the bottom. We can therefore conclude that using more images when building a 3D model produces a better-quality result.
The picture-taking technique also affects the 3D model. As mentioned before, we need to take the object distance and the image overlap into consideration. I checked every image I took and found that, for the bottom area of the object, the images were captured from slightly too far away and no image was captured from the bottom direction. The camera positions of this project are shown in Figure 9.
Figure 9. Camera Position. (a) full ball, (b) deflated ball
We can see in Figure 9 that no camera position was located below the object. The bottom-area images were captured as tilted images taken from higher positions, so no information is available for the bottom of the object. In my opinion, if we want a good result for a given direction, we also need to take images from that direction. From this discussion we can conclude that taking pictures too far from the object decreases the object detail, and that the lack of information about an area creates a blank space or hole in the object. In addition, the shape of the object also affects the outcome of the 3D model: in this case the object is spherical, so it requires more photos because some parts of the object are difficult to capture in a photo.
b. Qualitative and Quantitative Accuracy
After analyzing the appearance of the 3D model results, we can analyze the qualitative and quantitative accuracy of the 3D models themselves. First, the point cloud data of the two objects are loaded. In CloudCompare, the alignment process was performed to bring the two point clouds close to each other. After the process finished, the point clouds had moved closer together, as shown in Figure 10.
Figure 10. Result of Aligning Two Pointcloud Data
The yellow point cloud represents the full ball and the red point cloud represents the deflated ball. From the figure we can see that the two point clouds are not aligned perfectly; the differences between the objects are still visible, and the full ball point cloud sits below the deflated ball point cloud. In my assessment, this is caused by the preceding alignment step: in the alignment process I used the full ball point cloud as the reference and set four points as conjugate points. The positions of the conjugate points are shown in Figure 11.
Figure 11. Conjugate Points Position on 3D Model Pointcloud Data
After the qualitative assessment, we can analyze the quantitative accuracy between the two point cloud data sets. Because the accuracy is assessed between two data sets, it is a relative accuracy. After the alignment process, the RMS error was 0.431 mm, with a corresponding transformation matrix reported by CloudCompare. Since the RMS error is only 0.431 mm, the alignment process performed well in generating the aligned point cloud data; a sketch of this conjugate-point fit and RMS computation is given below.
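The point-pair alignment that CloudCompare performs can be sketched as a least-squares rigid-body fit (Kabsch/Horn style) between the conjugate points, followed by the RMS of the residuals. The coordinates below are placeholders, not the points actually picked in this project.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Placeholder conjugate points (four per cloud), in meters.
deflated_pts = np.array([[0.00, 0.00, 0.00],
                         [0.10, 0.00, 0.00],
                         [0.00, 0.10, 0.00],
                         [0.05, 0.05, 0.08]])
full_pts = deflated_pts + np.array([0.002, -0.001, 0.003])   # fake offset

R, t = rigid_fit(deflated_pts, full_pts)
residuals = (deflated_pts @ R.T + t) - full_pts
rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"Alignment RMS error: {rms * 1000:.3f} mm")
```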
We can also analyze the accuracy of the 3D model itself from the errors of the scale bar measurements in the model. In this project, four measurements were defined from four markers placed on different images, as shown in Figure 12. From the four measurements I obtained an error of about 0.0002 m, which means that any measurement in this model has a positional error of about 0.2 mm for the full ball object and 0.3 mm for the deflated ball object.
Figure 12. Marker (a) full ball and (b) deflated ball
c. 4D Deformation Measurement
To learn more about the deformation between the full ball and the deflated ball, we can calculate the distance between the two point clouds and display the distance color scale on the 4D model point cloud. For the distance calculation, I set the full ball point cloud as the reference. After running the cloud-to-cloud distance function, I obtained the result shown in Figure 13; the computation itself is sketched below. In this project the mean deformation is 8.395 mm, the maximum deformation is 9.802 cm, and the standard deviation is 4.039 mm; in other words, the mean deformation between these two point clouds is 8.395 mm. This is a fairly small value, but it is consistent with the earlier observation that the 3D model point clouds were well aligned. We can check this with the color scale bar: blue represents the smallest deformation, while red represents the greatest deformation.
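CloudCompare's cloud-to-cloud distance is essentially a nearest-neighbor distance from each point of the compared cloud to the reference cloud. Below is a minimal sketch with placeholder clouds; real data would be loaded from the exported .las files, for example with a reader such as laspy.

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder clouds of shape (N, 3): the reference (full ball) and the
# compared (deflated ball) point clouds, in meters.
reference_cloud = np.random.rand(50000, 3)
compared_cloud = np.random.rand(40000, 3)

# Nearest-neighbor distance from every compared point to the reference cloud.
tree = cKDTree(reference_cloud)
distances, _ = tree.query(compared_cloud, k=1)

print(f"Mean deformation:   {distances.mean() * 1000:.3f} mm")
print(f"Max deformation:    {distances.max() * 1000:.3f} mm")
print(f"Standard deviation: {distances.std() * 1000:.3f} mm")
```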
Figure 13. Distribution of deformation between Two Point cloud Data
This condition can be explained as follows:
- The alignment process affects the result. As explained earlier, the number of conjugate points and their locations affect the result; using more points spread over separate locations increases the accuracy and quality of the aligned point cloud.
- The scale of the point cloud data also matters. In CloudCompare I can see the coordinate scale of the two data sets: both the full ball and the deflated ball point clouds have a scale factor of 1. For the alignment process, all data must be at the same scale, or their scales must be adjusted so that both data sets and the result share the same scale factor.
4. Conclusion
From the results and analysis above, the conclusions are:
a. Capturing the object images in poor lighting conditions gives a 3D result with bad visibility.
b. A 3D model of a static, solid object can be generated with good quality.
c. To create a good 3D model, highly overlapping images are needed to avoid gaps between images.
d. The 3D model can be generated in more detail with a high-resolution camera.
e. Using more images when building a 3D model produces a better-quality result.
f. Taking pictures too far from the object decreases the object detail.
g. The lack of information about an area creates a blank space or hole in the object.
h. In image matching, more conjugate points at separate locations are needed to produce a highly accurate 3D model.
i. The RMS error of the alignment process is 0.431 mm, while the mean deformation is 8.395 mm.
j. The distance between the two point clouds is affected by the point cloud alignment process and by the scale of the point cloud data.
k. The scale measurement accuracy of the 3D model is about 0.2 mm positional error.
More Related Content

What's hot

Efficient fingerprint image enhancement algorithm based on gabor filter
Efficient fingerprint image enhancement algorithm based on gabor filterEfficient fingerprint image enhancement algorithm based on gabor filter
Efficient fingerprint image enhancement algorithm based on gabor filtereSAT Publishing House
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODeditorijcres
 
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUESA STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUEScscpconf
 
Implement the morphological operations: Dilation, Erosion, Opening and Closing
Implement the morphological operations: Dilation, Erosion, Opening and ClosingImplement the morphological operations: Dilation, Erosion, Opening and Closing
Implement the morphological operations: Dilation, Erosion, Opening and ClosingNational Cheng Kung University
 
GRPHICS01 - Introduction to 3D Graphics
GRPHICS01 - Introduction to 3D GraphicsGRPHICS01 - Introduction to 3D Graphics
GRPHICS01 - Introduction to 3D GraphicsMichael Heron
 
Multimedia content based retrieval in digital libraries
Multimedia content based retrieval in digital librariesMultimedia content based retrieval in digital libraries
Multimedia content based retrieval in digital librariesMazin Alwaaly
 
COM2304: Digital Image Fundamentals - I
COM2304: Digital Image Fundamentals - I COM2304: Digital Image Fundamentals - I
COM2304: Digital Image Fundamentals - I Hemantha Kulathilake
 
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...IJERA Editor
 
Image segmentation
Image segmentationImage segmentation
Image segmentationRania H
 
Practical Digital Image Processing 3
 Practical Digital Image Processing 3 Practical Digital Image Processing 3
Practical Digital Image Processing 3Aly Abdelkareem
 
Practical Digital Image Processing 5
Practical Digital Image Processing 5Practical Digital Image Processing 5
Practical Digital Image Processing 5Aly Abdelkareem
 
Review of Digital Image Forgery Detection
Review of Digital Image Forgery DetectionReview of Digital Image Forgery Detection
Review of Digital Image Forgery Detectionrahulmonikasharma
 
Fuzzy c-means clustering for image segmentation
Fuzzy c-means  clustering for image segmentationFuzzy c-means  clustering for image segmentation
Fuzzy c-means clustering for image segmentationDharmesh Patel
 

What's hot (20)

Efficient fingerprint image enhancement algorithm based on gabor filter
Efficient fingerprint image enhancement algorithm based on gabor filterEfficient fingerprint image enhancement algorithm based on gabor filter
Efficient fingerprint image enhancement algorithm based on gabor filter
 
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHODFORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD
 
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUESA STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
 
Gabor Filter
Gabor FilterGabor Filter
Gabor Filter
 
PPT s12-machine vision-s2
PPT s12-machine vision-s2PPT s12-machine vision-s2
PPT s12-machine vision-s2
 
Implement the morphological operations: Dilation, Erosion, Opening and Closing
Implement the morphological operations: Dilation, Erosion, Opening and ClosingImplement the morphological operations: Dilation, Erosion, Opening and Closing
Implement the morphological operations: Dilation, Erosion, Opening and Closing
 
Visual realism
Visual realismVisual realism
Visual realism
 
GRPHICS01 - Introduction to 3D Graphics
GRPHICS01 - Introduction to 3D GraphicsGRPHICS01 - Introduction to 3D Graphics
GRPHICS01 - Introduction to 3D Graphics
 
Multimedia content based retrieval in digital libraries
Multimedia content based retrieval in digital librariesMultimedia content based retrieval in digital libraries
Multimedia content based retrieval in digital libraries
 
COM2304: Digital Image Fundamentals - I
COM2304: Digital Image Fundamentals - I COM2304: Digital Image Fundamentals - I
COM2304: Digital Image Fundamentals - I
 
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res...
 
Image segmentation
Image segmentationImage segmentation
Image segmentation
 
Practical Digital Image Processing 3
 Practical Digital Image Processing 3 Practical Digital Image Processing 3
Practical Digital Image Processing 3
 
Practical Digital Image Processing 5
Practical Digital Image Processing 5Practical Digital Image Processing 5
Practical Digital Image Processing 5
 
Review of Digital Image Forgery Detection
Review of Digital Image Forgery DetectionReview of Digital Image Forgery Detection
Review of Digital Image Forgery Detection
 
Fuzzy c-means clustering for image segmentation
Fuzzy c-means  clustering for image segmentationFuzzy c-means  clustering for image segmentation
Fuzzy c-means clustering for image segmentation
 
Ee 583 lecture10
Ee 583 lecture10Ee 583 lecture10
Ee 583 lecture10
 
Dip Image Segmentation
Dip Image SegmentationDip Image Segmentation
Dip Image Segmentation
 
PPT s06-machine vision-s2
PPT s06-machine vision-s2PPT s06-machine vision-s2
PPT s06-machine vision-s2
 
297 short story
297 short story 297 short story
297 short story
 

Similar to 3D reconstruction by photogrammetry and 4D deformation measurement

Work In Progress
Work In ProgressWork In Progress
Work In Progresssamluk
 
Minimalism photorealism 3d interior
Minimalism photorealism 3d interiorMinimalism photorealism 3d interior
Minimalism photorealism 3d interiorGuning Deng
 
IRJET- Robo Goalkeeper
IRJET- Robo GoalkeeperIRJET- Robo Goalkeeper
IRJET- Robo GoalkeeperIRJET Journal
 
Implementation of Picwords to Warping Pictures and Keywords through Calligram
Implementation of Picwords to Warping Pictures and Keywords through CalligramImplementation of Picwords to Warping Pictures and Keywords through Calligram
Implementation of Picwords to Warping Pictures and Keywords through CalligramIRJET Journal
 
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingMATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingAhmed Gad
 
2 D3 D Concersion Swaggmedia
2 D3 D Concersion   Swaggmedia2 D3 D Concersion   Swaggmedia
2 D3 D Concersion SwaggmediaCraig Nobles
 
Digital Techniques Presentation
Digital Techniques PresentationDigital Techniques Presentation
Digital Techniques Presentationpiglet1987
 
Computer_Graphics.pptx
Computer_Graphics.pptxComputer_Graphics.pptx
Computer_Graphics.pptxjohn6938
 
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)noorcon
 
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERA
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERATHE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERA
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERANational Cheng Kung University
 
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc...
IRJET-  	  3-D Face Image Identification from Video Streaming using Map Reduc...IRJET-  	  3-D Face Image Identification from Video Streaming using Map Reduc...
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc...IRJET Journal
 
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGE
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGESECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGE
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGEcsandit
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an ObjectAnkur Tyagi
 
Correcting garment set deformalities on virtual human model using transparanc...
Correcting garment set deformalities on virtual human model using transparanc...Correcting garment set deformalities on virtual human model using transparanc...
Correcting garment set deformalities on virtual human model using transparanc...eSAT Publishing House
 

Similar to 3D reconstruction by photogrammetry and 4D deformation measurement (20)

Work In Progress
Work In ProgressWork In Progress
Work In Progress
 
Minimalism photorealism 3d interior
Minimalism photorealism 3d interiorMinimalism photorealism 3d interior
Minimalism photorealism 3d interior
 
IRJET- Robo Goalkeeper
IRJET- Robo GoalkeeperIRJET- Robo Goalkeeper
IRJET- Robo Goalkeeper
 
Design and Modeling with Autodesk 3DS Max
Design and Modeling with Autodesk 3DS MaxDesign and Modeling with Autodesk 3DS Max
Design and Modeling with Autodesk 3DS Max
 
Implementation of Picwords to Warping Pictures and Keywords through Calligram
Implementation of Picwords to Warping Pictures and Keywords through CalligramImplementation of Picwords to Warping Pictures and Keywords through Calligram
Implementation of Picwords to Warping Pictures and Keywords through Calligram
 
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingMATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
 
N046047780
N046047780N046047780
N046047780
 
tutorial
tutorialtutorial
tutorial
 
2 D3 D Concersion Swaggmedia
2 D3 D Concersion   Swaggmedia2 D3 D Concersion   Swaggmedia
2 D3 D Concersion Swaggmedia
 
Digital Techniques Presentation
Digital Techniques PresentationDigital Techniques Presentation
Digital Techniques Presentation
 
94110A
94110A94110A
94110A
 
Computer_Graphics.pptx
Computer_Graphics.pptxComputer_Graphics.pptx
Computer_Graphics.pptx
 
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)
Introduction to Game Programming: Using C# and Unity 3D - Chapter 2 (Preview)
 
student 3d max
student 3d maxstudent 3d max
student 3d max
 
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERA
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERATHE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERA
THE 3D MODELLING USING FRAME CAMERAS AND PANORAMIC CAMERA
 
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc...
IRJET-  	  3-D Face Image Identification from Video Streaming using Map Reduc...IRJET-  	  3-D Face Image Identification from Video Streaming using Map Reduc...
IRJET- 3-D Face Image Identification from Video Streaming using Map Reduc...
 
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGE
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGESECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGE
SECRET IMAGE TRANSMISSION THROUGH MOSAIC IMAGE
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
 
H05844346
H05844346H05844346
H05844346
 
Correcting garment set deformalities on virtual human model using transparanc...
Correcting garment set deformalities on virtual human model using transparanc...Correcting garment set deformalities on virtual human model using transparanc...
Correcting garment set deformalities on virtual human model using transparanc...
 

More from National Cheng Kung University

Accuracy assessment and 3D Mapping by Consumer Grade Spherical Camera
Accuracy assessment and 3D Mapping by Consumer Grade Spherical CameraAccuracy assessment and 3D Mapping by Consumer Grade Spherical Camera
Accuracy assessment and 3D Mapping by Consumer Grade Spherical CameraNational Cheng Kung University
 
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...National Cheng Kung University
 
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...National Cheng Kung University
 
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical CameraNational Cheng Kung University
 
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical CameraNational Cheng Kung University
 
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborSatellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborNational Cheng Kung University
 
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU Data
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU DataOptimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU Data
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU DataNational Cheng Kung University
 
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborSatellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborNational Cheng Kung University
 
A Method of Mining Association Rules for Geographical Points of Interest
A Method of Mining Association Rules for Geographical Points of InterestA Method of Mining Association Rules for Geographical Points of Interest
A Method of Mining Association Rules for Geographical Points of InterestNational Cheng Kung University
 
Building classification model, tree model, confusion matrix and prediction ac...
Building classification model, tree model, confusion matrix and prediction ac...Building classification model, tree model, confusion matrix and prediction ac...
Building classification model, tree model, confusion matrix and prediction ac...National Cheng Kung University
 
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...National Cheng Kung University
 
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...National Cheng Kung University
 
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...National Cheng Kung University
 
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...National Cheng Kung University
 

More from National Cheng Kung University (20)

Accuracy assessment and 3D Mapping by Consumer Grade Spherical Camera
Accuracy assessment and 3D Mapping by Consumer Grade Spherical CameraAccuracy assessment and 3D Mapping by Consumer Grade Spherical Camera
Accuracy assessment and 3D Mapping by Consumer Grade Spherical Camera
 
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
 
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
3D Rekonstruksi Bangunan Menggunakan Gambar Panorama Sebagai Upaya Untuk Miti...
 
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
 
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
3D Indoor and Outdoor Mapping from Point Cloud Generated by Spherical Camera
 
Handbook PPI Tainan Taiwan 2018
Handbook PPI Tainan Taiwan 2018Handbook PPI Tainan Taiwan 2018
Handbook PPI Tainan Taiwan 2018
 
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborSatellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
 
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU Data
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU DataOptimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU Data
Optimal Filtering with Kalman Filters and Smoothers Using AndroSensor IMU Data
 
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest NeighborSatellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
Satellite Image Classification using Decision Tree, SVM and k-Nearest Neighbor
 
EKF and RTS smoother toolbox
EKF and RTS smoother toolboxEKF and RTS smoother toolbox
EKF and RTS smoother toolbox
 
Kalman Filter Basic
Kalman Filter BasicKalman Filter Basic
Kalman Filter Basic
 
A Method of Mining Association Rules for Geographical Points of Interest
A Method of Mining Association Rules for Geographical Points of InterestA Method of Mining Association Rules for Geographical Points of Interest
A Method of Mining Association Rules for Geographical Points of Interest
 
DSM Extraction from Pleiades Images Using RSP
DSM Extraction from Pleiades Images Using RSPDSM Extraction from Pleiades Images Using RSP
DSM Extraction from Pleiades Images Using RSP
 
Calibration of Inertial Sensor within Smartphone
Calibration of Inertial Sensor within SmartphoneCalibration of Inertial Sensor within Smartphone
Calibration of Inertial Sensor within Smartphone
 
Pengukuran GPS Menggunakan Trimble Secara Manual
Pengukuran GPS Menggunakan Trimble Secara ManualPengukuran GPS Menggunakan Trimble Secara Manual
Pengukuran GPS Menggunakan Trimble Secara Manual
 
Building classification model, tree model, confusion matrix and prediction ac...
Building classification model, tree model, confusion matrix and prediction ac...Building classification model, tree model, confusion matrix and prediction ac...
Building classification model, tree model, confusion matrix and prediction ac...
 
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...
Accuracy Analysis of Three-Dimensional Model Reconstructed by Spherical Video...
 
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...
Association Rule (Data Mining) - Frequent Itemset Generation, Closed Frequent...
 
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...
The rotation matrix (DCM) and quaternion in Inertial Survey and Navigation Sy...
 
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...SIFT/SURF can achieve scale, rotation and illumination invariant during image...
SIFT/SURF can achieve scale, rotation and illumination invariant during image...
 

Recently uploaded

Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIkoyaldeepu123
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEroselinkalist12
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 

Recently uploaded (20)

Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
EduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AIEduAI - E learning Platform integrated with AI
EduAI - E learning Platform integrated with AI
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECH
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 

3D reconstruction by photogrammetry and 4D deformation measurement

  • 1. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 DIGITAL PHOTOGRAMMETRY 3D reconstruction by photogrammetry and 4D deformation measurement 1. Objective In this project, we want to generate a 3D reconstruction of an object and 4D deformation measurement. 2. Material and Methods • Equipment There is several equipment that used in this project such as: - Hardware Sony alpha 6300 camera to take the object picture. Camera and its specification can be seen in Figure 1 and Table 1. Figure 1. Sony Alpha 6300 Camera Table 1. Sony Alpha 6300 Camera Specification Specification Description Focal Length 20 mm Pixel Size 3.92 micron Sensor Size 23.5 x 15.6 mm Image Size 6000 x 4000 pixel Sensor Type CMOS Effective pixel 24.2 megapixel - Software a. Agisoft Photoscan Pro, to generate 3D Model b. CloudCompare was used to do comparison point cloud between before and after deformation • Material The object of this project is a deflated ball and full ball taken by Sony Alpha 6300 Camera that has 20 mm focal length, 6000 x 4000 pixel image, and 23.5 x 15.6 mm sensor size. To generate 3D model of an object, firstly I took several images of the object. I took the pictures from many directions to cover all of the object shape. In some detail area
  • 2. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 such as curve or complex area, I took the picture more than the other area to clearly see the object texture. Total image used in this project are 72 images of the deflated ball and 65 images of the full ball. The object can be seen in Figure 2. To do the accuracy analysis, a code target need to put on the object in some locations and took the images together with the object. So the pictures used to generate 3D model also included the code target in one photo file. The value of a distance of each code target that need to be added into Agisoft Photoscan software to set the scale of 3D model. In addition, to know the deformation between the object of full ball and deflated ball. We can use CloudCompare software by calculating cloud to cloud. Make sure that every image has been done align. it is necessary to create the same conjugate point in every 3D object. a b Figure 2. (a) full ball image, (b) deflated ball image • Methods To generate a 3D reconstruction of an object and 4D deformation measurement, some steps need to be done following the project workflow showed in Figure 3.
  • 3. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 Object Images (Deflated Ball) Object Images (Full Ball) Align Photo Align Photo Camera Calibration and Optimization Camera Calibration and Optimization Add Coded Targets and Distance Add Coded Targets and Distance Re - Optimise Re - Optimise Build Dense Point cloud Build Dense Point cloud Build Mesh Model Build Mesh Model Build Textured Model Build Textured Model 3D Model Analysis Conclusion 3D Model Analysis Qualitative and Quantitative Accuracy Analysis Figure 3. Generating 3D reconstruction of an object and 4D deformation measurement workflow We have two drawing objects: deflated ball and full ball. The steps to generate 3D models on both objects are the same. Detail of project workflow will mention below: 1. For the object images, in Agisoft Photoscan software, I did the photo alignment process to create sparse point cloud. 2. After aligned the object photo, camera calibration and optimization need to be done to improve the sparse point cloud quality. Here we can create the boundary region box to select the specific object that want to create the model. 3. Then added the value of a distance of each code target to the model by creating 4 markers on the different photos 4. Build the dense point cloud from sparse point cloud data. After dense point cloud was built, we can remove some data that we don’t want to process such as the outliers point cloud. 5. Build shaded, solid and wireframe mesh model. 6. Build textured model.
  • 4. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 7. Analysis of textured 3D model. In this part, we can analysis the 3D model by looking for what kind of object that not visible or any kind analysis. 8. Export the point cloud data of model to .las file to do the accuracy analysis. 9. Qualitative and quantitative accuracy analysis of point cloud between 3D model. 10. Conclude the analysis explanation 3. Results and Analysis a. Dense Point Cloud, Mesh and Textured 3D Model The first main objective of this project is to generate 3D model using Agisoft Photoscan workflow step. In this project we got several result according to the process followed in Agisoft Photoscan. The results are the sparse point cloud, dense point cloud, mesh model and textured model. Each result image can be seen in Figure 4 and figure 5. Figure 4. full ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Dense Cloud Classes, (d) Shaded, (e) Solid, (f) Wireframe Mesh Object, and (g) Textured Object a b c d e f g
  • 5. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 Figure 5. deflated ball. (a) Sparse Point Cloud, (b) Dense Point Cloud, (c) Shaded, (d) Solid, (e) Wireframe Mesh Object, (f) Dense Cloud Classes, and (g) Textured Object From the result above, we can see that the 3D model of this ball can be generated very detail. The sphere on the object and the color can be seen clearly looks like the real appearance got form image. The appearance of 3D textured model with real image can be seen in Figure 6. Figure 6. The Appearance of (a) 3D Textured Model and (b) Real Image of ball a b g c d e f a b
  • 6. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 From that figure 6, we can see that the sphere shape on the object can be generated clearly. Also, the model color looks similar with the real color provided by the image. We also can see some light shadows were captured and generated together with the model. This is a good approach in term of generating 3D model using non-metric digital camera. In my assumption, the result of 3D model was influenced by four factors. 1. The light or environmental condition around the object. We know that in generating 3D model, we need to capture the object images using digital camera. Digital camera is a passive sensor that only receive the visible light from the object in front of it. If we capture the object image in bad or less light condition, then the 3D result can be bad in visibility or the texture color will be looked so dark. 2. The object itself. Some objects may difficult to reconstruct due to the material or the shape factor. In this project I choose the rubber ball as the object, then the 3D model result becomes so good because the object is not move and the visible light can easily reflect to the camera well. We need to do more experience in generating 3D model from many kind of different object material to know the effect of material itself. 3. The image taking technique including the object to camera position distance and image overlap. In taking the picture, we need to consider the distance to object. If we take the picture too far from the object, then the result of 3D model will not good because some of object details will be lost or cannot be recognized. The other factor is the image overlap. To create good 3D model, high overlap images are needed to avoid the gap between two images. More overlap, then good quality 3D model will be generated. So here, because I used 72 images of the deflated ball and 65 images of the full ball, it makes sense that I got very good 3D model result. 4. The camera specification and setting. High resolution and stable camera can perform good 3D model result. A camera that can capture the object size as small as possible without any blur distortion can provide very good image. It means that the 3D model also can be generated more detail than if we used low-resolution camera. Beside on those good point of view, I found a case that occured and make the result has a little distortion. This problem appeared on the bottom of the 3D model result that can be seen in Figure 7. Figure 7. Textured Object (a) full ball and (b) deflated ball a b
  • 7. Name: Muhammad Irsyadi Firdaus Student ID: P66067055 From the figure 7, we can see that some areas have dense point cloud distortion. Also, there we can see that some areas of the object are missing and appear as a hole. The reason of this problem due to some factors. First factor comes from the aligning photos process related to images number used. Textured model was generated from dense point cloud produced from aligning photos process that theoretically based on image matching concept. To know the relationship between aligning photo process and number of photo, we can investigate the dense point cloud of the object showed on Figure 8. Figure 8. Dense point cloud (a) full ball and (b) deflated ball From the figure 8, we can see that there are some outlier points appear at the boundary of object. I already deleted some outlier points. But here I still found some of it. Dense point cloud was generated from sparse point cloud from aligning photos process. Aligning photos process was affected by how many images that we used. Here although I processed 72 images of the deflated ball and 65 images of the full ball, but I only process few of the bottom-object-images that means only few information of this area we will get. More images of this area needed to get more information of the bottom-area. So, we can conclude that more images used in building 3D model can perform good quality result. Taking picture technique also can affect the 3D model result. As I mentioned before, we need to take into consideration of the object distance and image overlap. Here I checked every image I took and I found that for the bottom-area-object, I captured the image a little bit far and no image captured from the bottom direction. Image of bottom-area and camera position of this project can be seen in Figure 9. a b
Figure 9. Camera positions: (a) full ball, (b) deflated ball

We can see in Figure 9 that no camera position is located below the object. The bottom-area images were taken as tilted shots from higher positions, so no information is available for the underside of the object. In my opinion, if we want a good result on one side of the object, we need to take images facing that side as well. From this discussion we can conclude that taking pictures too far from the object reduces the recoverable detail, and that missing coverage of an area leaves a blank space or hole in the model. In addition, the shape of the object also affects the outcome of the 3D model: because the object is spherical, more photos are needed, since some parts of the surface are difficult to capture in a single photo. A rough estimate of how many camera stations a given overlap requires is sketched below.
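As a back-of-the-envelope check on the number of images, the sketch below uses a flat-footprint approximation that ignores the curvature of the ball: the image footprint follows from the sensor width, focal length, and distance in Table 1, and the spacing between camera stations on a circular orbit follows from the desired overlap. The 0.5 m distance and 80% overlap are assumed example values, not measured settings.

```python
import math

# Sony Alpha 6300 values from Table 1; distance and overlap are assumed examples.
sensor_width = 23.5e-3     # m
focal_length = 20e-3       # m
distance = 0.5             # m, camera-to-object distance (hypothetical)
overlap = 0.80             # desired forward overlap between neighboring images

# Footprint width on a plane at that distance, and the baseline between stations.
footprint = sensor_width * distance / focal_length
baseline = footprint * (1.0 - overlap)

# Stations needed for one full ring around the object (planar approximation).
stations_per_ring = math.ceil(2 * math.pi * distance / baseline)
print(f"footprint {footprint:.3f} m, baseline {baseline:.3f} m, "
      f"~{stations_per_ring} stations per ring")
```

Under these assumptions one ring needs roughly 27 stations, so two or three rings at different heights land in the same range as the 65–72 images used here, which supports the point that a spherical object needs dense coverage.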
b. Qualitative and Quantitative Accuracy

After analyzing the appearance of the 3D models, we can assess their qualitative and quantitative accuracy. First, the point cloud data of the two objects are loaded into CloudCompare and an alignment process is performed to bring the two clouds close to each other. After the process finished, the two point clouds had moved closer together, as shown in Figure 10.

Figure 10. Result of aligning the two point clouds

The yellow point cloud represents the full-ball data and the red point cloud represents the deflated-ball data. From the figure we can see that the two point clouds are not aligned perfectly: the differences between the objects are still visible, and the full-ball point cloud sits lower than the deflated-ball point cloud. In my assessment, this is caused by the alignment process itself. I used the full-ball point cloud as the reference and set only four conjugate points; their positions are shown in Figure 11.

Figure 11. Conjugate point positions on the 3D model point clouds

After the qualitative assessment, we can analyze the quantitative accuracy between the two point clouds. Because the comparison is between two data sets, this is a relative accuracy.
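A conjugate-point alignment like the one performed in CloudCompare can be reproduced with a closed-form rigid-body fit (Horn/Kabsch). The sketch below estimates the rotation and translation from four corresponding points and reports the RMS residual; the coordinates are hypothetical placeholders, not the points actually picked in this project.

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form rigid-body fit (Horn/Kabsch): rotation R and translation t
    such that R @ src_i + t best matches dst_i in a least-squares sense."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical conjugate points (meters): picks on the deflated-ball cloud (source)
# and the corresponding picks on the full-ball reference cloud.
src = np.array([[0.012, 0.103, 0.045],
                [0.110, 0.021, 0.050],
                [0.055, 0.150, 0.110],
                [0.140, 0.090, 0.020]])
rng = np.random.default_rng(0)
dst = src + np.array([0.005, -0.003, 0.004]) + rng.normal(0, 2e-4, src.shape)

R, t = rigid_align(src, dst)
residuals = src @ R.T + t - dst
rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(f"RMS error: {rms * 1000:.3f} mm")
```

With more conjugate points spread over the whole object instead of clustered together, the fit is better constrained and the RMS residual drops, which is the same point made above about the four-point pick.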
After the alignment process, I obtained an RMS error of 0.431 mm together with the estimated transformation matrix reported by CloudCompare. An RMS error of only 0.431 mm means the alignment performed well in producing an aligned point cloud.

We can also assess the accuracy of the 3D model itself from the error of the scale-bar measurements on the model. In this project, four scale-bar measurements were defined from four markers visible in different images, as shown in Figure 12. From these four measurements I obtained an accuracy of 0.0002 m, which means that measurements on the model have a positional error of about 0.2 mm for the full-ball object and 0.3 mm for the deflated-ball object.

Figure 12. Markers: (a) full ball, (b) deflated ball

c. 4D Deformation Measurement

To examine the deformation between the full ball and the deflated ball, we can calculate the distance between the two point clouds and display it as a color scale on the 4D model. For this calculation I set the full-ball point cloud as the reference. After running the cloud-to-cloud distance function, I obtained the result shown in Figure 13: a mean deformation of 8.395 mm, a maximum deformation of 9.802 cm, and a standard deviation of 4.039 mm. The mean deformation between the two point clouds is therefore 8.395 mm, which is quite small but makes sense, because we saw earlier that the two point clouds are well aligned. This can be checked against the color scale bar: blue represents the smallest deformation and red the greatest.

Figure 13. Distribution of deformation between the two point clouds
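The same cloud-to-cloud statistics can be reproduced outside CloudCompare with a nearest-neighbor distance query. The sketch below assumes the two dense clouds have been exported to hypothetical files ball_full.ply and ball_deflated.ply.

```python
import numpy as np
import open3d as o3d

# Hypothetical exports of the two dense clouds; the full ball is the reference,
# as in the CloudCompare computation above.
full = o3d.io.read_point_cloud("ball_full.ply")
deflated = o3d.io.read_point_cloud("ball_deflated.ply")

# Nearest-neighbor distance from every deflated-ball point to the full-ball cloud,
# analogous to CloudCompare's cloud-to-cloud (C2C) distance.
d = np.asarray(deflated.compute_point_cloud_distance(full))

print(f"mean deformation   : {d.mean() * 1000:.3f} mm")
print(f"maximum deformation: {d.max()  * 1000:.3f} mm")
print(f"standard deviation : {d.std()  * 1000:.3f} mm")
```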
Two factors explain this result:
- The alignment process affects the result. As explained above, the number of conjugate points and their locations influence the outcome: using more points spread over separate locations increases the accuracy and quality of the aligned point cloud.
- The scale of the point cloud data. In CloudCompare I can check the coordinate scale of the two data sets; both the full-ball and the deflated-ball point clouds have a scale factor of 1. For the alignment, all data must be in the same scale, or the scale must be adjusted so that both data sets and the result share the same scale factor.

4. Conclusion

From the results and analysis above, the conclusions are:
a. Capturing object images in poor lighting conditions produces a 3D result with poor visibility.
b. A 3D model of a static, solid object can be generated with good quality.
c. High image overlap is needed to avoid gaps between neighboring images and to create a good 3D model.
d. A high-resolution camera allows the 3D model to be generated in more detail.
e. Using more images in building the 3D model yields a better-quality result.
f. Taking pictures too far from the object reduces the recoverable object detail.
g. Missing coverage of an area creates a blank space or hole in the object.
h. In image matching, more conjugate points in separate locations are needed to produce a highly accurate 3D model.
i. The RMS error of the alignment process is 0.431 mm, and the mean deformation is 8.395 mm.
j. The distance values between the two point clouds are affected by the point cloud alignment process and by the scale of the point cloud data.
k. The scale-bar measurements on the 3D model have a positional error of about 0.2 mm.