Laser scanning for crack detection and repair
with robotic welding
By
François Wieckowiak
MSc Robotics Dissertation
Department of Engineering Mathematics
UNIVERSITY OF BRISTOL
&
Department of Engineering Design and Mathematics
UNIVERSITY OF THE WEST OF ENGLAND
An MSc dissertation submitted to the University of Bristol
and the University of the West of England in accordance
with the requirements of the degree of MASTER OF
SCIENCE IN ROBOTICS in the Faculty of Engineering.
September 18, 2022
Declaration of own work
I declare that the work in this MSc dissertation was carried out in accordance with the require-
ments of the University’s Regulations and Code of Practice for Research Degree Programmes and
that it has not been submitted for any other academic award. Except where indicated by specific
reference in the text, the work is the candidate’s own work. Work done in collaboration with, or
with the assistance of, others, is indicated as such. Any views expressed in the dissertation are
those of the author.
François Wieckowiak, September 18, 2022
Acknowledgement
I could not have undertaken this journey without the members of the Robotics Innovation Fa-
cility at the Bristol Robotics Laboratory, who were by my side during my Master’s thesis: my su-
pervisor Professor Farid Dailami, who directed me towards this dissertation project which matches
my deep interest in Machine Vision and Robotics, as well as Nathan Churchill, Shaun Jordan and
"soon-to-be Doctor" Arjuna Mendis.
I am also grateful to my great friends, Coena, Shrestha and Sripad, members of "The Mountain",
without whom I would not have had an experience in Bristol nearly as exceptional as it was.
Lastly, I’d like to mention the unconditional support of my partner Séphora who, albeit being
on the other side of the English Channel, kept me going throughout this year.
Abstract
Autonomous inspection and repair of critical components in systems that are essential to
the functioning of an industry is the next step in automated maintenance. Today, inspections
are often carried out manually by expert workers, making them time-consuming and
resource-intensive. This report presents a proof-of-concept robotic system comprising a
UR5 robot equipped with a profile laser scanner based on optical triangulation, capable of
automatically scanning a designated area of a part and fully identifying cracks by outputting
their locations and mean paths. This information can then be transferred to a robotic welding
system for automatic repair of the crack. The proof of concept is currently limited to scanning
flat components from a top view, but its parameters (resolution, scanning direction
and scanning speed) were extensively tested to find the optimal values for the fastest and
most accurate scans. A crack "palette" containing cracks of various widths was laser cut to
quantify the precision of the system; with the optimal parameters, cracks were identified with
a mean error of less than 0.2 millimetres. This approach is novel in that it does not rely on
any prior knowledge of the scanned part, and it paves the way towards autonomous
inspection and possibly self-repairing critical systems.
Number of words in the dissertation: 10,575 words.
Contents
Page
1 Introduction 7
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Literature Review 10
2.1 Laser scanning in different industries . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Point cloud processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Crack detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Research Methodology 15
3.1 Universal Robot UR5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 ScanCONTROL 3000-50/BL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Laser cut parts to put the scanner to the test . . . . . . . . . . . . . . . . . . . . . 18
3.4 The scanning process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.5 The data processing on the point cloud gathered . . . . . . . . . . . . . . . . . . . 25
3.6 A repeatability test to make sure that the user has a limited influence on the results 32
4 Results and Discussion 33
4.1 Quality assessment of the laser cut parts . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 The impact of the scanning speed on the point cloud quality . . . . . . . . . . . . . 36
4.3 Optimal scanning parameters for a fast and accurate scan . . . . . . . . . . . . . . 39
4.4 Influence of the manual steps on the resulting accuracy . . . . . . . . . . . . . . . 49
4.5 Resulting crack paths on the cylinder head gasket . . . . . . . . . . . . . . . . . . 50
5 Conclusion 52
5.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.3 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
A Appendix 56
List of Tables
3.1 The width of each crack of the testing palette . . . . . . . . . . . . . . . . . . . . 19
3.2 Speed and acceleration parameters depending on the speed factor . . . . . . . . . . 24
3.3 Parameters of the 16 test scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.1 The results of the point clouds superposition. . . . . . . . . . . . . . . . . . . . . 33
4.2 The results of the 16 testing scans . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 The results of the two linear regressions for the scans durations . . . . . . . . . . . 44
4.4 The results of the last linear regression . . . . . . . . . . . . . . . . . . . . . . . . 45
4.5 The results of the repeatability test . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1 Introduction
1.1 Motivation
In every sector of industry, monitoring critical parts and predictive maintenance is essential to the
proper functioning of any system. When a part of the system breaks down, a simple choice must
be made: is it better to repair it or replace it altogether? For complex components, the best solution
is often to repair them. Indeed, manufacturing processes for small series production of complex
parts are often resource-expensive and time-consuming. Casting and moulding necessitate the de-
sign and making of complex moulds before starting production, while machining needs a large
amount of different equipment and machinery to achieve specific goals. Furthermore, industries
such as aerospace, nuclear energy and the military often require a small number of extremely complex
components that suffer heavy usage and thus are at risk of breaking and cracking due to fatigue
and repetitive stress. These parts are regularly monitored and replaced when their lifetime is over,
inducing a huge cost for the company. Repairing these parts is advantageous on several levels: it is
cheaper, faster, and more environmentally friendly.
Figure 1.1: Some laser scanners used in different industries. (a) A handheld laser scanner used in a power plant [28]. (b) A laser scanner used in a geodetic survey [35].
Robots can be used in this context to conduct the inspection and sometimes the repairing of
these parts on a one-off basis. An expert worker can often localise a defect in a component and
proceed to the repair, but using robots allows repairing in remote or hazardous locations using
multiple sensors. This kind of robot can also be programmed to automatically patrol between
critical locations, such as the many plant rooms of a hospital that need to be properly functioning
at all times.
Scanning parts in 3D is a time-consuming process. It can be done in various ways, including contact scanners with a probe, photogrammetry or laser scanning. It is the latter that will be the focus of this dissertation. Laser scanning is often used
by geomatic engineers to conduct surveys of buildings, but it is also used in industries for 3D object
scanning for quality control or preventive maintenance. Some examples are visible in Figure 1.1.
An established technology is the handheld laser scanner, which workers use to manually scan parts and output a 3D model of the real component, with tolerances of the order of a millimetre. For larger components such as aircraft wings or power plant components, the entire scanning process with older laser scanning technologies can take up to 16 hours, including data acquisition and data processing [28]. That is why there is strong demand for automating this
exhausting and repetitive task with robots.
Automated laser scanning already exists, but it often requires an intensive calibration routine and only works in specific settings with prior knowledge. Such systems are often used for inline quality control on a specific production line where the expected geometry of the manufactured parts is known. The approach of this report aims at allowing the scanning of unknown parts, without any prior knowledge of the part being required. To allow for more flexible scanning, the scanner can be mounted on a robotic arm, so that the scanner itself moves instead of the part to be scanned, allowing for in-situ inspection. This system could detect cracks and save their shape and location for a welding system to repair them with limited human supervision.
1.2 Aims
This project will focus on the design of a robotic system capable of inspecting defective components containing cracks with a laser scanner and outputting the location and shape of said cracks. This solution would be novel in the sense that it should be able to scan any part in any location without any fine-tuning or prior knowledge of the component. The scope of this dissertation will be limited to the scanning of parts from a top-view perspective. Its accuracy and speed will be thoroughly investigated to find the optimal configuration and variables to use depending on the requirements of the welding robot and the time available for data acquisition. The robot is expected to output a point cloud model of the crack along with its mean path and location.
1.3 Objectives
The objectives of this project are as follows:
(1) Choose a process to create simple components with intended cracks to test our solution on
and assess the quality of the production process chosen.
(2) Design a proof of concept scanning robot, using a UR5 Universal Robot and a scanCONTROL 3000-50/BL profile scanner, which can generate the point cloud of a scanned part, as
well as a program capable of accurately identifying cracks from the resulting point cloud.
(3) Understand the impact of the scanning speed on the quality of the scan by comparing the
resulting point clouds and scan duration of scans of the same component at different speeds.
(4) Find the relationship between the scanning parameters, which are the scanning direction
(parallel or perpendicular to a crack) and the scanning resolution, and the performances of
the robot, which are measured with the accuracy of the crack detection algorithm and the
duration of the scan, by testing the system with various combinations of parameters.
(5) Make sure that the participation of a human operator in the analysis of the scan has a limited
effect on the generated crack path by repeating the same crack analysis multiple times and
looking at the resulting standard deviation of the accuracy.
2 Literature Review
2.1 Laser scanning in different industries
2.1.1 Large scale laser scanning
Laser scanners are used in various engineering fields. [14] lists nine such fields, seven of which
can be considered as "large scale" scanning. They include ground landscape surveys, protection
of buildings and cultural relics, or even deformation monitoring for tunnels, bridges and other
large-scale constructions. These applications often require long-range laser scanners, which use
technologies such as time of flight or phase shift measurements, having scanning ranges of the
order of magnitude of the kilometre. They lack accuracy, with a resolution in the range of a metre
to a centimetre. Laser scanners are of huge help in the construction sector, such as in [15], where
a hemispheric phase-shift type laser scanner was used to scan the surface of buildings and obtain
its surfaces point clouds. Post-processing was conducted to assess the flatness of the surfaces and
estimate the amount and cost of mortar necessary to correct the flatness. This article estimates the
maximum precision of this kind of laser to be 0.5 millimetres, allowing for accurate measurements
of buildings’ facades. The drawback of this kind of laser scanner is its reach, which is at most around 150 metres.
Another application of laser scanners is the quality control of spatial structural elements in
buildings. They are scanned in [19] using a handheld laser scanner, and processing on the resulting
3D point cloud is used to match it to the original model of the structure. It allows for a faster
and extremely precise structural and mechanical analysis of this kind of structure. The handheld
scanner used functions by optical triangulation and is reported to have a resolution of 0.1
millimetres and an accuracy of ± 0.03 millimetres. These scanners are very effective at scanning
small and complex components but rely on the expertise of their users.
In these large-scale 3D scans, multi-modal systems can make use of the advantages of laser
scanners as well as other sensors, such as cameras to obtain colour and texture data of the scanned
areas. A multi-modal approach was used in [2] to generate photo-realistic 3D models of the rooms
of a museum and its artworks. These fast and non-contact solutions are the safest ways to perform
scans in such sensitive environments.
2.1.2 Small scale laser scanning
Smaller-scale scanning is also widely used in various domains. [14] mentioned the measurement of complex industrial equipment, which is made more difficult when the shapes of the scanned parts hinder the capabilities of traditional scanners, as well as medical measurements
for prosthesis making, where non-contact solutions are almost always required. In [5], a laser
scanner was used to duplicate an ear cast for a prosthesis. The usage of coloured pins as markers
to stitch together point clouds resulting from a scan done from multiple directions is notable. [32]
is a 2019 survey of the state of the art in 3D imaging sensors. Alongside photogrammetry and
interferometry, laser scanners, using different technologies such as time of flight measurement or
laser triangulation, and their various applications are listed. With feature sizes going all the way
down to the µm in surface roughness analysis, it illustrates the many applications requiring laser
sensors. Indeed, laser scanning proved to be extremely effective even at this scale, as seen in [38],
where various optical scanners were compared in the analysis of surface defects and roughness.
Chronologically, earlier usage of laser triangulation was for reverse engineering. In 1994, [13]
set up a scanning arrangement with this technology to obtain a 3D point cloud of a part to be reverse
engineered. They explained at the time that contact coordinate measurement systems may not be
usable for complex or bigger components. However, they mentioned how reference points may be
necessary to stitch together multiple point clouds from scans of the same part from different angles.
A few years later, in 2002, [4] managed to laser scan and machine copies of complex aircraft parts,
with an error, including both the scanning and machining errors, of ± 0.127 millimetres.
In [17] and [34], the authors made use of a line laser scanner working with optical triangulation
and focused on the generation of a scanning trajectory that encompasses all of the possible scanning points. It necessitates prior knowledge of the part's CAD model but can generate any scanning
trajectory in different directions and with varying altitudes. The part is put on a rotary table to
facilitate the scanning process, but this is only possible for parts that can be taken apart from their
systems. When that is not possible, a more flexible scanning device needs to be used, such as in [18], where a gantry was designed around a statue by Michelangelo in a museum to carry out
an entire scan. Necessary precautions are taken to ensure the safety of the art piece. As seen
before, colour data was gathered thanks to a digital camera, making this system a multi-modal one.
When touching the part is authorised, multi-modal systems can use touch probes in combination
with laser scanners, as it has been done in [44], to achieve greater accuracy. [20] compared touch
probes, laser line scanning and portable arm CMM on various parameters and illustrated how touch
probes allow for extreme precision at the cost of an extended scanning duration. Just like most of
the papers cited above, this solution required prior knowledge of the part, here, to select the best
sensors for each surface to map. Stereo vision, triangulation and laser lines are used in combination
in [41] to scan parts with an error range of ± 0.25 millimetres.
Finally, closely linked to the aims of this project, [6] presents the concept of an autonomous
system, supposed to automatically detect and weld defects on railway tracks. A laser line scanner
is used to gather a point cloud of the top surface of the rail, then potential defects are identified,
allowing for a separate welding robot to proceed to the repair.
These articles all demonstrated how laser scanning is capable of generating accurate 3D repre-
sentations of real-world parts with excellent precision, while highlighting common practices, such
as markers to stitch clouds together, or using prior knowledge of the part to help with the scan
trajectory generation.
2.2 Point cloud processing
Laser scanners output point clouds of varying sizes representing the area they scanned. Processing
is almost always necessary to obtain clean and usable data, for, in our case, crack identification.
The Point Cloud Library PCL [31] is one of the most used toolkits when dealing with 3D
point clouds. It includes many algorithms necessary for point cloud processing, such as filtering,
feature estimation and segmentation, key points identification and more. This paper and library
helped the author of this report identify the commonly used methods in 3D point cloud processing
and guided them towards their final implementation. After comparing different point cloud data
acquisition methods, such as Time of flight laser scanning, photogrammetry and even RGB-D
camera, [40] focused on the different approaches for data cleansing, registration, segmentation and
object recognition. It mentioned many algorithms and approaches that proved to be usable in our
project, especially converting the point clouds to images to proceed to traditional machine vision
approaches such as median filtering, morphological erosion and dilation, using kernels whose sizes
have been estimated from the point cloud itself. Data registration, for combining point clouds taken
from different sensors or views, is made possible using the ICP algorithm or key point detection. [16]
added an inertial measurement unit to a handheld laser scanner to simplify the data stitching, simply
by offsetting the scanned point clouds by the position of the scanner. This solution could be easily
adapted for a robotic arm by using the position of the end effector of the robot, which is precisely
calculated thanks to its forward kinematics model.
For data cleansing, [24] made use of a statistical approach to fit the best matching planes to
point clouds and remove the most probable outliers. Their approach proved to be computationally
simple and faster than other traditional methods at accurately identifying outliers in noisy point
clouds. [7] applied denoising on point clouds without the need of converting them to a mesh in
the first place, which can be computationally expensive when the cloud is noisy. Their approach is
non-local and is based on a similarity measure between the analysed point and other points of the
cloud. PCA was used in [8] to reduce the dimensionality of a 3D point cloud to a 2D space, with
each dimension being generated from a different Principal Component Analysis. It allows for a low
complexity classification of points into noise and non-noise classes, filtering, and broadcasting of
the 2D points back into the 3D space.
When multiple objects are present in a scan, such as in scans of construction sites or outdoor
environments, Principal Component Analysis is used to both reduce the size of the data and easily
cluster the points of the clouds for further processing. [23] applied a Robust PCA method to accurately segment the point clouds into regions with a semi-automatic method depending on a few user-defined parameters. Another approach used voxel segmentation to divide the point cloud of a construction site into sub-clouds, and also applied PointNet [29], a deep-learning approach to point cloud segmentation and classification, to the same data in order to compare the results of the two solutions.
To assess the quality of a point cloud of a part, it is often compared to the 3D model used to
create said part. In [12], an Iterative Closest Point (ICP) algorithm is modified to allow for
matching the point cloud to the CAD model. An objective function representing the Mean Squared
Error between the point cloud and the model was proposed, and the algorithm tries to minimise it
by rotating and translating the point cloud to match the model. The output of this algorithm is a
3D representation of the part superimposed on its 3D model highlighting the geometric deviations.
[39] conducted a similar analysis by using the point cloud from a structural light 3D scanner and
comparing it to the original CAD model of the scanned part, resulting in an effective method for
quality assessments of parts during a production process.
2.3 Crack detection
Most of the crack detection research has been done on photographs of surfaces. [22] reviewed 50
papers working on crack detection, and listed the most used processing techniques. They include
morphological approaches to extract the crack surface from the pictures [33], mask generation
with various thresholding methods (manual, Otsu [10] [36]), or graph-based approach and usage of
shortest path algorithm like Dijkstra [11].
[1] proposed a different method which calculates the minimal path from a starting pixel with a
set length, which always matched the true crack path. It was tested on synthetic and real pictures
and proved to be effective at identifying cracks on road pavements, even though this task is rendered
complicated due to the variations in surface texture.
[25] used a neural network trained on grayscale images of cracks on roads, on which the in-
tensity of the pixels of the cracks was visibly different from the rest of the ground. The authors
also compared six algorithms for crack classification in three classes: longitudinal, transversal, and
miscellaneous cracks. This was also achieved by [26], but crack detection was simply done by nor-
malising and thresholding the images to extract the pixels of the cracks that have a different intensity.
Shadow removal may be necessary in case of uneven lighting conditions in the pictures. [45] im-
plemented such a process and applied a probabilistic approach to automatically detect cracks from
images of pavement.
To detect the edges of cracks in our project, edge-based segmentation on point clouds was mentioned by [40]; it allows for fast segmentation of features, but is sensitive to noise and uneven point cloud density.
3 Research Methodology
3.1 Universal Robot UR5
The robot used throughout this project is a Universal Robot’s UR5. It is set up on its workstation
in the RIF Laboratory at the Bristol Robotics Laboratory. It is programmed using the software
RoboDK, which allows simulation and control of the system using the built-in functions and the
Python SDK. This software holds many ready-to-use assets, including the 3D model of the robot as
well as its forward and inverse kinematics to move it around easily. A screenshot of the simulation,
as well as a picture of the actual workstation, are shown in Figure 3.1. A pointer (Figure 3.2) that can be mounted on the robot was also 3D printed to allow simple teaching of precise positions to the robot.
Figure 3.1: The workstation and the simulated robot.
Figure 3.2: The 3D printed pointer tool to teach specific points to the robot.
3.2 ScanCONTROL 3000-50/BL
The laser scanner used for this project is a Micro-Epsilon scanCONTROL 3000-50/BL profile
scanner [21], which functions thanks to the laser triangulation principle. Its laser is made of visible
blue light with a wavelength of 405 nm. Its maximum resolution is 2048 points along its scanning
line. It is a class 2M laser, meaning that it is dangerous when staring directly at it for a duration of
more than 0.25 seconds, which is highly unlikely due to the eye-closure reflex [30]. This laser has
been chosen due to its availability at the RIF lab at the Bristol Robotics Laboratory. A plastic holder, allowing the UR5 robot to pick up and hold the laser scanner thanks to its Wingman Tool Changer system [42], had been designed and 3D printed before this project. The scanner communicates with a Python script using a Python library called pyllt over an Ethernet cable, which doubles as the power source thanks to Power over Ethernet technology. The scanner and
its holder are shown in Figure 3.3, while the scanning area of the scanner and its dimension are
shown in Figure 3.4.
The documentation of the laser mentions many parameters that can generate errors in the
scanned points. They are listed below:
• Scanning highly reflective or transparent components may deceive the laser sensors tasked
with the optical triangulation task. The parts that we investigated were laser cut in acrylic
sheets which will not pose any issue.
• Colour differences and variations in the penetration depth of the laser can result in inaccurate
measurements. The exposure parameters of the scanner can only be changed as a whole for
each profile, so if parts with different colours and roughness are scanned, precautions should
be taken to scan these areas separately.
• A non-uniform temperature distribution in the sensor may lead to inaccuracy. For the testing, the
scanner was turned on at least twenty minutes before scanning to ensure a uniform tempera-
ture in the system.
• External light can disturb the readings of the sensor. Sadly, it was not possible to isolate the
workspace of the robot, so this is a possible source of inaccuracy.
• Mechanical vibration can disturb the accuracy of the scanner in the µm range, which is why the robot makes a full stop at each scanning step.
• Surface roughness of 5 µm and above can be detected by the scanner, generating surface
noise on the resulting point cloud. For our application of identifying cracks with a width of
the order of magnitude of a millimetre, this should not cause any issue.
• Specific part geometries may obstruct the view of the sensor or be in the way of the laser
emitter. Because we are scanning flat components, this will not be an issue for our testing.
We will be using this laser scanner with the robot by bringing the end effector of the robot
to a full stop at various positions along a scanning trajectory and saving the scanned points each
time. Another way to use this scanner is to keep the movement of the robot continuous and use the
scanner at a higher frequency to save points along the way. This method is more prone to errors
caused by vibrations, especially at high speed, so the first approach will be used instead.
Figure 3.3: The scanner attached to its 3D printed holder and connected to its Ethernet cable.
Figure 3.4: Dimensional drawing of the scanCONTROL 30xx-50 sensor, dimensions in mm (inches).
3.3 Laser cut parts to put the scanner to the test
To test for the quality of the scan, components with intended defects were made. One of them is a
testing crack "palette", showing the same crack seven times with varying widths. It was designed
and saved as a Scalable Vector Graphics file (SVG), then cut on the laser cutter of the lab, an Epilog
laser cutter, in a 5 millimetres thick plastic sheet. The SVG file and the resulting part can be seen
in Figure 3.5, and the width of each crack is shown in Table 3.1.
This means of production was chosen because it was the fastest and easiest way to make this
component. Laser cutting usually has a tolerance of ± 0.25 millimetres, but because it is a subtractive process, its quality is affected by the quality of the material used. The cut plate is relatively thin (5 millimetres), so its thickness should not influence the quality of the cut [3].
Alternatives could have been to use 3D printing, with technologies such as Polyjet printing or even
Stereolithography printing, which can bring this tolerance down to ± 0.05 - 0.01 mm.
Figure 3.5: The sketch used for the laser cutting and the resulting crack palette
Crack No. 0 1 2 3 4 5 6
Width (mm) 2.22 1.71 1.38 1.08 0.72 0.31 ∼0
Table 3.1: The width of each crack of the testing palette. Crack 6 is almost nonexistent. The mean
width is 1.06 mm.
The path of each crack was generated by converting the Scalable Vector Graphics (SVG) file to
a point cloud. Points were generated at a set z coordinate by iterating through the paths of the file.
We set the density of points so that the crack itself is made of roughly 40,000 points, giving the
edges of a crack a linear point density of approximately 450 points per millimetre. The resulting
point cloud is visible in Figure 3.6.
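For illustration, this SVG-to-point-cloud conversion could be sketched as below. This is a minimal example assuming the svgpathtools library, a hypothetical file name and an assumed density constant; it is not the exact script used for this dissertation, and the SVG units are assumed to map directly to millimetres.

```python
import numpy as np
from svgpathtools import svg2paths

Z_PLANE = 0.0        # all points are generated at a fixed z coordinate
POINTS_PER_MM = 450  # approximate linear density along the crack edges (assumed value)

# Load every path (crack outline) from the SVG file (hypothetical file name).
paths, _ = svg2paths("crack_palette.svg")

cloud = []
for path in paths:
    # Sample each path proportionally to its length so the density stays roughly uniform.
    n_samples = max(2, int(path.length() * POINTS_PER_MM))
    for t in np.linspace(0.0, 1.0, n_samples):
        p = path.point(t)  # complex number x + iy in SVG units
        cloud.append((p.real, p.imag, Z_PLANE))

cloud = np.asarray(cloud)
print(cloud.shape)  # roughly 40,000 points for the whole palette
```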
Figure 3.6: The point cloud generated from the SVG file of the crack palette.
Finally, the true path of each crack that will be used as a baseline to compare our scan against
was computed by averaging the left and right edges of the crack. 400 points were generated along
the length of the crack, giving a resolution of 0.11 mm or a point density of 9 points per millimetre,
which is enough for our welding application. The resulting path and cracks’ edges can be seen in
Figure 3.7.
Figure 3.7: The generated cracks’ edges and paths for cracks 0, 1 and 2. These paths will be used
as a baseline to assess the quality of the scan.
To make sure that the cut palette matches with the original file, top-view pictures of each crack
were taken using an iPhone XR on a makeshift rig. Seven pictures were taken, each having a
crack directly below the camera to avoid the distortion effect of the lens. A telecentric camera lens
could have been used to limit this distortion even more and obtain a picture of the top surface of
the palette. These pictures were then manually binarized to generate a mask for each crack with
the photo editing software PhotoFiltre. This process can be seen in Figure 3.8. Using the contour
detection algorithm from the Python library OpenCV, each mask was converted to a point cloud
with a point density of 22 points per millimetre, which is the lowest attainable with the resolution
of the camera used.
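A minimal sketch of this mask-to-point-cloud step is shown below. It assumes an OpenCV-style mask with the crack in white on a black background and a hypothetical pixel-to-millimetre scale; the file name and variable names are placeholders, not the exact script used here.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 1.0 / 22.0  # assumed scale: 22 pixels per millimetre
Z_TOP = 0.0                # z coordinate assigned to the top surface

# Load the manually binarised mask (hypothetical file name), crack in white.
mask = cv2.imread("crack_0_mask.png", cv2.IMREAD_GRAYSCALE)

# Extract every contour pixel of the crack.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

points = []
for contour in contours:
    for x, y in contour.reshape(-1, 2):
        # Convert pixel coordinates to millimetres.
        points.append((x * MM_PER_PIXEL, y * MM_PER_PIXEL, Z_TOP))

cloud = np.asarray(points)
print(cloud.shape)
```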
Figure 3.8: The binarization was manually done with a photo editing software called PhotoFiltre.
To compare the point cloud generated from the SVG file and the point cloud generated from the
pictures of the actual palette, the two were superimposed by computing the affine transformation
necessary to match the four corners of each crack. This process is illustrated in Figure 3.9. This
was done for each crack, and the resulting combination of the two point clouds can be seen in
Figure 3.10. To quantitatively compare these two point clouds, the average distance between each
closest points was calculated using the following equation. If A and B are two point clouds of sizes n_A and n_B, with points a ∈ A and b ∈ B, the average distance is calculated as:

\[
\mathrm{dist}(A,B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} \mathrm{dist}(a,b)
\]
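In practice, this closest-point distance can be computed efficiently with a k-d tree. The sketch below is an illustrative implementation assuming SciPy, not the exact code used in this work; note that the metric is asymmetric, since it averages over the points of A only.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_closest_distance(cloud_a: np.ndarray, cloud_b: np.ndarray) -> float:
    """Average distance from each point of cloud_a to its nearest neighbour in cloud_b."""
    tree = cKDTree(cloud_b)             # index cloud B for fast nearest-neighbour queries
    distances, _ = tree.query(cloud_a)  # one nearest-neighbour distance per point of A
    return float(distances.mean())

# Placeholder clouds for demonstration.
a = np.random.rand(1000, 3)
b = np.random.rand(1200, 3)
print(mean_closest_distance(a, b))
```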
Figure 3.9: The affine transformation is calculated by manually selecting the four corners of each
crack. The red and green point clouds respectively come from the SVG file and the mask generated
from the photograph.
Figure 3.10: The superposition of the point clouds from the SVG file (in red) and from the pho-
tographs. A zoomed-in area of one of the cracks is shown.
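The corner-based superposition described above (Figures 3.9 and 3.10) could be implemented as in the following sketch, assuming OpenCV's estimateAffine2D and four manually selected corner coordinates; the numerical values are placeholders.

```python
import cv2
import numpy as np

# Four corresponding corners of one crack, selected manually (placeholder values).
corners_photo = np.float32([[0.0, 0.0], [30.1, 0.4], [29.8, 10.3], [0.2, 9.9]])
corners_svg = np.float32([[0.0, 0.0], [30.0, 0.0], [30.0, 10.0], [0.0, 10.0]])

# Least-squares affine transform (2x3 matrix) mapping photo coordinates onto SVG coordinates.
affine, _ = cv2.estimateAffine2D(corners_photo, corners_svg)

# Apply the transform to the (x, y) coordinates of the photo-based point cloud.
photo_xy = np.random.rand(500, 2).astype(np.float32)  # placeholder cloud
aligned_xy = cv2.transform(photo_xy[:, None, :], affine)[:, 0, :]
```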
Another part with intended cracks was laser cut: it consists of a portion of a cylinder head
gasket found online [9] with added cracks. Its SVG file used for its laser cutting as well as a picture
of the final part is shown in Figure 3.11. This part will be used to conduct testing in the following
chapter.
Figure 3.11: The sketch used for the laser cutting and the resulting plastic cylinder head gasket.
3.4 The scanning process
The scope of this project is limited to the scanning of cracks on flat components and from a top-view
perspective. This makes this research more focused on the scanning accuracy and data processing
rather than the mechanical and geometrical aspect of the scan, such as the orientation or trajectory
of the robot. These aspects are addressed in this section, which describes the scanning process. This process was first programmed and tested in simulation only, then sent to the robot once it
was finished.
The first step of the scanning process makes use of the pointer described in Section 3.1. It is automatically mounted on the robot using the available Wingman tool changer system. The user then needs to guide the robot to the four corners of the area in which they want the scan to be done. The resulting bounding box is saved, then the robot automatically switches tools by removing the
pointer and grabbing the scanner.
The chosen area is scanned by the robot with maximum joint and linear speeds and accelerations set by a configurable speed factor (see Table 3.2), with a set number of steps ("n steps" in Figure 3.12)
in either the x or y direction. A small overlap is deliberately programmed to ensure the whole area
is correctly scanned. The coordinates of each point are calculated by adding the position of the
scanner relative to the robot base to the position of the point relative to the scanner. Once the entire
area has been covered, the point cloud, the parameters of the robot and the duration of the scan are
saved in a text file. For the sake of simplicity, the robot follows a Z-shaped trajectory to cover the
entire area, but an S-shaped trajectory may reduce the overall duration of the scan. Indeed, it is
visible in Figure 3.13 that depending on the direction of the scan, it may require only one back and
forth motion (Figure 3.13b), or numerous (5 in Figure 3.13a). During these motions, no scanning
is done by the robot, so some time is wasted.
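Expressed with homogeneous transforms, the point coordinate computation mentioned above (scanner pose relative to the robot base combined with each point's position relative to the scanner) can be sketched as follows. This is a simplified NumPy illustration with made-up pose values, not the RoboDK-based implementation used on the robot.

```python
import numpy as np

def to_base_frame(T_base_scanner: np.ndarray, points_scanner: np.ndarray) -> np.ndarray:
    """Transform Nx3 points from the scanner frame into the robot base frame.

    T_base_scanner is the 4x4 homogeneous pose of the scanner in the base frame,
    obtained from the robot's forward kinematics at each scanning step.
    """
    homogeneous = np.hstack([points_scanner, np.ones((len(points_scanner), 1))])
    return (T_base_scanner @ homogeneous.T).T[:, :3]

# Placeholder pose: scanner 300 mm along x and 400 mm above the base, no rotation.
T = np.eye(4)
T[:3, 3] = [300.0, 0.0, 400.0]

# One laser profile expressed in the scanner frame (mm), placeholder values.
profile = np.array([[0.0, -25.0, 70.0], [0.0, 0.0, 71.2], [0.0, 25.0, 70.5]])
print(to_base_frame(T, profile))
```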
Speed Factor 0.5 1 2 3 4 5 6
Max Speed (mm/s) 50 100 200 300 400 500 600
Max Joint Speed (deg/s) 50 100 200 300 400 500 600
Max Acceleration (mm/s²) 25 50 100 150 200 250 300
Max Joint Acceleration (deg/s²) 25 50 100 150 200 250 300
Table 3.2: The max speed and acceleration parameters of the robot depending on the chosen speed
factor.
Figure 3.12: A simulated scan showing the effective scanning area. The z coordinate of the robot
is calculated according to the altitude of the scanning area in yellow.
(a) Scan with N steps = 50, done in the direction of the
width.
(b) Scan with N steps = 50, done in the direction of the
length.
Figure 3.13: The same scan conducted in two different directions.
To figure out the best speed for the robot, the cylinder head gasket mentioned in the previous
section was scanned at various speeds, i.e. with the 7 speed factors mentioned in Table 3.2, with all
other parameters fixed. The number of steps was set at 100, the scanning direction was constant,
and the scanning area and part were set once and not moved throughout the scanning. Each scan
will be done three times, and the resulting point clouds will all be compared against a baseline scan done at a speed factor of 0.5. The metric used to compare the point clouds is still the average distance between the closest points of two point clouds:

\[
\mathrm{dist}(A,B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} \mathrm{dist}(a,b)
\]
The duration of each scan was also saved in order to figure out the relation between the robot’s
speed and the time taken to do its scan.
3.5 The data processing on the point cloud gathered
After a defective component has been scanned, data processing is necessary to identify the path of
its crack. For this purpose, the Python library Point Processing Toolkit [27] was used. It allows for
simple 3D visualisation of huge point clouds and manual point selection by the user.
The processing of the generated point cloud to calculate the path of a crack works as follows:
• The altitude and range of the slice that keeps only the top surface of the scanned component are manually input by the user. Only the points situated in this slice are saved. (Figure 3.14)
• A sample flat area without a crack is manually selected by the user on the 3D point cloud.
This area is then converted to a black and white image, from which the length of the longest
empty (white) area along the scanning direction is saved as the "kernel size" k for future
processing. (Figure 3.15)
• The overall location of the crack is manually selected by the user, then a black and white
mask is generated, representing the projection of the location of the crack on a flat surface.
Each point of the cloud is converted to a black pixel on the mask. (Figure 3.16)
• The kernel size calculated before is used to generate a 1 by k kernel. This kernel is used to perform erosion on the current mask, which has the effect of filling in the gaps between the scanning lines while keeping the crack empty (a minimal code sketch of this step is given after this list). This forces us to do scans with a distance between scanning lines smaller than the maximum width of the crack. (Figure 3.17)
• The mask of the crack is converted back to a point cloud, with the z coordinate of each point
being set to the mean altitude of the points of the rest of the cloud. It is aligned back to
its original location thanks to an affine transformation. The user then selects only the crack
using the Python interface. This step allows them to remove possible errors in the crack
identification. (Figure 3.18)
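The slice-and-erode steps above could look like the following sketch. It is a simplified illustration assuming NumPy/OpenCV conventions (scanned points drawn as black pixels on a white mask); the function name, pixel scale and values are placeholders, not the exact script used in this project.

```python
import cv2
import numpy as np

def crack_mask_from_cloud(cloud: np.ndarray, z_centre: float, z_range: float,
                          mm_per_pixel: float, kernel_size: int) -> np.ndarray:
    """Slice the top surface of the cloud, rasterise it and fill the inter-line gaps."""
    # 1. Keep only the points of the top surface (z = z_centre +/- z_range).
    top = cloud[np.abs(cloud[:, 2] - z_centre) <= z_range]

    # 2. Rasterise: white image, each scanned point becomes a black pixel.
    xy = ((top[:, :2] - top[:, :2].min(axis=0)) / mm_per_pixel).astype(int)
    mask = np.full((xy[:, 1].max() + 1, xy[:, 0].max() + 1), 255, dtype=np.uint8)
    mask[xy[:, 1], xy[:, 0]] = 0

    # 3. Erode with a 1-by-k kernel along the scanning direction: the black scanning
    #    lines grow until the gaps between them are filled, while the (wider) crack
    #    remains white.
    kernel = np.ones((1, kernel_size), dtype=np.uint8)
    return cv2.erode(mask, kernel)

# Placeholder usage: z = 8.5 +/- 1 mm slice, 0.1 mm pixels, kernel of 25 pixels.
cloud = np.random.rand(10000, 3) * [100.0, 50.0, 10.0]
mask = crack_mask_from_cloud(cloud, z_centre=8.5, z_range=1.0,
                             mm_per_pixel=0.1, kernel_size=25)
```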
Figure 3.14: Keeping only the top surface of the scanned part by manually setting the correct altitude and range (z = 8.5 ± 1 mm).
Figure 3.15: The empty area selected by the user is converted to a mask. The "kernel size" consists of the maximum distance (in pixels) between two lines, plus one.
Figure 3.16: The selection and conversion of the area of the crack to a black and white mask.
Figure 3.17: With a horizontal kernel of length k, the white spaces between the scanning lines are filled, but not the crack.
Figure 3.18: On the top part of the figure, the user selected (in yellow) parts of the newly generated
cloud that are not the crack. The cloud is then separated on the bottom part of the figure in two
distinct clouds, with the right one being of the crack only.
This process was designed for the scope of this dissertation: the scan is done from a top-view perspective of flat components, with the direction of the scan parallel or perpendicular to the overall direction of the crack. A scan done at a 45-degree angle to the crack would raise issues with the image processing used, as the morphological filling may erase parts of the crack. A solution would be to scan at an extremely high resolution (i.e., with a very large number of steps) to avoid the need for this filling step in the crack identification process. This could
be achieved by using the scanner at its maximum frequency of up to 10 kHz, but it would require
extremely precise control of the robot, both in terms of travelling along its trajectory at a constant
speed and the correct height relative to the part. The methodology we have chosen for our scans
allows for more flexibility and error margin in both the robot and part position.
This part of the process relies heavily on the input of a user, who has to select a flat area to
determine the kernel size, select the location of the crack, and check and correct the final output of
the algorithm.
At this point, the crack has been correctly identified and localised in the original coordinate
frame. The next step consists of the computation of the crack mean path so that a comparison
between a scanned path and the true path generated from the SVG file of the crack palette (visible
in Figure 3.7) is possible.
The process to calculate the scanned crack path starts with the identification of the two points
of the crack that are the furthest apart. A straight line (comprised of 50 points, in this case, approx-
imately one point per millimetre) is then drawn between them, as seen in green in Figure 3.19.
Figure 3.19: The crack points (in blue) determined with the image processing, subsampled by a
factor of 30 for better visualisation, as well as the straight line between the two furthest points of
the crack (in green), and the generated crack path (in red).
For each of these points, a plane going through it and perpendicular to the direction of the
straight line is calculated. The coordinates of n (30 in this case) of the closest points of the crack
are then averaged and yield the coordinate of a new point of the final crack path. This process is
visible in Figure 3.20. This solution makes sure that the path will be correctly identified even when
the crack is curved.
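A condensed sketch of this path computation is shown below, assuming NumPy only; the helper name, the brute-force furthest-pair search and the placeholder data are simplifications of the actual implementation.

```python
import numpy as np

def crack_mean_path(crack_points: np.ndarray, n_line_points: int = 50, n_closest: int = 30) -> np.ndarray:
    """Approximate the mean path of a crack from its point cloud (Nx3)."""
    # Find the two crack points that are furthest apart (brute force, fine for small clouds).
    d = np.linalg.norm(crack_points[:, None, :] - crack_points[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    start, end = crack_points[i], crack_points[j]
    direction = (end - start) / np.linalg.norm(end - start)

    path = []
    for t in np.linspace(0.0, 1.0, n_line_points):
        anchor = start + t * (end - start)
        # Distance of every crack point to the plane through 'anchor',
        # perpendicular to the line direction.
        plane_dist = np.abs((crack_points - anchor) @ direction)
        closest = crack_points[np.argsort(plane_dist)[:n_closest]]
        path.append(closest.mean(axis=0))  # average the n closest points
    return np.asarray(path)

# Placeholder crack cloud: a noisy curved line.
t = np.linspace(0, 1, 500)
crack = np.c_[50 * t, 5 * np.sin(3 * t), np.zeros_like(t)] + np.random.normal(0, 0.1, (500, 3))
print(crack_mean_path(crack).shape)  # (50, 3); interpolation would then bring this to 400 points
```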
Figure 3.20: The calculation of a point of the crack path. For each point of the green line, a perpendicular plane is drawn. The resulting path point is the average of the coordinates of the n closest points of the crack to this plane.
With 50 points of the crack path now calculated, linear interpolation is used between each pair of successive points to bring the total number of points to 400. This final path is
depicted in red in Figure 3.21.
Figure 3.21: The path generated (in red) for the crack number 0 of the crack palette, the "true" path
generated from the SVG file (in blue) and the crack edges from the SVG file (in green).
This process was repeated for various parameters, including the number of steps in the scanning
trajectory, the direction of the scan (parallel or perpendicular to the cracks of the palette), and for
each of the 7 cracks of the palette. The scanning speed was set to the optimal value that was found
from scanning the cylinder head gasket (the finding of this value is described later in this report),
and the scanning area and palette position were kept constant. These parameters are detailed in
Table 3.3.
Parameters for the scans of the crack palette
Speed factor: 4
Number of steps: 25, 50, 100, 200, 300, 400, 500, 1000
Direction: perpendicular (0) or parallel (1) to the crack direction
Scanning area: 97 x 226 mm²
Table 3.3: The parameters used to perform a total of 16 different scans.
Scans with fewer than 25 scanning steps were also conducted, but they turned out to be unusable by the path-generating algorithm and were therefore excluded from this analysis. Two of them are shown in Figure 3.22.
(a) N steps = 5, direction = Parallel (b) N steps = 10, direction = Perpendicular
Figure 3.22: Unusable scans due to their low number of scanning steps.
To measure the quality of the generated paths, they were compared to the "true" paths coming from the SVG file of the testing palette. They were translated and scaled with an affine transformation to the location of the crack in the real-world coordinate frame, by matching the four corners of each crack. The distance between the two trajectories is the mean Lock-step Euclidean distance, which represents the root mean squared distance between all pairs of corresponding points in the paths [37]:

\[
\mathrm{Eu}(A,B) = \sqrt{\sum_{i=1}^{n} \mathrm{dist}_2^2(a_i, b_i)}
\]
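As a small worked example, this distance between two paths of equal length (400 points each in this project) can be implemented directly from the formula above; the data below is a placeholder.

```python
import numpy as np

def lockstep_euclidean(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Lock-step Euclidean distance between two paths with corresponding points."""
    assert path_a.shape == path_b.shape
    squared = np.sum((path_a - path_b) ** 2, axis=1)  # squared distance per point pair
    return float(np.sqrt(squared.sum()))

generated = np.random.rand(400, 3)  # generated crack path (placeholder)
true_path = np.random.rand(400, 3)  # "true" path from the SVG file (placeholder)
print(lockstep_euclidean(generated, true_path))
```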
3.6 A repeatability test to make sure that the user has a limited
influence on the results
To make sure that the manual steps of the process do not influence the results, the crack analysis
of the palette described in the previous Section was repeated ten times. The same user (who is
the writer of this report) repeatedly selected the sample flat area for the kernel size determination,
the overall location of each crack, and once the image processing was over, removed to potential
errors in the crack identification. The total manual processing time was saved, and the Lock-step
Euclidean distances between the 70 computed trajectories and their "true" counterparts generated
from the SVG file of the crack were calculated.
The 70 crack analyses (10 times 7 cracks on each scan) were done in a row on the same day to
limit the variations in the user’s attention. The same scan was used for all of them, being done with
a speed factor of 4, 100 scanning steps and with its scanning direction perpendicular to the crack
(meaning that the scanning lines are perpendicular to the direction of the crack).
4 Results and Discussion
4.1 Quality assessment of the laser cut parts
In Section 3.3, a "crack palette" was laser cut in order to test the accuracy of the scanner. After the
making of this component, photographs of each crack were taken and processed to generate point
clouds of the real part.
The point cloud generated from the SVG file used for the laser cutting and the clouds generated
from photographs of the laser cut part have been superimposed as seen in Figure 3.10, by manually
selecting the four corners of each crack. To measure how well the real part corresponds to its
original file, the mean distances between points of each cloud for each crack were calculated and
are plotted in Figure 4.1. The individual results as well as the number of points in the clouds are
detailed in Table 4.1. As a reminder for the reader, the formula used throughout this report to
calculate the mean distance between two point clouds A and B is:

\[
\mathrm{dist}(A,B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} \mathrm{dist}(a,b)
\]
Crack Number 0 1 2 3 4 5 6
Mean Distance (mm) 0.781 0.742 0.698 0.757 0.621 0.418 0.407
Standard Deviation (mm) 0.551 0.561 0.535 0.563 0.447 0.279 0.174
Coefficient of Variation (%) 70.6 75.6 76.7 74.4 71.9 66.7 42.8
No. of Points 3307 3283 3302 3372 3307 2836 1720
Table 4.1: The results of the point clouds superposition.
Figure 4.1: Mean distances between the points of the SVG file point cloud and the point clouds
generated from the photographs.
The results show a slight downward trend in the mean distance the thinner the cracks get. This
can simply be explained by the fact that the thinnest cracks are made of fewer points, so large errors
are less likely to be present. The coefficients of variation are also high (> 50%), meaning that there is a huge spread in the magnitude of the distances. In other words, there are both points that are
very far apart and points that are almost exactly at the same positions on the two clouds. Finally, a
mean distance of 0.57 millimetres for all cracks between the intended and actual part is a huge error
considering the widest crack is only 2.22 millimetres wide. It is however necessary to remember
that this comparison is done between a point cloud that has been generated from an SVG file and
another one generated from photographs of the laser-cut parts. Many parameters may explain that
difference:
• The conversion of the SVG file into a point cloud was done by iterating through each "path"
of the file and generating points along them. So continuous lines were transformed into a
discrete set of points, implying some information was lost.
• The part was laser cut on an Epilog laser cutter, which has a usual accuracy of ± 0.25 mil-
limetres.
• The photographs of the palette were taken on a makeshift rig, that lacked proper lighting and
uniform background.
• The camera used to take the photographs was not specially designed for this kind of machine
vision application. A telecentric lens could have been used to avoid distortion and obtain
more precise measurements. This also led to the vertical surfaces of the cracks being visible
in the photographs, even though pictures were taken by aligning each crack straight under
the camera lens. These vertical surfaces are visible in Figure 4.2.
• The image binarization was done manually by selecting the top surface of the palette, but due
to improper lighting, errors may have been made in this process. Glares and areas that are
not part of the top surface may have been wrongly selected. This kind of misleading feature
is also illustrated in Figure 4.2.
• When selecting the four corners of a crack on the cloud generated from the photograph to
warp it in the coordinate frame of the SVG point cloud, there was no perfect point making
a right angle at the edges of the crack. This led to an incorrect transformation that did not
exactly match the two point clouds together, as seen in Figure 4.3. On the right-hand side of
this Figure, the consequences of this misalignment are visible.
Figure 4.2: Due to the setup used to take the pictures, unwanted glares and surfaces were visible,
that can alter the quality of the binarization.
Figure 4.3: Details of the consequent offset between the SVG point cloud and the one generated
from the photographs.
When looking at the point clouds in detail, such as on the right-hand side of Figure 4.3, the
offset between the clouds seems to be consistently towards the left side, then switches to the right
side once the middle point of the crack has been reached. The point cloud also shows that the mean
path of the SVG and actual cracks are aligned. That is why the crack palette, as well as the "true"
paths generated from the SVG file, will be considered valid for the testing to be done in the rest of
this project. Also, in the context of crack repairing, the position of the scanned crack needs to be
accurate on the scanned real-world component, not on the CAD file used for its making.
4.2 The impact of the scanning speed on the point cloud quality
A total of 21 scans (three scans for each of the seven speed factors) plus one baseline scan at a
speed factor of 0.5 have been done of the cylinder head gasket made in section 3.3. Some of the
resulting 3D point clouds are visible in Figure 4.4.
The average distances between the closest points between each scan and the baseline were
calculated, and the resulting mean distances were averaged for the three scans at each speed. The
average of all distances for all speeds was also calculated. These results are displayed in Figure 4.5
along with the mean duration of the scans for each speed.
(a) speed_factor = 0.5 (b) speed_factor = 2
(c) speed_factor = 4 (d) speed_factor = 6
Figure 4.4: The point clouds generated from some of the scans conducted for the analysis of the
influence of the speed of the robot.
Figure 4.5: The mean distances between the baseline scan and the scans at various speeds (in blue),
as well as the mean duration of the scan for each speed (in red).
In Figure 4.4, the number of points in each of the point clouds is visible, and even though the
only difference between each scan is the robot’s speed, this number is different for each of them.
This is because the scanner has a resolution of 2048 points per scan but does not register the entirety
of them at every step. In most cases, it is because the laser line fell out of the effective scanning
area of the scanner, in one of the holes of the table visible in Figure 4.4 for instance. For the scan in
Figure 4.4a, an average of 1944 points instead of 2048 were registered for each of the 500 scanned
lines.
The data gathered shows that no matter the scanning speed of the robot, the distance between
the resulting scan and the baseline is around 4 millimetres. The standard deviations on this plot
also give information about the repeatability of the scans: the mean distance to the baseline varies
among the scans, even with the same set of parameters (i.e. same speed factor). However, this
variation stays relatively small (± 0.5 mm). Finally, the speed factor has a huge influence on the
scan duration: the total scan duration goes from 235 seconds to 103 seconds when the speed factor
goes from 0.5 to 4, but only from 86.6 to 85.9 seconds between speed factors of 5 and 6.
When looking at the resulting point clouds with a 3D viewer, it is visible on the high-speed
scans that the scanning lines are faulty and not equally distributed, as seen in Figure 4.4d. For
scans done at lower speeds, the scanning lines seem more equally distributed and give a better view
of the part, as seen in Figure 4.4b. However, this difference does not seem to affect the chosen
metric in Figure 4.5, where the difference between the mean distances at speed factors of 2 and 6
is less than 0.1 millimetres. In addition to this qualitative difference, speed factors above 4 created
a huge amount of jerk during the scanning, which translated as audible and visible vibration on
the robot and its workspace. This jerk was caused by the high-speed acceleration and deceleration
between each scanning step. It is detrimental to the robot and its components, so it is strongly
advised to limit its speed. Furthermore, the duration of the scan plateaus with speed factors higher
than 4. This is certainly because the distance between each scanning step, on which the robot does
a full stop, is relatively small (1.6 millimetres for these scans). On such a short distance, allowing
the robot to speed up and down faster does not have any influence on the total duration beyond a
certain point.
All in all, this section showed that multiple scans of the same part output different point clouds
with a mean distance of 4 millimetres between their closest points. This does not invalidate them, as
all of the points are still mapping the part and its geometry correctly. For the safety of the robot and
its workspace, and to perform the fastest scans, a speed factor of 4 will be used for the remaining
experiments on the robot.
4.3 Optimal scanning parameters for a fast and accurate scan
The two remaining parameters to set for the scanning are the direction of the scan and the number of
scanning steps along it. A total of 16 scans have been done by trying out every combination of the
parameters shown in Table 3.3. As stated before, the optimal speed factor for our application was
found to be 4, and the scanning area and part position were kept constant throughout the testing.
All of the conducted scans are listed in Table 4.2, and four of them are visible in Figure 4.6.
N_steps Width steps (mm) Direction Speed factor Number of points Duration (s)
25.0 9.04 0.0 4.0 148366 94.6
25.0 3.88 1.0 4.0 240753 116.0
50.0 4.52 0.0 4.0 296172 125.0
50.0 1.94 1.0 4.0 479523 155.9
100.0 2.26 0.0 4.0 590785 172.6
100.0 0.97 1.0 4.0 960806 207.3
200.0 1.13 0.0 4.0 1181248 238.2
200.0 0.48 1.0 4.0 1920256 315.4
300.0 0.75 0.0 4.0 1772212 300.0
300.0 0.32 1.0 4.0 2880689 408.5
400.0 0.56 0.0 4.0 2362771 360.2
400.0 0.24 1.0 4.0 3840257 501.5
500.0 0.45 0.0 4.0 2953144 449.0
500.0 0.19 1.0 4.0 4801046 674.7
1000.0 0.23 0.0 4.0 5906296 850.4
1000.0 0.10 1.0 4.0 9600778 1671.7
Table 4.2: The 16 scans done for this analysis, as well as their durations in seconds and the widths between their scanning lines in millimetres.
(a) N steps = 50, direction = Perpendicular (b) N steps = 50, direction = Parallel
(c) N steps = 100, direction = Perpendicular (d) N steps = 100, direction = Parallel
Figure 4.6: The point clouds generated from some of the scans conducted for the analysis of the
crack palette.
Before conducting any analysis on the calculated crack path of each of these scans, it is nec-
essary to note that setting the number of steps instead of their width was a poor choice. Indeed,
it is not fair to compare two scans done with 100 steps and different scanning directions, because
the scanning area is a 97 x 226 mm² rectangle. This makes the width of the steps 2.26 mm in one direction and 0.97 mm in the other. This is visible in Figures 4.6a and 4.6b: the scan is not as fine when it is done perpendicular to the crack direction, while the scan done parallel to it, with the same number of steps, is more detailed. This fact needs to be kept in mind when comparing scans
with different scanning directions. An equivalent metric to the width between the scanning lines
in millimetre is their resolution, in lines per millimetre. Both metrics are equivalent, but using the
width allows us to easily compare this data against the width of a crack or its length.
As stated in the Methodology section, generated crack paths were compared to the original
paths by calculating their mean Lock-step Euclidean distances.
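For reference, a minimal sketch of the mean lock-step Euclidean distance between two paths is given below. It assumes both paths have already been resampled to the same number of points, which is not necessarily how the project's own script handles paths of different lengths.

```python
import numpy as np

def mean_lsed(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Mean lock-step Euclidean distance between two paths of shape (N, 3).
    Point i of path_a is compared with point i of path_b (no time warping)."""
    if path_a.shape != path_b.shape:
        raise ValueError("Lock-step comparison requires paths of equal length")
    return float(np.linalg.norm(path_a - path_b, axis=1).mean())
```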
4.3.1 The case of crack number 6
Crack number 6, being almost nonexistent, will be analysed first. While generating this crack's path, it was noticed that the resulting paths were almost always visibly wrong. The path obtained from the scan made with 200 steps parallel to the cracks is shown in Figure 4.7. Although the crack itself was partially identified (in red in the top half of the Figure), the path-generating algorithm failed to compute a path similar to the "true" path in green. This may be caused by the way the algorithm works: it averages the coordinates of the 30 crack points closest to a moving plane to generate each path point (see Figure 3.20). This is an issue here because only 47 points in total were classified as "crack" in this scan, which also explains why the endpoint is located before the actual end of the crack. Because the number of points averaged per path point was set to 30 for this whole analysis, all of the paths generated for crack number 6 will be incorrect, and this crack is excluded from the rest of the data analysis.
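A hedged sketch of this moving-plane averaging is shown below. The k = 30 constant comes from the description above, while the sweep axis, step size and all names are illustrative assumptions rather than the project's actual implementation.

```python
import numpy as np

def crack_mean_path(crack_points: np.ndarray, k: int = 30, step: float = 1.0) -> np.ndarray:
    """Generate the mean path of a crack from its classified points (shape (N, 3)).
    A plane perpendicular to the x axis sweeps along the crack; at each position
    the k crack points closest to the plane are averaged into one path point."""
    xs = crack_points[:, 0]
    path = []
    for plane_x in np.arange(xs.min(), xs.max(), step):
        dist_to_plane = np.abs(xs - plane_x)            # distance of each point to the plane
        closest = crack_points[np.argsort(dist_to_plane)[:k]]
        path.append(closest.mean(axis=0))
    return np.array(path)
```

With only 47 classified points and k = 30, each averaged path point covers most of the crack, which illustrates why the path of crack number 6 collapses towards its centre.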
Figure 4.7: The path generated by the analysis (in red) for crack number 6 is visibly wrong.
4.3.2 Prediction of the scan duration
The duration of each scan is plotted in Figure 4.8. The shortest scan took 94.6 seconds for a perpendicular trajectory with 25 scanning steps, while the longest took 1671.7 seconds for a parallel scan with 1000 scanning steps. The durations appear to grow linearly with the number of steps, so two linear regressions were computed, one for each scanning direction. The results of the two regressions are detailed in Table 4.3. These values allow the approximate duration of a scan to be predicted for a given number of steps, for this scanning area size only. The parallel scans have a steeper slope and an R² score slightly further from 1 than the perpendicular scans. This may be explained by the fact that the parallel scans require more back-and-forth motion of the robot due to its Z-shaped scanning trajectory, as illustrated in Figure 3.13.
To predict the scanning duration in the general case, this kind of regression should instead relate the scan duration to the width between scanning lines. These widths were calculated by dividing the scanned length, which depends on the scanning direction (either 97 mm or 226 mm), by the number of steps of each scan. The scan duration against the width between scanning lines is plotted in Figure 4.9. It appears to follow a 1/x evolution, so the scan duration was plotted against 1/width_steps and a linear regression was conducted. On this new plot (Figure 4.10), the difference between the two scanning directions almost disappears, and a relation between 1/width_steps and the scan duration, whatever the scanning direction, was found. It is given in Table 4.4. This regression is only valid for the current scanning area size, and more testing should be done to find the influence of the scanning area size on the scan duration.
Figure 4.8: The duration in seconds of the scans against their number of steps, for parallel and perpendicular scanning trajectories. Linear regressions were computed, with R² values of 0.9841 and 0.9985 respectively.
Direction        Slope   Intercept   R² Value   Standard Deviation
Parallel         1.550   7.411       0.9841     0.1141
Perpendicular    0.757   80.473      0.9985     0.0155
Table 4.3: The results of the two linear regressions for the scan durations.
Figure 4.9: The scan duration in seconds against the width between scanning lines in millimetres. It appears to follow a 1/x evolution.
Figure 4.10: The scan duration in seconds against the inverse of the scanning line width (mm⁻¹). Using this quantity instead of the number of steps removes most of the difference caused by the two scanning directions.
Direction   Slope     Intercept   R Value   Standard Deviation
Both        147.697   68.479      0.9730    9.043

Scan duration (s) = 147.697 / Width steps (mm) + 68.479

Table 4.4: The results of the final linear regression, giving a relation between the scan duration and the width of the steps.
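The regression in Table 4.4 can be reproduced with a few lines of SciPy. The sketch below uses the per-scan step widths and durations of Table 4.2; the helper name is illustrative and the printed values are only expected to be approximately those reported above.

```python
import numpy as np
from scipy.stats import linregress

# Step widths (mm) and durations (s) taken from Table 4.2, both directions pooled.
width_steps = np.array([9.04, 3.88, 4.52, 1.94, 2.26, 0.97, 1.13, 0.48,
                        0.75, 0.32, 0.56, 0.24, 0.45, 0.19, 0.23, 0.10])
durations = np.array([94.6, 116.0, 125.0, 155.9, 172.6, 207.3, 238.2, 315.4,
                      300.0, 408.5, 360.2, 501.5, 449.0, 674.7, 850.4, 1671.7])

fit = linregress(1.0 / width_steps, durations)   # duration ≈ slope / width + intercept

def predict_duration(width_mm: float) -> float:
    """Predicted scan duration in seconds, valid for this scanning area size only."""
    return fit.slope / width_mm + fit.intercept

print(fit.slope, fit.intercept, fit.rvalue)      # approximately the values of Table 4.4
print(predict_duration(1.2))                     # duration for a 1.2 mm step width
```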
4.3.3 Quality of the generated crack paths
Figure 4.11 shows the generated paths of the first six cracks. They come from a fine scan, comprising 300 scanned lines in the parallel direction, giving a distance of 0.32 millimetres between scanning lines. Paths were generated for each of the 16 scans, and their lock-step Euclidean distances (LSED) were calculated and averaged per scan. This gave a total of 16 data points, representing the accuracy of each scan over a range of crack widths (0.31 to 2.22 mm, with a mean width of 1.06 mm).
Figure 4.11: The resulting crack path (in red), the true path (in green) and the outline of the cracks
0 to 5 (in blue) for a parallel scan and a width steps of 0.32 mm.
These points are plotted in Figure 4.13 against the step width of each scan. For widths above 1.2 millimetres, the mean LSED exceeds 1 millimetre, which means that the generated paths very likely stray outside the crack's boundary multiple times. One such path is visible in Figure 4.12.
Figure 4.12: An obviously wrong generated crack path (in red) on crack number 4, which was
obtained from a coarse parallel scan (Width steps = 3.88 mm).
Figure 4.13: The quality of the generated scans in terms of mean LSED (mm) against the width of the scanning steps (mm).
Figure 4.14: A zoomed-in view of the red rectangular area of Figure 4.13, showing the scans with the shortest width between steps and the smallest mean LSED.
A zoomed-in view of the LSED plot is shown in Figure 4.14. It shows that for a scanning width of less than 1.2 millimetres, the mean LSED of the scans falls below 0.4 millimetres, so the paths are fairly accurate. A clear distinction is also visible between parallel and perpendicular scans: even when compared fairly by the width of their steps rather than their number, the mean LSED of the parallel scans averages 0.16 millimetres, against 0.27 millimetres for the perpendicular scans. Finally, the minimum LSED appears to plateau at 0.16 millimetres, as reducing the width between steps below 1 millimetre did not significantly reduce the LSED.
To understand the difference in quality between the parallel and perpendicular scans, examining the resulting 3D point clouds helps. Figure 4.15 shows two scans of similar step width done in the two directions, together with the true path from the SVG file. The mean LSED values already indicate that parallel scans lead to smaller errors, and the clouds themselves show that the crack is more visible and better defined in the parallel scan. The edges of the crack can easily be missed by perpendicular scans depending on where the scanning lines fall, whereas in parallel scans the edges of the crack are reliably captured by the high resolution (2048 points) of each line. For future scans, the scanning trajectory should therefore be kept as parallel as possible to the crack, so that the straight scanning lines are "cut" by the defect and identify it correctly.
Figure 4.15: Two scans side by side, with relatively close width steps and two different scanning
directions. The true path is represented in green in the middle of the Figure.
Figure 4.14 also showed that the mean LSED stops improving below a certain step width. This may be caused by the way the data processing is conducted and by the fact that the generated crack paths were compared to the SVG file of the crack palette; these two factors may have prevented the LSED from reaching zero. We can infer that the maximum distance between scanning lines should be less than the approximate width of the crack.
Figure 4.16 shows the mean LSED of these finest scans, those with a scanning width of less than 1.2 millimetres, detailed for each crack of the crack palette. Crack number 5, the narrowest of them, shows the worst results. Once again, the parallel scans perform better than the perpendicular ones. The last two plots show that, for cracks in this range of widths (0.31 to 2.22 mm), scanning parallel to the crack with a step width of approximately 1.2 millimetres gives the best accuracy for the shortest scan. Using a finer step width would only produce longer scans with the same accuracy. A scan with these parameters would take approximately 211 seconds to complete according to the model of Section 4.3.2.
Figure 4.16: The LSED (mm) for the scans with a scanning width of less than 1.2 mm, averaged
for each crack of the palette.
4.4 Influence of the manual steps on the resulting accuracy
This section presents the results of the repeatability test done to make sure that the user, when performing the manual steps of the crack analysis, does not cause the results to fluctuate greatly. For each of the ten analyses and each crack, the mean LSED between the points of the calculated crack path and the original "true" path from the SVG file was calculated. These 70 means were averaged per crack, and the corresponding standard deviations and coefficients of variation were calculated as well. All of these results are shown in Table 4.5.
The mean LSED to the correct path is less than 0.6 millimetres for all cracks except crack number 6. As mentioned before, this crack is almost nonexistent and cannot be used to assess the accuracy of the crack detection algorithm. For all of the cracks, the coefficient of variation of the mean LSED between the calculated path and the true path is 4.65% on average, which means that the variation in the path caused by the manual steps of the processing is minimal.
Crack Number 0 1 2 3 4 5 6
LSED to True Paths (mm) 0.438 0.385 0.372 0.389 0.219 0.561 12.941
Standard Deviation (mm) 0.021 0.023 0.026 0.014 0.006 0.024 0.518
Coefficient of Variation (%) 4.86 5.90 7.03 3.60 2.89 4.27 4.00
Table 4.5: The results of the repeatability test.
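The per-crack statistics of Table 4.5 follow directly from the ten repeated analyses. The snippet below is a minimal sketch of that aggregation; the ten mean-LSED values shown are illustrative placeholders, not the project's measured data.

```python
import numpy as np

# Ten mean-LSED values (mm) for one crack, one per repeated manual analysis (illustrative).
lsed_runs = np.array([0.44, 0.42, 0.45, 0.43, 0.46, 0.44, 0.41, 0.45, 0.43, 0.45])

mean_lsed = lsed_runs.mean()
std_lsed = lsed_runs.std(ddof=1)                  # sample standard deviation
coeff_variation = 100.0 * std_lsed / mean_lsed    # coefficient of variation in %

print(f"mean = {mean_lsed:.3f} mm, std = {std_lsed:.3f} mm, CV = {coeff_variation:.2f} %")
```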
4.5 Resulting crack paths on the cylinder head gasket
This section shows the results of the crack scanning process and path computation on the cylinder head gasket made in Section 3.3. Three cracks have been processed: they are circled in red in Figure 4.17, and the resulting cracks and crack paths are shown in Figure 4.18. Even when a crack is not exactly identified, the user can correct the resulting point cloud during the manual selection, and the mean path remains reasonably accurate.
Figure 4.17: Scanning area = 154.9 x 199.2 mm², Width steps = 1.549 mm, N steps = 100, Scanning direction = along x, Speed factor = 4.
Figure 4.18: On the left-hand side, the results of the crack generating algorithm and the selection
of the user (with the crack in white and the parts that are not the crack in yellow). The resulting
crack path is in blue on the right-hand side, on the scanned point cloud (in red).
5 Conclusion
5.1 Summary
In this project, a robotic system capable of automatically scanning a designated area was designed and implemented. It works by traversing the whole area to be scanned and stopping a set number of times to save lines of 2048 3D points. This solution was able to generate point clouds of the scanned components, which can be passed through an algorithm designed to automatically identify the location of cracks and generate their mean paths (2).
To assess the quality of this solution, two test parts, a "crack palette" showing seven cracks of varying width and a cylinder head gasket with various added cracks, were laser cut on an Epilog laser cutter from a 5-millimetre-thick plastic sheet. The quality of the crack palette was checked by comparing point clouds generated from pictures of the real part against a point cloud generated from the Scalable Vector Graphics file used to make it. This comparison returned a mean distance of 0.5 millimetres between the two, which may derive from various factors, including the setup used to take the photographs and their processing into point clouds (1).
The impact of the robot's speed on the quality of the scan was tested by performing scans of the same component at seven different speeds, with all other parameters kept constant. The resulting distances between each generated cloud and a baseline cloud showed no major variation between a slow and a fast scan, apart from a large difference in scan duration. A maximum "speed factor" of 4 was chosen to keep the scanning time short while limiting the jerk and vibration that higher speeds caused in the robot and its workspace, which can damage the robot over time (3).
The two remaining parameters to investigate were the scanning direction relative to the crack and the scanning resolution. Sixteen scans with different parameters were conducted, and the mean lock-step Euclidean distances between the generated crack paths and their "true" counterparts generated from the Scalable Vector Graphics file were calculated. The results showed that a width between scanning lines of less than 1.2 millimetres, approximately the mean width of the cracks of the palette, was necessary to obtain satisfying accuracy on the generated crack paths. The results also showed that scanning parallel to the cracks, so that the scanning lines are "cut" by the crack in the point cloud, led to better crack path generation than scanning perpendicular to them (4).
Finally, the human influence in the manual steps of the crack identification process was tested by having the same user analyse one scan ten times. The results showed a coefficient of variation of approximately 5%, indicating that the variation caused by the user is minimal (5).
5.2 Limitations
This project was limited to the scanning of flat components from a top view perspective. For the
design of a proof of concept, it simplified the scanning trajectory generation and helped us focus
more on point cloud processing and crack identification. However, on real parts, curved surfaces
are very likely to be present. This kind of surface would force the system to adapt its altitude when
doing the scanning to keep the part in the scanning range.
Cracks could also be present on vertical surfaces of components, which would force the robot
to conduct the scan from different angles. A more adaptive trajectory generation algorithm would
need to be implemented to analyse such defects.
Our implementation is able to output the point cloud of the identified crack and calculate its mean path, but robotic welding may need more information, such as the point normals or details of the crack's interior and depth. Gathering this information may require adding sensors to the system, such as cameras or touch probes. Such additional sensors may also be required for parts with complex shapes or edges that a laser scanner may struggle to survey.
Finally, the data processing implemented relies on the inputs of a user to select the broad loca-
tion of the crack and check for potential errors. This means that the robot can still work on its own,
but a remote operator needs to conduct the manual steps of the analysis.
5.3 Contribution
This system was designed to be a proof of concept of crack identification on unknown parts. Its
novelty rests in the fact that it does not require any prior knowledge about the component, except
for its overall location, and does not require heavy calibration. While automatic laser scanning devices already exist, in the quality control sector for instance, they are often set up in a carefully optimised configuration, require a specific calibration routine and only work on one specific task. This system paves the way toward autonomous inspection and even self-repairing robots.
This kind of robot could be used in hazardous environments, such as a nuclear power plant, to carry out inspections without a worker needing to be present in person. Another application is round-the-clock inspection of locations that require constant monitoring, such as hospital plant rooms. Having human workers permanently surveying these essential rooms is tiring for them and expensive for the hospital, whereas robots could be designed to patrol these locations and make sure that no defects have appeared on the systems. They could also be equipped with more sensors, such as liquid detectors for leak detection or microphones to analyse the machines' vibration patterns.
Finally, the design of this kind of system is a step towards more environmentally friendly practices. Repairing parts is often overlooked in favour of the simplicity of ordering new ones, but that solution comes with costs and a negative environmental impact. Repairing a part allows its reuse and avoids throwing it away, which matters because metal recycling today has the potential to be sustainable but still has many weaknesses [43].
5.4 Future Work
The designed system proved to work on the parts made for testing purposes, but some modifications and further testing are required to bring it closer to being commercially viable. Its main limitation is that scanning is restricted to the top view of flat components. Extending the scanning trajectory generation and crack identification processes to scan any crack from any angle would be the next logical step. Rethinking the scanning trajectory to take the shape of an S instead of a Z would also decrease the total scanning duration, and further investigation of the relationship between the scanning area, the scanning resolution and the scan duration would allow for a better scan duration model. Furthermore, the scanning trajectories currently follow either the x or y direction of the world frame, but scanning at an arbitrary angle may be required if the user wants to keep the scanning lines as perpendicular as possible to the scanned crack to maximise accuracy.
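As a sketch of the suggested S-shaped (serpentine) trajectory, the function below generates scanning-line waypoints that alternate direction on every pass instead of returning to the same side each time, as the current Z-shaped trajectory does. The coordinates, frame and parameter names are assumptions for illustration, not the project's RoboDK code.

```python
import numpy as np

def serpentine_waypoints(width: float, length: float, n_steps: int) -> np.ndarray:
    """Start and end points (x, y) of each scanning line over a width x length area.
    Consecutive lines alternate direction, removing the empty return moves of a
    Z-shaped trajectory."""
    waypoints = []
    for i in range(n_steps):
        x = i * width / (n_steps - 1)        # position of this scanning line across the area
        start, end = (x, 0.0), (x, length)
        if i % 2 == 1:                       # reverse every other line
            start, end = end, start
        waypoints.extend([start, end])
    return np.array(waypoints)

# Example for the 97 x 226 mm scanning area used in this report, with 100 steps:
# points = serpentine_waypoints(width=97.0, length=226.0, n_steps=100)
```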
Another major aspect of the scanning process worth investigating is the scanning frequency. It is currently limited by the fact that the robot is brought to a full stop at each scanning position to save the 3D points. Saving the points continuously while the robot moves along its scanning trajectory may decrease the scan duration substantially, but vibration and accuracy would need to be closely monitored to avoid errors.
Once the 3D point cloud of the part has been saved, the crack path generating algorithm is used
to output the mean path of the crack. This algorithm was designed for the scope of this project, and
many of its parameters were empirically set. A large-scale analysis with multiple parts to scan and
various crack profiles could be conducted to find the best set of parameters to use in the algorithm
for an optimal crack path generation.
Concerning potential additional work on the 3D point clouds, superimposing multiple clouds captured along different scanning directions may yield a more accurate point cloud, at the cost of increased processing time and a risk of additional errors due to poor superposition and incorrect point registration. Once a finer point cloud is created, it would also be possible to generate a 3D model of the part instead of keeping the point cloud. Working with 3D models allows easier comparison with the original part file, but requires additional software and more variables to account for.
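Superimposing scans taken along different directions would first require registering the clouds. A minimal sketch using Open3D's point-to-point ICP is given below; Open3D is not used elsewhere in this project, and the file names and distance threshold are placeholder assumptions.

```python
import open3d as o3d

def register_clouds(source_path: str, target_path: str, threshold_mm: float = 1.0):
    """Align a source point cloud onto a target cloud with point-to-point ICP
    and return the merged cloud."""
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold_mm,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(result.transformation)   # apply the estimated rigid transform
    return source + target                    # concatenate the two aligned clouds

# merged = register_clouds("scan_parallel.ply", "scan_perpendicular.ply")
```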
A Appendix
Parts of the Python scripts and the RoboDK simulation are available in the following GitHub repository. The scripts are not directly usable as provided because the 3D point clouds are not saved in the repository.
legentil42, ‘Laser scanning for crack detection’. Sep. 18, 2022. Accessed: Sep. 18, 2022.
[Online]. Available: https://github.com/legentil42/Laser-scanning-master
Bibliography
[1] M. Avila, S. Begot, F. Duculty, and T. S. Nguyen. “2D image based road pavement crack detection by calculating minimal paths and dynamic programming”. en. In: 2014 IEEE International Conference on Image Processing (ICIP). Paris, France: IEEE, Oct. 2014, pp. 783–787. ISBN: 978-1-4799-5751-4. DOI: 10.1109/ICIP.2014.7025157. Available from: http://ieeexplore.ieee.org/document/7025157/ [Accessed Sept. 18, 2022].
[2] F. Blais and J. A. Beraldin. Recent Developments in 3D Multi-modal Laser Imaging Applied to Cultural Heritage. en. Machine Vision and Applications [online]. 17.6 (Dec. 2006), pp. 395–409. ISSN: 1432-1769. DOI: 10.1007/s00138-006-0025-3. Available from: https://doi.org/10.1007/s00138-006-0025-3 [Accessed June 16, 2022].
[3] F. Caiazzo, F. Curcio, G. Daurelio, and F. C. Minutolo. Laser cutting of different poly-
meric plastics (PE, PP and PC) by a CO2 laser beam. en. Journal of Materials Processing
Technology [online]. 159.3 (Feb. 2005), pp. 279–285. ISSN: 09240136. DOI: 10.1016/j.
jmatprotec.2004.02.019. Available from: https://linkinghub.elsevier.com/
retrieve/pii/S0924013604002109 [Accessed Sept. 11, 2022].
[4] J. Chow, T. Xu, S.-M. Lee, and K. Kengkool. Development of an Integrated Laser-Based
Reverse Engineering and Machining System. en. The International Journal of Advanced
Manufacturing Technology [online]. 19.3 (Feb. 2002), pp. 186–191. ISSN: 1433-3015. DOI:
10.1007/s001700200013. Available from: https://doi.org/10.1007/s001700200013
[Accessed June 16, 2022].
[5] L. Ciocca and R. Scotti. CAD-CAM generated ear cast by means of a laser scanner and rapid
prototyping machine. en. The Journal of Prosthetic Dentistry [online]. 92.6 (Dec. 2004),
pp. 591–595. ISSN: 00223913. DOI: 10.1016/j.prosdent.2004.08.021. Available from:
https://linkinghub.elsevier.com/retrieve/pii/S0022391304005542 [Accessed
June 16, 2022].
[6] D. De Becker, J. Dobrzanski, L. Justham, and Y. Goh. A laser scanner based approach for
identifying rail surface squat defects. en. Proceedings of the Institution of Mechanical En-
gineers, Part F: Journal of Rail and Rapid Transit [online]. 235.6 (July 2021), pp. 763–
773. ISSN: 0954-4097, 2041-3017. DOI: 10.1177/0954409720962252. Available from:
http://journals.sagepub.com/doi/10.1177/0954409720962252 [Accessed June 16,
2022].
[7] J. Digne. “Similarity based filtering of point clouds”. In: 2012 IEEE Computer Society Con-
ference on Computer Vision and Pattern Recognition Workshops. ISSN: 2160-7516. June
2012, pp. 73–79. DOI: 10.1109/CVPRW.2012.6238917.
[8] Y. Duan and C. Yang. Low-complexity Point Cloud Filtering for LiDAR by PCA-based
Dimension Reduction. en (), p. 7.
[9] Free STL file 5 cylinder head gasket - 3D printer design to download - Cults. Available from:
https://cults3d.com/en/3d-model/art/5-cylinder-head-gasket [Accessed
Sept. 5, 2022].
[10] J. Glud, J. Dulieu-Barton, O. Thomsen, and L. Overgaard. Automated counting of off-axis
tunnelling cracks using digital image processing. Composites Science and Technology [on-
line]. 125 (Jan. 2016). DOI: 10.1016/j.compscitech.2016.01.019.
[11] C. Gunkel, A. Stepper, A. C. Müller, and C. H. Müller. Micro crack detection with Dijkstra’s shortest path algorithm. en. Machine Vision and Applications [online]. 23.3 (May 2012), pp. 589–601. ISSN: 1432-1769. DOI: 10.1007/s00138-011-0324-1. Available from: https://doi.org/10.1007/s00138-011-0324-1 [Accessed Sept. 18, 2022].
[12] P. Hong-Seok and T. U. Mani. Development of an Inspection System for Defect Detection
in Pressed Parts Using Laser Scanned Data. en. Procedia Engineering [online]. 69 (2014),
pp. 931–936. ISSN: 18777058. DOI: 10.1016/j.proeng.2014.03.072. Available from:
https://linkinghub.elsevier.com/retrieve/pii/S187770581400318X [Accessed
June 16, 2022].
[13] Y. Hosni and L. Ferreira. Laser based system for reverse engineering. en. Computers & Industrial Engineering [online]. 26.2 (Apr. 1994), pp. 387–394. ISSN: 03608352. DOI: 10.1016/0360-8352(94)90072-8. Available from: https://linkinghub.elsevier.com/retrieve/pii/0360835294900728 [Accessed June 16, 2022].
[14] C. Hu, L. Kong, and F. Lv. Application of 3D laser scanning technology in engineering
field. en. E3S Web of Conferences [online]. 233 (2021). Ed. by L. Zhang, S. Defilla, and W.
Chu, p. 04014. ISSN: 2267-1242. DOI: 10.1051/e3sconf/202123304014. Available from:
https://www.e3s-conferences.org/10.1051/e3sconf/202123304014 [Accessed
June 16, 2022].
[15] M. C. Israel and R. G. Pileggi. Use of 3D laser scanning for flatness and volumetric analysis
of mortar in facades. en. Revista IBRACON de Estruturas e Materiais [online]. 9 (Feb. 2016).
Publisher: IBRACON - Instituto Brasileiro do Concreto, pp. 91–122. ISSN: 1983-4195. DOI:
10.1590/S1983-41952016000100007. Available from: http://www.scielo.br/j/
riem/a/RK6DFYH5XBjPnFWGqMDnqMp/?lang=en [Accessed June 16, 2022].
[16] B. Kleiner, C. Munkelt, T. Thorhallsson, G. Notni, P. Kühmstedt, and U. Schneider. Hand-
held 3-D Scanning with Automatic Multi-View Registration Based on Visual-Inertial Nav-
igation. en. International Journal of Optomechatronics [online]. 8.4 (Oct. 2014), pp. 313–
325. ISSN: 1559-9612, 1559-9620. DOI: 10.1080/15599612.2014.942931. Available
from: http://www.tandfonline.com/doi/abs/10.1080/15599612.2014.942931
[Accessed July 1, 2022].
[17] K. H. Lee and H.-p. Park. Automated inspection planning of free-form shape parts by laser
scanning. en. Robotics and Computer-Integrated Manufacturing [online]. 16.4 (Aug. 2000),
pp. 201–210. ISSN: 0736-5845. DOI: 10.1016/S0736-5845(99)00060-5. Available from:
https://www.sciencedirect.com/science/article/pii/S0736584599000605
[Accessed June 16, 2022].
[18] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Ander-
son, J. Davis, J. Ginsberg, J. Shade, and D. Fulk. “The digital Michelangelo project: 3D scan-
ning of large statues”. In: Proceedings of the 27th annual conference on Computer graphics
and interactive techniques. SIGGRAPH ’00. USA: ACM Press/Addison-Wesley Publishing
Co., July 2000, pp. 131–144. ISBN: 978-1-58113-208-3. DOI: 10.1145/344779.344849.
Available from: https://doi.org/10.1145/344779.344849 [Accessed June 16, 2022].
[19] J. Liu, Q. Zhang, J. Wu, and Y. Zhao. Dimensional accuracy and structural performance
assessment of spatial structure components using 3D laser scanning. en. Automation in Con-
struction [online]. 96 (Dec. 2018), pp. 324–336. ISSN: 0926-5805. DOI: 10 . 1016 / j .
autcon.2018.09.026. Available from: https://www.sciencedirect.com/science/
article/pii/S0926580518301699 [Accessed June 16, 2022].
[20] S. H. Mian and A. Al-Ahmari. Comparative analysis of different digitization systems and
selection of best alternative. en. Journal of Intelligent Manufacturing [online]. 30.5 (June
2019), pp. 2039–2067. ISSN: 1572-8145. DOI: 10.1007/s10845-017-1371-x. Available
from: https://doi.org/10.1007/s10845-017-1371-x [Accessed June 16, 2022].
[21] Micro-Epsilon-Messtechnik. “Operating Instructions for scanCONTROL 30xx”. In: 2019.
[22] A. Mohan and S. Poobal. Crack detection using image processing: A critical review and
analysis. en. Alexandria Engineering Journal [online]. 57.2 (June 2018), pp. 787–798. ISSN:
11100168. DOI: 10.1016/j.aej.2017.01.020. Available from: https://linkinghub.
elsevier.com/retrieve/pii/S1110016817300236 [Accessed Sept. 18, 2022].
[23] A. Nurunnabi, D. Belton, and G. West. “Robust Segmentation in Laser Scanning 3D Point
Cloud Data”. In: 2012 International Conference on Digital Image Computing Techniques
and Applications (DICTA). Dec. 2012, pp. 1–8. DOI: 10.1109/DICTA.2012.6411672.
[24] A. Nurunnabi, G. West, and D. Belton. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. en. Pattern Recognition [online]. 48.4 (Apr. 2015), pp. 1404–1419. ISSN: 00313203. DOI: 10.1016/j.patcog.2014.10.014. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0031320314004312 [Accessed June 16, 2022].
[25] H. Oliveira and P. L. Correia. Automatic Road Crack Detection and Characterization. IEEE
Transactions on Intelligent Transportation Systems [online]. 14.1 (Mar. 2013). Conference
Name: IEEE Transactions on Intelligent Transportation Systems, pp. 155–168. ISSN: 1558-
0016. DOI: 10.1109/TITS.2012.2208630.
[26] H. Oliveira and P. Lobato Correia. “Identifying and retrieving distress images from road
pavement surveys”. In: 2008 15th IEEE International Conference on Image Processing.
ISSN: 2381-8549. Oct. 2008, pp. 57–60. DOI: 10.1109/ICIP.2008.4711690.
[27] pptk - Point Processing Toolkit. original-date: 2018-07-11T08:33:04Z. Aug. 2022. Available
from: https://github.com/heremaps/pptk [Accessed Sept. 11, 2022].
[28] K. M. Publishing. 3D Scanner Speeds Measurement of Power Plant Components. en-US. Nov. 2021. Available from: https://metrology.news/3d-scanner-speeds-measurement-of-power-plant-components/ [Accessed Sept. 13, 2022].
[29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep Learning on Point Sets for 3D Clas-
sification and Segmentation. arXiv:1612.00593 [cs]. Apr. 2017. DOI: 10.48550/arXiv.
1612.00593. Available from: http://arxiv.org/abs/1612.00593 [Accessed Sept. 18,
2022].
[30] H. D. Reidenbach, H. Warmbold, J. Hofmann, and K. Dollinger. “First Experimental Results
On Eye Protection By The Blink Reflex For Laser Class 2”. In: 2001.
[31] R. B. Rusu and S. Cousins. “3D is here: Point Cloud Library (PCL)”. en. In: 2011 IEEE
International Conference on Robotics and Automation. Shanghai, China: IEEE, May 2011,
pp. 1–4. ISBN: 978-1-61284-386-5. DOI: 10.1109/ICRA.2011.5980567. Available from:
http://ieeexplore.ieee.org/document/5980567/ [Accessed Sept. 18, 2022].
[32] G. Sansoni, M. Trebeschi, and F. Docchio. State-of-The-Art and Applications of 3D Imaging
Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation. en. Sensors
[online]. 9.1 (Jan. 2009). Number: 1 Publisher: Molecular Diversity Preservation Interna-
tional, pp. 568–601. ISSN: 1424-8220. DOI: 10.3390/s90100568. Available from: https:
//www.mdpi.com/1424-8220/9/1/568 [Accessed June 16, 2022].
[33] S. K. Sinha and P. W. Fieguth. Automated detection of cracks in buried concrete pipe images.
en. Automation in Construction [online]. 15.1 (Jan. 2006), pp. 58–72. ISSN: 09265805. DOI:
10.1016/j.autcon.2005.02.006. Available from: https://linkinghub.elsevier.
com/retrieve/pii/S0926580505000452 [Accessed Sept. 18, 2022].
[34] S. Son, H. Park, and K. H. Lee. Automated laser scanning system for reverse engineering and
inspection. en. International Journal of Machine Tools and Manufacture [online]. 42.8 (June
2002), pp. 889–897. ISSN: 08906955. DOI: 10.1016/S0890-6955(02)00030-5. Available
from: https : / / linkinghub . elsevier . com / retrieve / pii / S0890695502000305
[Accessed Sept. 17, 2022].
[35] C. Suchocki and A. Wasilewski. Geodetic surveys of Cliff shores with the application of
scanning technology (Jan. 2009), pp. 93–100.
[36] A. M. A. Talab, Z. Huang, F. Xi, and L. HaiMing. Detection crack in image using Otsu method and multiple filtering in image processing techniques. en. Optik [online]. 127.3 (Feb. 2016), pp. 1030–1033. ISSN: 0030-4026. DOI: 10.1016/j.ijleo.2015.09.147. Available from: https://www.sciencedirect.com/science/article/pii/S0030402615012164 [Accessed Sept. 18, 2022].
[37] Y. Tao, A. Both, R. I. Silveira, K. Buchin, S. Sijben, R. S. Purves, P. Laube, D. Peng, K.
Toohey, and M. Duckham. A comparative analysis of trajectory similarity measures. en.
GIScience & Remote Sensing [online]. 58.5 (July 2021), pp. 643–669. ISSN: 1548-1603,
1943-7226. DOI: 10.1080/15481603.2021.1908927. Available from: https://www.
tandfonline.com/doi/full/10.1080/15481603.2021.1908927 [Accessed Sept. 2,
2022].
[38] M. Tatarevic, B. Gapinski, and N. Swojak. The Use of Optical Scanner for Analysis of
Surface Defects. en. In: DAAAM Proceedings. Ed. by B. Katalinic. Vol. 1. DAAAM Inter-
national Vienna, 2019, pp. 0076–0085. ISBN: 978-3-902734-22-8. DOI: 10.2507/30th.
daaam.proceedings.010. Available from: http://www.daaam.info/Downloads/
Pdfs/proceedings/proceedings_2019/010.pdf [Accessed June 16, 2022].
[39] J. Tomasiak. The Use of Optical Methods for Leak Testing Dampers. en. Procedia Engi-
neering [online]. 69 (2014), pp. 788–794. ISSN: 18777058. DOI: 10.1016/j.proeng.
2014.03.055. Available from: https://linkinghub.elsevier.com/retrieve/pii/
S1877705814003014 [Accessed Sept. 17, 2022].
[40] Q. Wang, Y. Tan, and Z. Mei. Computational Methods of Acquisition and Processing of 3D
Point Cloud Data for Construction Applications. en. Archives of Computational Methods
in Engineering [online]. 27.2 (Apr. 2020), pp. 479–499. ISSN: 1886-1784. DOI: 10.1007/
s11831-019-09320-4. Available from: https://doi.org/10.1007/s11831-019-
09320-4 [Accessed June 16, 2022].
[41] X. Wang, Z. Xie, K. Wang, and L. Zhou. Research on a Handheld 3D Laser Scanning System for Measuring Large-Sized Objects. en. Sensors [online]. 18.10 (Oct. 2018). Number: 10. Publisher: Multidisciplinary Digital Publishing Institute, p. 3567. ISSN: 1424-8220. DOI: 10.3390/s18103567. Available from: https://www.mdpi.com/1424-8220/18/10/3567 [Accessed July 1, 2022].
[42] Wingman Tool Changer. Aug. 2021. Available from: http://triplea-robotics.com/
tool-changer/ [Accessed Sept. 5, 2022].
[43] S. Wright, S. Jahanshahi, F. Jorgensen, and D. Brennan. Is Metal Recycling Sustainable?
Journal Abbreviation: Green Processing 2002 - Proceedings: International Conference on the
Sustainable Proceesing of Minerals Publication Title: Green Processing 2002 - Proceedings:
International Conference on the Sustainable Proceesing of Minerals. Jan. 2002.
[44] H. Zhao, J.-P. Kruth, N. Van Gestel, B. Boeckmans, and P. Bleys. Automated dimensional
inspection planning using the combination of laser scanner and tactile probe. en. Mea-
surement [online]. 45.5 (June 2012), pp. 1057–1066. ISSN: 02632241. DOI: 10.1016/j.
measurement.2012.01.037. Available from: https://linkinghub.elsevier.com/
retrieve/pii/S0263224112000528 [Accessed June 16, 2022].
[45] Q. Zou, Y. Cao, Q. Li, Q. Mao, and S. Wang. CrackTree: Automatic crack detection from
pavement images. en. Pattern Recognition Letters [online]. 33.3 (Feb. 2012), pp. 227–238.
ISSN: 01678655. DOI: 10.1016/j.patrec.2011.11.004. Available from: https://
linkinghub.elsevier.com/retrieve/pii/S0167865511003795 [Accessed Sept. 18,
2022].
63

More Related Content

What's hot

Analysis by semantic segmentation of Multispectral satellite imagery using de...
Analysis by semantic segmentation of Multispectral satellite imagery using de...Analysis by semantic segmentation of Multispectral satellite imagery using de...
Analysis by semantic segmentation of Multispectral satellite imagery using de...
Yogesh S Awate
 

What's hot (20)

IoT Standardization and Implementation Challenges
IoT Standardization and Implementation ChallengesIoT Standardization and Implementation Challenges
IoT Standardization and Implementation Challenges
 
Iot architecture
Iot architectureIot architecture
Iot architecture
 
IoT Tutorial for Beginners | Internet of Things (IoT) | IoT Training | IoT Te...
IoT Tutorial for Beginners | Internet of Things (IoT) | IoT Training | IoT Te...IoT Tutorial for Beginners | Internet of Things (IoT) | IoT Training | IoT Te...
IoT Tutorial for Beginners | Internet of Things (IoT) | IoT Training | IoT Te...
 
Machine Learning and Artificial Intelligence
Machine Learning and Artificial IntelligenceMachine Learning and Artificial Intelligence
Machine Learning and Artificial Intelligence
 
Artificial inteligence
Artificial inteligenceArtificial inteligence
Artificial inteligence
 
A Survey on Stroke Prediction
A Survey on Stroke PredictionA Survey on Stroke Prediction
A Survey on Stroke Prediction
 
Bringing ArcGIS spatial analysis to bear on IoT data
Bringing ArcGIS spatial analysis to bear on IoT dataBringing ArcGIS spatial analysis to bear on IoT data
Bringing ArcGIS spatial analysis to bear on IoT data
 
AlexNet(ImageNet Classification with Deep Convolutional Neural Networks)
AlexNet(ImageNet Classification with Deep Convolutional Neural Networks)AlexNet(ImageNet Classification with Deep Convolutional Neural Networks)
AlexNet(ImageNet Classification with Deep Convolutional Neural Networks)
 
Internet of Things: state of the art
Internet of Things: state of the artInternet of Things: state of the art
Internet of Things: state of the art
 
AI IN AGRICULTURE
AI IN AGRICULTUREAI IN AGRICULTURE
AI IN AGRICULTURE
 
Big data analysis and Internet of Things(IoT)
Big data analysis and Internet of Things(IoT)Big data analysis and Internet of Things(IoT)
Big data analysis and Internet of Things(IoT)
 
918 prasu seminar
918 prasu seminar918 prasu seminar
918 prasu seminar
 
Digital Twin Technology
Digital Twin TechnologyDigital Twin Technology
Digital Twin Technology
 
Machine Learning and Internet of Things
Machine Learning and Internet of ThingsMachine Learning and Internet of Things
Machine Learning and Internet of Things
 
Analysis by semantic segmentation of Multispectral satellite imagery using de...
Analysis by semantic segmentation of Multispectral satellite imagery using de...Analysis by semantic segmentation of Multispectral satellite imagery using de...
Analysis by semantic segmentation of Multispectral satellite imagery using de...
 
Artificial Intelligence (AI) and Climate Change
Artificial Intelligence (AI) and Climate ChangeArtificial Intelligence (AI) and Climate Change
Artificial Intelligence (AI) and Climate Change
 
Plant disease detection using machine learning algorithm-1.pptx
Plant disease detection using machine learning algorithm-1.pptxPlant disease detection using machine learning algorithm-1.pptx
Plant disease detection using machine learning algorithm-1.pptx
 
Internet of Things (IoT) and its applications
Internet of Things (IoT) and its applicationsInternet of Things (IoT) and its applications
Internet of Things (IoT) and its applications
 
Internet of Things & Its application in Smart Agriculture
Internet of Things & Its application in Smart AgricultureInternet of Things & Its application in Smart Agriculture
Internet of Things & Its application in Smart Agriculture
 
Artificial Intelligence in Education|Evolve Machine Learners
Artificial Intelligence in Education|Evolve Machine LearnersArtificial Intelligence in Education|Evolve Machine Learners
Artificial Intelligence in Education|Evolve Machine Learners
 

Similar to Laser scanning for crack detection and repair with robotic welding

Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_Liu
Jiaqi Liu
 
Final Report 9505482 5845742
Final Report 9505482 5845742Final Report 9505482 5845742
Final Report 9505482 5845742
Bawantha Liyanage
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
Maxime Javaux
 
TFG_Cristobal_Cuevas_Garcia_2018.pdf
TFG_Cristobal_Cuevas_Garcia_2018.pdfTFG_Cristobal_Cuevas_Garcia_2018.pdf
TFG_Cristobal_Cuevas_Garcia_2018.pdf
Gerard Labernia
 
Masters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_KukrejaMasters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_Kukreja
ANKIT KUKREJA
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGA
Nikita Pinto
 

Similar to Laser scanning for crack detection and repair with robotic welding (20)

Master_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_LiuMaster_Thesis_Jiaqi_Liu
Master_Thesis_Jiaqi_Liu
 
Milan_thesis.pdf
Milan_thesis.pdfMilan_thesis.pdf
Milan_thesis.pdf
 
Final Report 9505482 5845742
Final Report 9505482 5845742Final Report 9505482 5845742
Final Report 9505482 5845742
 
Thesis
ThesisThesis
Thesis
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
Project Report Distance measurement system
Project Report Distance measurement systemProject Report Distance measurement system
Project Report Distance measurement system
 
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
 
Thesis Report
Thesis ReportThesis Report
Thesis Report
 
Realtimesamplingofutilization
RealtimesamplingofutilizationRealtimesamplingofutilization
Realtimesamplingofutilization
 
JJ_Thesis
JJ_ThesisJJ_Thesis
JJ_Thesis
 
thesis-2
thesis-2thesis-2
thesis-2
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
 
TFG_Cristobal_Cuevas_Garcia_2018.pdf
TFG_Cristobal_Cuevas_Garcia_2018.pdfTFG_Cristobal_Cuevas_Garcia_2018.pdf
TFG_Cristobal_Cuevas_Garcia_2018.pdf
 
Diplomarbeit
DiplomarbeitDiplomarbeit
Diplomarbeit
 
Honours_Thesis2015_final
Honours_Thesis2015_finalHonours_Thesis2015_final
Honours_Thesis2015_final
 
Thesis
ThesisThesis
Thesis
 
Masters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_KukrejaMasters Thesis - Ankit_Kukreja
Masters Thesis - Ankit_Kukreja
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGA
 
Sarda_uta_2502M_12076
Sarda_uta_2502M_12076Sarda_uta_2502M_12076
Sarda_uta_2502M_12076
 
Fulltext02
Fulltext02Fulltext02
Fulltext02
 

Recently uploaded

Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Dr.Costas Sachpazis
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
dharasingh5698
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
ankushspencer015
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 

Recently uploaded (20)

Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSUNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
 
NFPA 5000 2024 standard .
NFPA 5000 2024 standard                                  .NFPA 5000 2024 standard                                  .
NFPA 5000 2024 standard .
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 

Laser scanning for crack detection and repair with robotic welding

  • 1. Laser scanning for crack detection and repair with robotic welding By François, Wieckowiak MSc Robotics Dissertation Department of Engineering Mathematics UNIVERSITY OF BRISTOL & Department of Engineering Design and Mathematics UNIVERSITY OF THE WEST OF ENGLAND A MSc dissertation submitted to the University of Bristol and the University of the West of England in accordance with the requirements of the degree of MASTER OF SCIENCE IN ROBOTICS in the Faculty of Engineering. September 18, 2022
  • 2. Declaration of own work I declare that the work in this MSc dissertation was carried out in accordance with the require- ments of the University’s Regulations and Code of Practice for Research Degree Programmes and that it has not been submitted for any other academic award. Except where indicated by specific reference in the text, the work is the candidate’s own work. Work done in collaboration with, or with the assistance of, others, is indicated as such. Any views expressed in the dissertation are those of the author. François Wieckowiak, September 18, 2022 1
  • 3. Acknowledgement I could not have undertaken this journey without the members of the Robotics Innovation Fa- cility at the Bristol Robotics Laboratory, who were by my side during my Master’s thesis: my su- pervisor Professor Farid Dailami, who directed me towards this dissertation project which matches my deep interest in Machine Vision and Robotics, as well as Nathan Churchill, Shaun Jordan and "soon-to-be Doctor" Arjuna Mendis. I am also grateful to my great friends, Coena, Shrestha and Sripad, members of "The Mountain", without whom I would not have had an experience in Bristol near as exceptional as it was. Lastly, I’d like to mention the unconditional support from my partner Séphora that, albeit being on the other side of the English Channel, kept me going throughout this year. 2
  • 4. Abstract Autonomous inspection and repair of critical components in systems that are essential to the functioning of an industry is the next step in automated maintenance. Today, inspec- tions are often carried out manually by expert workers, hence they are time-consuming and resource expensive. This report presents a proof-of-concept robotic system comprised of a UR5 robot equipped with a profile laser scanner based on optical triangulation, capable of automatically scanning a designated area of a part and fully identifying cracks by outputting their locations and mean paths. This information can then be transferred to a robotic welding system for automatic repair of said crack. This proof of concept can only proceed to scans of flat components from a top-view for now, but its parameters (resolution, scanning direction and scanning speed) were extensively tested to find their optimal values for the fastest and most accurate scans. A crack "palette" showing cracks with various widths was laser cut to quantify the precision of the system on which cracks were identified with a mean error of less than 0.2 millimetres with the optimal parameters. This approach is novel as it does not rely on any prior knowledge of the scanned part and paves the way towards developing autonomous inspection and possibly self-repairing critical systems. Number of words in the dissertation: 10,575 words. 3
  • 5. Contents Page 1 Introduction 7 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2 Aims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2 Literature Review 10 2.1 Laser scanning in different industries . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.2 Point cloud processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Crack detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3 Research Methodology 15 3.1 Universal Robot UR5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.2 ScanCONTROL 3000-50/BL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Laser cut parts to put the scanner to the test . . . . . . . . . . . . . . . . . . . . . 18 3.4 The scanning process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.5 The data processing on the point cloud gathered . . . . . . . . . . . . . . . . . . . 25 3.6 A repeatability test to make sure that the user has a limited influence on the results 32 4 Results and Discussion 33 4.1 Quality assessment of the laser cut parts . . . . . . . . . . . . . . . . . . . . . . . 33 4.2 The impact of the scanning speed on the point cloud quality . . . . . . . . . . . . . 36 4.3 Optimal scanning parameters for a fast and accurate scan . . . . . . . . . . . . . . 39 4.4 Influence of the manual steps on the resulting accuracy . . . . . . . . . . . . . . . 49 4.5 Resulting crack paths on the cylinder head gasket . . . . . . . . . . . . . . . . . . 50 4
  • 6. 5 Conclusion 52 5.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 5.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 5.3 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 5.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 A Appendix 56 5
  • 7. List of Tables 3.1 The width of each crack of the testing palette . . . . . . . . . . . . . . . . . . . . 19 3.2 Speed and acceleration parameters depending on the speed factor . . . . . . . . . . 24 3.3 Parameters of the 16 test scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.1 The results of the point clouds superposition. . . . . . . . . . . . . . . . . . . . . 33 4.2 The results of the 16 testing scans . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.3 The results of the two linear regressions for the scans durations . . . . . . . . . . . 44 4.4 The results of the last linear regression . . . . . . . . . . . . . . . . . . . . . . . . 45 4.5 The results of the repeatability test . . . . . . . . . . . . . . . . . . . . . . . . . . 50 6
  • 8. 1 Introduction 1.1 Motivation In every sector of industry, monitoring critical parts and predictive maintenance is essential to the proper functioning of any system. When a part of the system breaks down, a simple choice must be made: is it better to repair it or replace it altogether? For complex components, the best solution is often to repair them. Indeed, manufacturing processes for small series production of complex parts are often resource-expensive and time-consuming. Casting and moulding necessitate the de- sign and making of complex moulds before starting production, while machining needs a large amount of different equipment and machinery to achieve specific goals. Furthermore, industries such as aerospace, nuclear energy and military often require a low amount of extremely complex components that suffer heavy usage and thus are at risk of breaking and cracking due to fatigue and repetitive stress. These parts are regularly monitored and replaced when their lifetime is over, inducing a huge cost for the company. Repairing these parts is advantageous on several levels: it is cheaper, faster, and more environmentally friendly. (a) A handheld laser scanner used in a power plant [28]. (b) A laser scanner used in a geodetic survey [35]. Figure 1.1: Some laser scanners used in different industries. 7
  • 9. Robots can be used in this context to conduct the inspection and sometimes the repairing of these parts on a one-off basis. An expert worker can often localise a defect in a component and proceed to the repair, but using robots allows repairing in remote or hazardous locations using multiple sensors. This kind of robot can also be programmed to automatically patrol between critical locations, such as the many plant rooms of a hospital that need to be properly functioning at all times. 3D parts scanning is a time-consuming process. They are done with the help of 3D scanning, which can be done in various ways, including contact scanners with a probe, photogrammetry or laser scanning. It is the latter that will be the focus of this dissertation. Laser scanning is often used by geomatic engineers to conduct surveys of buildings, but it is also used in industries for 3D object scanning for quality control or preventive maintenance. Some examples are visible in Figure 1.1. A common already existing technology is handheld laser scanners, that workers use to manually scan parts and output a 3D model of the real component, with tolerances of the order of magnitude of a millimetre. For bigger components such as aircraft wings or power plants components, the entire scanning process with older laser scanning technologies can take up to 16 hours, including data acquisition and data processing [28]. That is why there is a lot of demand for automating this exhausting and repetitive task with robots. Automated laser scanning already exists, but it often requires an intensive calibration routine, and only works in specific settings with prior knowledge. They are often used in inline quality control on a specific production line where the expected geometry of the manufactured parts is known. The approach of this report aims at allowing scanning of unknown part, without any prior knowledge of the part required. To allow for more flexible scanning, the scanner can be mounted on a robotic arm, so that the scanner itself is moving instead of the part the be scanned, to allow for in-situ inspection. This system could detect cracks and save their shape and location for a welding system to repair them with limited human supervision. 8
  • 10. 1.2 Aims This project will focus on the design of a robotic system capable of inspecting defective components containing cracks with a laser scanner and outputting the location and shape of the said crack. This solution would be novel in the sense that it should be able to scan any part in any location without any necessary fine-tuning and prior knowledge of the component. The scope of this dissertation will be limited to the scanning of parts from a top-view perspective. Its accuracy and speed will be deeply investigated to find the optimal configuration and variables to use depending on the requirements of the welding robot and the time available for the data acquisition. The robot is expected to output a point cloud model of the crack along with its mean path and location. 1.3 Objectives The aims of this project are described as such: (1) Choose a process to create simple components with intended cracks to test our solution on and assess the quality of the production process chosen. (2) Design a proof of concept scanning robot, using a UR5 Universal Robot and a scanCON- TROL 3050/BL profile scanner, which can generate the point cloud of a scanned part, as well as a program capable of accurately identifying cracks from the resulting point cloud. (3) Understand the impact of the scanning speed on the quality of the scan by comparing the resulting point clouds and scan duration of scans of the same component at different speeds. (4) Find the relationship between the scanning parameters, which are the scanning direction (parallel or perpendicular to a crack) and the scanning resolution, and the performances of the robot, which are measured with the accuracy of the crack detection algorithm and the duration of the scan, by testing the system with various combination of parameters. (5) Make sure that the participation of a human operator in the analysis of the scan has a limited effect on the generated crack path by repeating the same crack analysis multiple times and looking at the resulting standard deviation of the accuracy. 9
2 Literature Review

2.1 Laser scanning in different industries

2.1.1 Large scale laser scanning

Laser scanners are used in various engineering fields. [14] lists nine of them, seven of which can be considered "large scale" scanning. They include ground landscape surveys, protection of buildings and cultural relics, or even deformation monitoring for tunnels, bridges and other large-scale constructions. These applications often require long-range laser scanners, which use technologies such as time of flight or phase shift measurements and have scanning ranges of the order of magnitude of the kilometre. They lack accuracy, with a resolution in the range of a metre to a centimetre.

Laser scanners are of huge help in the construction sector, such as in [15], where a hemispheric phase-shift type laser scanner was used to scan the surface of buildings and obtain point clouds of their surfaces. Post-processing was conducted to assess the flatness of the surfaces and estimate the amount and cost of mortar necessary to correct the flatness. This article estimates the maximum precision of this kind of laser to be 0.5 millimetres, allowing for accurate measurements of buildings' facades. The drawback of this kind of laser scanner is its reach, which is at most around 150 metres. Another application of laser scanners is the quality control of spatial structural elements in buildings. They are scanned in [19] using a handheld laser scanner, and processing of the resulting 3D point cloud is used to match it to the original model of the structure. It allows for a faster and extremely precise structural and mechanical analysis of this kind of structure. The handheld scanner used functions by optical triangulation and is reported to have a resolution of 0.1 millimetres and an accuracy of ± 0.03 millimetres. These scanners are very effective at scanning small and complex components but rely on the expertise of their users.

In these large-scale 3D scans, multi-modal systems can make use of the advantages of laser
  • 12. scanners as well as other sensors, such as cameras to obtain colour and texture data of the scanned areas. A multi-modal approach was used in [2] to generate photo-realistic 3D models of the rooms of a museum and its artworks. These fast and non-contact solutions are the safest ways to perform scans in such sensible environments. 2.1.2 Small scale laser scanning Smaller scale scanning is also widely used in various domains.[14] mentioned the measuring of complex industrial equipment, that are rendered more complex when the shapes of the scanned parts may hinder the capabilities of the traditional scanners, as well as medical measurements for prosthesis making, where non-contact solutions are almost always required. In [5], a laser scanner was used to duplicate an ear cast for a prosthesis. The usage of coloured pins as markers to stitch together point clouds resulting from a scan done from multiple directions is notable. [32] is a 2019 survey of the state of the art in 3D imaging sensors. Alongside photogrammetry and interferometry, laser scanners, using different technologies such as time of flight measurement or laser triangulation, and their various applications are listed. With feature sizes going all the way down to the µm in surface roughness analysis, it illustrates the many applications requiring laser sensors. Indeed, laser scanning proved to be extremely effective even at this scale, as seen in [38], where various optical scanners were compared in the analysis of surface defects and roughness. Chronologically, earlier usage of laser triangulation was for reverse engineering. In 1994, [13] set up a scanning arrangement with this technology to obtain a 3D point cloud of a part to be reverse engineered. They explained at the time that contact coordinate measurement systems may not be usable for complex or bigger components. However, they mentioned how reference points may be necessary to stitch together multiple point clouds from scans of the same part from different angles. A few years later, in 2002, [4] managed to laser scan and machine copies of complex aircraft parts, with an error, including both the scanning and machining errors, of ± 0.127 millimetres. In [17] and [34], the authors made use of a line laser scanner working with optical triangulation and focused on the generation of a scanning trajectory that englobes all of the possible scanning points. It necessitates prior knowledge of the part CAD model but can generate any scanning trajectory in different directions and with varying altitudes. The part is put on a rotary table to facilitate the scanning process, but this is only possible for parts that can be taken apart from their 11
systems. When that is not possible, a more flexible scanning device needs to be used, such as in [18], where a gantry was designed around a statue by Michelangelo in a museum to carry out a complete scan. Necessary precautions were taken to ensure the safety of the art piece. As seen before, colour data was gathered thanks to a digital camera, making this system a multi-modal one. When touching the part is authorised, multi-modal systems can use touch probes in combination with laser scanners, as has been done in [44], to achieve greater accuracy. [20] compared touch probes, laser line scanning and portable arm CMMs on various parameters and illustrated how touch probes allow for extreme precision at the cost of an extended scanning duration. Just like most of the papers cited above, this solution required prior knowledge of the part, here to select the best sensors for each surface to map. Stereo vision, triangulation and laser lines are used in combination in [41] to scan parts with an error range of ± 0.25 millimetres. Finally, closely linked to the aims of this project, [6] presents the concept of an autonomous system intended to automatically detect and weld defects on railway tracks. A laser line scanner is used to gather a point cloud of the top surface of the rail, then potential defects are identified, allowing a separate welding robot to proceed with the repair.

These articles all demonstrated how laser scanning is capable of generating accurate 3D representations of real-world parts with excellent precision, while highlighting common practices, such as using markers to stitch clouds together, or using prior knowledge of the part to help with scan trajectory generation.
  • 14. camera, [40] focused on the different approaches for data cleansing, registration, segmentation and object recognition. It mentioned many algorithms and approaches that proved to be usable in our project, especially converting the point clouds to images to proceed to traditional machine vision approaches such as median filtering, morphological erosion and dilation, using kernels whose sizes have been estimated from the point cloud itself. Data registration, for combining point clouds taken from different sensors or views, is made possible using ICP algorithm or key points detection. [16] added an inertial measurement unit to a handheld laser scanner to simplify the data stitching, simply by offsetting the scanned point clouds by the position of the scanner. This solution could be easily adapted for a robotic arm by using the position of the end effector of the robot, which is precisely calculated thanks to its forward kinematics model. For data cleansing, [24] made use of a statistical approach to fit the best matching planes to point clouds and remove the most probable outliers. Their approach proved to be computationally simple and faster than other traditional methods at accurately identifying outliers in noisy point clouds. [7] applied denoising on point clouds without the need of converting them to a mesh in the first place, which can be computationally expensive when the cloud is noisy. Their approach is non-local and is based on a similarity measure between the analysed point and other points of the cloud. PCA was used in [8] to reduce the dimensionality of a 3D point cloud to a 2D space, with each dimension being generated from a different Principal Component Analysis. It allows for a low complexity classification of points into noise and non-noise classes, filtering, and broadcasting of the 2D points back into the 3D space. When multiple objects are present in a scan, such as in scans of construction sites or outdoor environments, Principal Component Analysis is used to both reduce the size of the data and easily cluster the points of the clouds for further processing. [23] applied A Robust PCA method to accurately segment the point clouds in regions with a semi-automatic method depending on a few user-defined parameters. Another approach used voxel segmentation to divide the point cloud of a construction site into sub-clouds, then also used PointNet [29] on the data, a deep learning approach to point cloud segmentation and classification, and aims at comparing the results of the two solutions. To assess the quality of a point cloud of a part, it is often compared to the 3D model used to create said part. In [12], a modified Iterative Closest Point (ICP) algorithm is modified to allow for 13
matching the point cloud to the CAD model. An objective function representing the Mean Squared Error between the point cloud and the model was proposed, and the algorithm tries to minimise it by rotating and translating the point cloud to match the model. The output of this algorithm is a 3D representation of the part superimposed on its 3D model, highlighting the geometric deviations. [39] conducted a similar analysis by using the point cloud from a structured light 3D scanner and comparing it to the original CAD model of the scanned part, resulting in an effective method for the quality assessment of parts during a production process.

2.3 Crack detection

Most of the crack detection research has been done on photographs of surfaces. [22] reviewed 50 papers working on crack detection and listed the most used processing techniques. They include morphological approaches to extract the crack surface from the pictures [33], mask generation with various thresholding methods (manual, Otsu [10] [36]), or graph-based approaches using shortest path algorithms like Dijkstra's [11]. [1] proposed a different method which calculates the minimal path of a set length from a starting pixel, which always matched the true crack path. It was tested on synthetic and real pictures and proved to be effective at identifying cracks on road pavements, even though this task is made complicated by the variations in surface texture.

[25] used a neural network trained on grayscale images of cracks on roads, on which the intensity of the pixels of the cracks was visibly different from the rest of the ground. The authors also compared six algorithms for crack classification into three classes: longitudinal, transversal, and miscellaneous cracks. This was also achieved by [26], but crack detection was simply done by normalising and thresholding the images to extract the pixels of the cracks, which have a different intensity. Shadow removal may be necessary in case of uneven lighting conditions in the pictures. [45] implemented such a process and applied a probabilistic approach to automatically detect cracks from images of pavement. To detect the edges of cracks in our project, edge-based segmentation on point clouds was mentioned by [40], which allows for fast segmentation of features, albeit sensitive to noise and uneven point cloud density.
3 Research Methodology

3.1 Universal Robot UR5

The robot used throughout this project is a Universal Robots UR5. It is set up on its workstation in the RIF Laboratory at the Bristol Robotics Laboratory. It is programmed using the software RoboDK, which allows simulation and control of the system using the built-in functions and the Python SDK. This software holds many ready-to-use assets, including the 3D model of the robot as well as its forward and inverse kinematics to move it around easily. A screenshot of the simulation, as well as a picture of the actual workstation, are shown in Figure 3.1. A pointer (Figure 3.2) that can be mounted on the robot was also 3D printed to allow simple teaching of precise positions to the robot.

Figure 3.1: The workstation and the simulated robot.
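For illustration, a minimal sketch of how the UR5 can be driven through RoboDK's Python API is given below. It is not the project's actual control script: the item name "UR5", the speed values and the target pose are assumptions made for the example.

```python
# Minimal sketch of driving the UR5 through RoboDK's Python API (not the project's
# actual script). The item name "UR5", speeds and target pose are illustrative.
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT
from robodk.robomath import transl, rotx
from math import pi

RDK = Robolink()                             # connect to the running RoboDK instance
robot = RDK.Item('UR5', ITEM_TYPE_ROBOT)     # fetch the simulated or connected robot

robot.setSpeed(100, 100)                     # linear speed (mm/s) and joint speed (deg/s)

# Cartesian target 300 mm in front of the base, 200 mm up, tool pointing down.
# RoboDK resolves the inverse kinematics when executing the linear move.
target = transl(300, 0, 200) * rotx(pi)
robot.MoveL(target)

print(robot.Pose())                          # current tool pose in the reference frame
```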
Figure 3.2: The 3D printed pointer tool to teach specific points to the robot.

3.2 ScanCONTROL 3000-50/BL

The laser scanner used for this project is a Micro-Epsilon scanCONTROL 3000-50/BL profile scanner [21], which functions on the laser triangulation principle. Its laser is made of visible blue light with a wavelength of 405 nm. Its maximum resolution is 2048 points along its scanning line. It is a class 2M laser, meaning that it is only dangerous when stared at directly for more than 0.25 seconds, which is highly unlikely due to the eye-closure reflex [30]. This laser has been chosen due to its availability at the RIF lab at the Bristol Robotics Laboratory. A plastic holder, allowing the UR5 robot to pick up and hold the laser scanner via its Wingman Tool Changer system [42], was designed and 3D printed before this project. The scanner can communicate with a Python script using a Python library called pyllt through an Ethernet cable, which doubles as the power source thanks to Power over Ethernet technology. The scanner and its holder are shown in Figure 3.3, while the scanning area of the scanner and its dimensions are shown in Figure 3.4.

The documentation of the laser mentions several factors that can generate errors in the scanned points. They are listed below:

• Scanning highly reflective or transparent components may deceive the laser sensors tasked with the optical triangulation. The parts that we investigated were laser cut in acrylic sheets, which will not pose any issue.
• Colour differences and variations in the penetration depth of the laser can result in inaccurate measurements. The exposure parameters of the scanner can only be changed as a whole for each profile, so if parts with different colours and roughness are scanned, precautions should be taken to scan these areas separately.

• A non-uniform temperature spread in the sensor may lead to inaccuracy. For the testing, the scanner was turned on at least twenty minutes before scanning to ensure a uniform temperature in the system.

• External light can disturb the readings of the sensor. Sadly, it was not possible to isolate the workspace of the robot, so this is a possible source of inaccuracy.

• Mechanical vibration can disturb the accuracy of the scanner in the range of the µm, which is why the robot makes a full stop at each scanning step.

• Surface roughness of 5 µm and above can be detected by the scanner, generating surface noise on the resulting point cloud. For our application of identifying cracks with a width of the order of magnitude of a millimetre, this should not cause any issue.

• Specific part geometries may obstruct the view of the sensor or be in the way of the laser emitter. Because we are scanning flat components, this will not be an issue for our testing.

We will be using this laser scanner with the robot by bringing the end effector of the robot to a full stop at various positions along a scanning trajectory and saving the scanned points each time. Another way to use this scanner is to keep the movement of the robot continuous and use the scanner at a higher frequency to save points along the way. This method is more prone to errors caused by vibrations, especially at high speed, so the first approach will be used instead.
  • 19. Figure 3.3: The scanner attached to its 3D printed holder and connected to its ethernet cable. Figure 3.4: Dimensional drawing sensor scan- CONTROL 30xx-50, dimensions in mm (inches) 3.3 Laser cut parts to put the scanner to the test To test for the quality of the scan, components with intended defects were made. One of them is a testing crack "palette", showing the same crack seven times with varying widths. It was designed and saved as a Scalable Vector Graphics file (SVG), then cut on the laser cutter of the lab, an Epilog laser cutter, in a 5 millimetres thick plastic sheet. The SVG file and the resulting part can be seen in Figure 3.5, and the width of each crack is shown in Table 3.1. This means of production was chosen because it was the fastest and easiest way to make this component. Laser cutting usually has a tolerance of ± 0.25 millimetres, but because it is a removing process, it is prone to its quality being affected by the quality of the material used. The plate cut is relatively thin (5 millimetres), so its thickness should not influence the quality of the cut [3]. Alternatives could have been to use 3D printing, with technologies such as Polyjet printing or even Stereolithography printing, which can bring this tolerance down to ± 0.05 - 0.01 mm. 18
Figure 3.5: The sketch used for the laser cutting and the resulting crack palette.

Crack No.    0     1     2     3     4     5     6
Width (mm)   2.22  1.71  1.38  1.08  0.72  0.31  ∼0

Table 3.1: The width of each crack of the testing palette. Crack 6 is almost nonexistent. The mean width is 1.06 mm.

The path of each crack was generated by converting the Scalable Vector Graphics (SVG) file to a point cloud. Points were generated at a set z coordinate by iterating through the paths of the file. We set the density of points so that the crack itself is made of roughly 40,000 points, giving the edges of a crack a linear point density of approximately 450 points per millimetre. The resulting point cloud is visible in Figure 3.6.
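As an illustration of this conversion step, the short sketch below samples points along the paths of an SVG file at a fixed z coordinate. The report does not name the SVG parsing library it used; svgpathtools, the file name and the sampling density are assumptions made for the example.

```python
# Hedged sketch of turning an SVG outline into a flat point cloud, as described above.
# svgpathtools is one common choice; "crack_palette.svg" is a hypothetical file name.
import numpy as np
from svgpathtools import svg2paths

paths, _ = svg2paths("crack_palette.svg")

points = []
z = 0.0                                  # fixed altitude for the flat outline
samples_per_path = 5000                  # tune to reach the desired point density
for path in paths:
    for t in np.linspace(0.0, 1.0, samples_per_path):
        p = path.point(t)                # complex number: x + 1j * y
        points.append((p.real, p.imag, z))

cloud = np.asarray(points)
print(cloud.shape)
```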
  • 21. Figure 3.6: The point cloud generated from the SVG file of the crack palette. Finally, the true path of each crack that will be used as a baseline to compare our scan against was computed by averaging the left and right edges of the crack. 400 points were generated along the length of the crack, giving a resolution of 0.11 mm or a point density of 9 points per millimetre, which is enough for our welding application. The resulting path and cracks’ edges can be seen in Figure 3.7. Figure 3.7: The generated cracks’ edges and paths for cracks 0, 1 and 2. These paths will be used as a baseline to assess the quality of the scan. 20
To make sure that the cut palette matches the original file, top-view pictures of each crack were taken using an iPhone XR on a makeshift rig. Seven pictures were taken, each having a crack directly below the camera to avoid the distortion effect of the lens. A telecentric camera lens could have been used to limit this distortion even more and obtain a picture of the top surface of the palette. These pictures were then manually binarized to generate a mask for each crack with the photo editing software PhotoFiltre. This process can be seen in Figure 3.8. Using the contour detection algorithm from the Python library OpenCV, each mask was converted to a point cloud with a point density of 22 points per millimetre, which is the lowest attainable with the resolution of the camera used.

Figure 3.8: The binarization was manually done with a photo editing software called PhotoFiltre.

To compare the point cloud generated from the SVG file and the point cloud generated from the pictures of the actual palette, the two were superimposed by computing the affine transformation necessary to match the four corners of each crack. This process is illustrated in Figure 3.9. This was done for each crack, and the resulting combination of the two point clouds can be seen in Figure 3.10. To quantitatively compare these two point clouds, the average distance between closest points was calculated using the following equation: if A and B are two point clouds of sizes n_A and n_B, with a and b being points belonging to these clouds, the average distance is calculated as

dist(A, B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} dist(a, b)
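The sketch below illustrates this comparison under stated assumptions: the corner coordinates and the stand-in clouds are placeholders, and OpenCV's estimateAffine2D is one way (not necessarily the one used in this project) to fit the affine transformation before the mean closest-point distance is evaluated.

```python
# Hedged sketch: warp one crack outline onto the other with an affine transform
# estimated from four manually picked corners, then compute dist(A, B).
import numpy as np
import cv2
from scipy.spatial import cKDTree

def mean_closest_distance(A, B):
    """dist(A, B) = (1 / n_A) * sum over a in A of min over b in B of ||a - b||."""
    tree = cKDTree(B)
    d, _ = tree.query(A, k=1)
    return float(d.mean())

# Four corresponding corners (x, y) picked on each cloud, in the same order
# (placeholder coordinates, not measured values).
corners_photo = np.float32([[12, 10], [182, 14], [180, 96], [14, 92]])
corners_svg   = np.float32([[0, 0], [170, 0], [170, 85], [0, 85]])

# Least-squares affine fit; four point pairs over-determine the six affine parameters.
M, _ = cv2.estimateAffine2D(corners_photo, corners_svg)

# Stand-in clouds: in practice these come from the photograph mask and the SVG file.
photo_xy = np.random.rand(1000, 2) * 180.0
svg_xy   = np.random.rand(1200, 2) * 170.0

warped = photo_xy @ M[:, :2].T + M[:, 2]     # apply the 2x3 affine transform
print("mean distance (mm):", mean_closest_distance(warped, svg_xy))
```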
  • 23. Figure 3.9: The affine transformation is calculated by manually selecting the four corners of each crack. The red and green point clouds respectively come from the SVG file and the mask generated from the photograph. Figure 3.10: The superposition of the point clouds from the SVG file (in red) and from the pho- tographs. A zoomed-in area of one of the crack is shown. Another part with intended cracks was laser cut: it consists of a portion of a cylinder head gasket found online [9] with added cracks. Its SVG file used for its laser cutting as well as a picture of the final part is shown in Figure 3.11. This part will be used to conduct testing in the following chapter. 22
  • 24. Figure 3.11: The sketch used for the laser cutting and the resulting plastic cylinder head gasket. 3.4 The scanning process The scope of this project is limited to the scanning of cracks on flat components and from a top-view perspective. This makes this research more focused on the scanning accuracy and data processing rather than the mechanical and geometrical aspect of the scan, such as the orientation or trajectory of the robot. They will be addressed in this part which describes the scanning process. This process was programmed and tested only on the simulation in the first place, then sent to the robot once it was finished. The first step of the scanning process makes use of the pointer described in section 3.1. It is automatically mounted on the robot using the Wingman tool changer system available. The user then needs to guide the robot to the four corners of the area they want the scan to be done. The resulting bounding box is saved, then the robot automatically switches tools by removing the pointer and grabbing the scanner. The chosen area is scanned by the robot with max joint and linear speeds and accelerations set by a configurable speed factor (See table 3.2), with a set number of steps ("n steps" in Figure 3.12) in either the x or y direction. A small overlap is deliberately programmed to ensure the whole area is correctly scanned. The coordinates of each point are calculated by adding the position of the scanner relative to the robot base to the position of the point relative to the scanner. Once the entire area has been covered, the point cloud, the parameters of the robot and the duration of the scan are saved in a text file. For the sake of simplicity, the robot follows a Z-shaped trajectory to cover the entire area, but an S-shaped trajectory may reduce the overall duration of the scan. Indeed, it is 23
  • 25. visible in Figure 3.13 that depending on the direction of the scan, it may require only one back and forth motion (Figure 3.13b), or numerous (5 in Figure 3.13a). During these motions, no scanning is done by the robot, so some time is wasted. Speed Factor 0.5 1 2 3 4 5 6 Max Speed (mm/s) 50 100 200 300 400 500 600 Max Joint Speed (deg/s) 50 100 200 300 400 500 600 Max Acceleration (mm/s²) 25 50 100 150 200 250 300 Max Joint Acceleration (deg/s²) 25 50 100 150 200 250 300 Table 3.2: The max speed and acceleration parameters of the robot depending on the chosen speed factor. Figure 3.12: A simulated scan showing the effective scanning area. The z coordinate of the robot is calculated according to the altitude of the scanning area in yellow. 24
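A minimal sketch of how such a Z-shaped raster of stop positions can be generated is given below. The roughly 50 mm laser line width, the pass overlap and the coordinate convention are assumptions made for illustration and are consistent with Figure 3.13, but they are not the project's exact implementation; the real program derives the area from the four taught corners.

```python
# Hedged sketch of the Z-shaped raster described above. It assumes a roughly 50 mm wide
# laser line: the robot sweeps along the scanning direction, stopping n_steps times to
# save a profile, then returns (without scanning) and shifts over for the next pass.
import numpy as np

def z_raster_stops(width, length, n_steps, line_width=50.0):
    """Yield (x, y) stop positions covering a width x length rectangle."""
    n_passes = max(1, int(np.ceil(width / line_width)))
    # centre of each pass across the width, with a small overlap between passes
    x_centres = np.linspace(line_width / 2.0, width - line_width / 2.0, n_passes)
    for x in x_centres:
        for y in np.linspace(0.0, length, n_steps):
            yield (x, y)      # the robot makes a full stop here and saves one profile
        # Z-shape: fly back to y = 0 before the next pass; nothing is saved on the way

stops = list(z_raster_stops(width=97.0, length=226.0, n_steps=100))
print(len(stops), "stop positions")   # 2 passes x 100 steps = 200 stops for this area
```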
(a) Scan with N steps = 50, done in the direction of the width. (b) Scan with N steps = 50, done in the direction of the length.
Figure 3.13: The same scan conducted in two different directions.

To figure out the best speed for the robot, the cylinder head gasket mentioned in the previous section was scanned at various speeds, i.e. with the 7 speed factors listed in Table 3.2, with all other parameters fixed. The number of steps was set at 100, the scanning direction was constant, and the scanning area and part were set once and not moved throughout the scanning. Each scan was done three times, and the resulting point clouds were all compared against a baseline scan done at a speed factor of 0.5. The metric used to compare the point clouds is still the average distance between the closest points of two point clouds:

dist(A, B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} dist(a, b)

The duration of each scan was also saved in order to figure out the relation between the robot's speed and the time taken to complete its scan.

3.5 The data processing on the point cloud gathered

After a defective component has been scanned, data processing is necessary to identify the path of its crack. For this purpose, the Python library Point Processing Toolkit [27] was used. It allows for simple 3D visualisation of huge point clouds and manual point selection by the user.
  • 27. The processing of the generated point cloud to calculate the path of a crack works as follows: • The altitude and range of the slice to keep only the top surface of the scanned component is manually inputted by the user. Only the points situated in this slice are saved. (Figure 3.14) • A sample flat area without a crack is manually selected by the user on the 3D point cloud. This area is then converted to a black and white image, from which the length of the longest empty (white) area along the scanning direction is saved as the "kernel size" k for future processing. (Figure 3.15) • The overall location of the crack is manually selected by the user, then a black and white mask is generated, representing the projection of the location of the crack on a flat surface. Each point of the cloud is converted to a black pixel on the mask. (Figure 3.16) • The kernel size calculated before is used to generate a 1 by k kernel. This kernel is used to do erosion on the current mask. This has the effect of filling in the gaps between the scanning lines while keeping the crack empty. This forces us to do scans with a distance between scanning lines smaller than the maximum width of the crack. (Figure 3.17) • The mask of the crack is converted back to a point cloud, with the z coordinate of each point being set to the mean altitude of the points of the rest of the cloud. It is aligned back to its original location thanks to an affine transformation. The user then selects only the crack using the Python interface. This step allows them to remove possible errors in the crack identification. (Figure 3.18) 26
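The sketch below illustrates the rasterise-and-erode steps of the list above under stated assumptions: the pixel resolution, the altitude slice and the kernel size k stand in for the values the user picks interactively, and the scanning lines are assumed to run vertically in the image so that the gaps between them are closed by a horizontal kernel.

```python
# Hedged sketch of the mask-and-erosion steps described above. Scanned points become
# black pixels on a white image; eroding with a 1 x k horizontal kernel lets black
# spread over the thin white gaps between scanning lines, while the wider crack stays
# white. The resolution, altitude slice and k are placeholders for interactive values.
import numpy as np
import cv2

def crack_pixels(points, px_per_mm=10.0, z_centre=8.5, z_range=1.0, k=9):
    # 1) keep only the top surface of the scanned part
    top = points[np.abs(points[:, 2] - z_centre) <= z_range]

    # 2) rasterise: one black pixel per scanned point, white background
    xy = (top[:, :2] - top[:, :2].min(axis=0)) * px_per_mm
    cols = xy[:, 0].astype(int)
    rows = xy[:, 1].astype(int)
    img = np.full((rows.max() + 1, cols.max() + 1), 255, dtype=np.uint8)
    img[rows, cols] = 0

    # 3) erode with a 1 x k kernel: black expands horizontally, closing white gaps
    #    thinner than k pixels, so only the crack remains white
    kernel = np.ones((1, k), dtype=np.uint8)
    eroded = cv2.erode(img, kernel)

    # 4) the surviving white pixels are the candidate crack region
    ys, xs = np.nonzero(eroded == 255)
    return np.column_stack([xs, ys])
```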
Figure 3.14: Keeping only the top surface of the scanned part by manually setting the correct altitude and range (z = 8.5 ± 1 mm).

Figure 3.15: The empty area selected by the user is converted to a mask. The "kernel size" consists of the maximum distance (in pixels) between two lines, plus one.

Figure 3.16: The selection and conversion of the area of the crack to a black and white mask.

Figure 3.17: With a horizontal kernel of length k, the white spaces between the scanning lines are filled, but not the crack.
Figure 3.18: On the top part of the figure, the user selected (in yellow) parts of the newly generated cloud that are not the crack. On the bottom part of the figure, the cloud is then separated into two distinct clouds, with the right one containing the crack only.

This process was designed for the scope of this dissertation, namely scans done from a top-view perspective of flat components, with the direction of the scan parallel or perpendicular to the overall direction of the crack. Having a scan done at a 45-degree angle to the crack would raise issues with the image processing used, as the dilation may erase parts of the crack. A solution would be to have a scan done at an extremely high resolution (i.e., with a huge number of steps) to avoid the need for dilation in the crack identification process. This could be achieved by using the scanner at its maximum frequency of up to 10 kHz, but it would require extremely precise control of the robot, both in terms of travelling along its trajectory at a constant speed and of keeping the correct height relative to the part. The methodology we have chosen for our scans allows for more flexibility and error margin in both the robot and part positions.

This part of the process relies heavily on the inputs of a user, who has to select a flat area to determine the kernel size, select the location of the crack, and check and correct the final output of the algorithm.
  • 30. At this point, the crack has been correctly identified and localised in the original coordinate frame. The next step consists of the computation of the crack mean path so that a comparison between a scanned path and the true path generated from the SVG file of the crack palette (visible in Figure 3.7) is possible. The process to calculate the scanned crack path starts with the identification of the two points of the crack that are the furthest apart. A straight line (comprised of 50 points, in this case, approx- imately one point per millimetre) is then drawn between them, as seen in green in Figure 3.19. Figure 3.19: The crack points (in blue) determined with the image processing, subsampled by a factor of 30 for better visualisation, as well as the straight line between the two furthest points of the crack (in green), and the generated crack path (in red). For each of these points, a plane going through it and perpendicular to the direction of the straight line is calculated. The coordinates of n (30 in this case) of the closest points of the crack are then averaged and yield the coordinate of a new point of the final crack path. This process is visible in Figure 3.20. This solution makes sure that the path will be correctly identified even when the crack is curved. 29
Figure 3.20: The calculation of a point of the crack path. For each point of the green line, a perpendicular plane is drawn. The resulting path point is the average of the coordinates of the n closest points of the crack to this plane.

With 50 points of the crack path now calculated, linear interpolation is used between successive points of the crack path to bring the total number of points to 400. This final path is depicted in red in Figure 3.21.

Figure 3.21: The path generated (in red) for crack number 0 of the crack palette, the "true" path generated from the SVG file (in blue) and the crack edges from the SVG file (in green).
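A condensed sketch of this path-generation procedure is given below, under stated assumptions: the brute-force search for the two furthest points is only practical for clouds of up to a few thousand points, and the counts 50, 30 and 400 mirror the values quoted in the text but are configurable.

```python
# Hedged sketch of the path-generation step described above: a line between the two
# furthest crack points is swept with perpendicular planes, each path point being the
# mean of the n crack points closest to the current plane, then the path is resampled.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def crack_path(crack_pts, n_line=50, n_nearest=30, n_out=400):
    # two points of the crack that are furthest apart (brute force, O(N^2) memory)
    d = squareform(pdist(crack_pts[:, :2]))
    i, j = np.unravel_index(np.argmax(d), d.shape)
    a, b = crack_pts[i, :2], crack_pts[j, :2]
    direction = (b - a) / np.linalg.norm(b - a)

    path = []
    for t in np.linspace(0.0, 1.0, n_line):
        origin = a + t * (b - a)
        # distance of every crack point to the plane through `origin`,
        # perpendicular to the straight line joining the two extreme points
        dist_to_plane = np.abs((crack_pts[:, :2] - origin) @ direction)
        nearest = crack_pts[np.argsort(dist_to_plane)[:n_nearest]]
        path.append(nearest.mean(axis=0))          # average -> one path point
    path = np.asarray(path)

    # linear interpolation between successive path points, up to n_out points
    s = np.linspace(0.0, 1.0, len(path))
    s_new = np.linspace(0.0, 1.0, n_out)
    return np.column_stack([np.interp(s_new, s, path[:, c]) for c in range(path.shape[1])])
```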
This process was repeated for various parameters, including the number of steps in the scanning trajectory and the direction of the scan (parallel or perpendicular to the cracks of the palette), and for each of the 7 cracks of the palette. The scanning speed was set to the optimal value found from scanning the cylinder head gasket (the determination of this value is described later in this report), and the scanning area and palette position were kept constant. These parameters are detailed in Table 3.3.

Parameters for the scans of the crack palette
Speed factor      4
Number of steps   25, 50, 100, 200, 300, 400, 500, 1000
Direction         Perpendicular (0) or parallel (1) to the crack direction
Scanning area     97 x 226 mm²

Table 3.3: The parameters used to perform a total of 16 different scans.

Scans with fewer than 25 scanning steps were also conducted, but they turned out to be unusable by the path-generating algorithm and were excluded from this analysis. Two of them are shown in Figure 3.22.

(a) N steps = 5, direction = Parallel (b) N steps = 10, direction = Perpendicular
Figure 3.22: Unusable scans due to their low number of scanning steps.
To measure the quality of the generated paths, they were compared to the "true" paths coming from the SVG file of the testing palette. They were translated and scaled with an affine transformation to the location of the crack in the real-world coordinate frame, by matching the four corners of each crack. The distance between the two trajectories is the mean Lock-step Euclidean distance, which represents the root mean squared distance between all pairs of corresponding points in the paths [37]:

Eu(A, B) = \sqrt{\sum_{i=1}^{n} dist_2^2(a_i, b_i)}

3.6 A repeatability test to make sure that the user has a limited influence on the results

To make sure that the manual steps of the process do not influence the results, the crack analysis of the palette described in the previous section was repeated ten times. The same user (who is the writer of this report) repeatedly selected the sample flat area for the kernel size determination and the overall location of each crack, and, once the image processing was over, removed the potential errors in the crack identification. The total manual processing time was saved, and the Lock-step Euclidean distances between the 70 computed trajectories and their "true" counterparts generated from the SVG file of the crack were calculated. The 70 crack analyses (10 times 7 cracks on each scan) were done in a row on the same day to limit the variations in the user's attention. The same scan was used for all of them, done with a speed factor of 4, 100 scanning steps and with its scanning direction perpendicular to the crack (meaning that the scanning lines are perpendicular to the direction of the crack).
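For reference, a small sketch of the lock-step comparison used in Sections 3.5 and 3.6 is given below. It assumes the generated and true paths already contain the same number of points (400 here) in corresponding order, and shows both the summed-square form of the equation above and the per-point root mean square reading described in the text.

```python
# Hedged sketch of the lock-step Euclidean comparison between a generated crack path
# and its "true" SVG counterpart; both paths must have equal length and matching order.
import numpy as np

def lockstep_euclidean(path_a, path_b):
    """Square root of the summed squared distances between corresponding points."""
    d = np.linalg.norm(path_a - path_b, axis=1)
    return float(np.sqrt(np.sum(d ** 2)))

def rms_lockstep(path_a, path_b):
    """Root mean squared distance between corresponding points (per-point reading)."""
    d = np.linalg.norm(path_a - path_b, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```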
4 Results and Discussion

4.1 Quality assessment of the laser cut parts

In Section 3.3, a "crack palette" was laser cut in order to test the accuracy of the scanner. After the making of this component, photographs of each crack were taken and processed to generate point clouds of the real part. The point cloud generated from the SVG file used for the laser cutting and the clouds generated from photographs of the laser cut part have been superimposed as seen in Figure 3.10, by manually selecting the four corners of each crack. To measure how well the real part corresponds to its original file, the mean distances between points of each cloud for each crack were calculated and are plotted in Figure 4.1. The individual results as well as the number of points in the clouds are detailed in Table 4.1. As a reminder for the reader, the formula used throughout this report to calculate the mean distance between two point clouds A and B is:

dist(A, B) = \frac{1}{n_A} \sum_{a \in A} \min_{b \in B} dist(a, b)

Crack Number                   0      1      2      3      4      5      6
Mean Distance (mm)             0.781  0.742  0.698  0.757  0.621  0.418  0.407
Standard Deviation (mm)        0.551  0.561  0.535  0.563  0.447  0.279  0.174
Coefficient of Variation (%)   70.6   75.6   76.7   74.4   71.9   66.7   42.8
No. of Points                  3307   3283   3302   3372   3307   2836   1720

Table 4.1: The results of the point cloud superposition.
Figure 4.1: Mean distances between the points of the SVG file point cloud and the point clouds generated from the photographs.

The results show a slight downward trend in the mean distance the thinner the cracks get. This can simply be explained by the fact that the thinnest cracks are made of fewer points, so large errors are less likely to be present. The coefficients of variation are also high (> 50%), meaning that there is a huge spread in the magnitude of the distances. In other words, there are both points that are very far apart and points that are at almost exactly the same positions on the two clouds. Finally, a mean distance of 0.57 millimetres over all cracks between the intended and actual part is a huge error considering the widest crack is only 2.22 millimetres wide. It is however necessary to remember that this comparison is done between a point cloud that has been generated from an SVG file and another one generated from photographs of the laser-cut parts. Several factors may explain this difference:

• The conversion of the SVG file into a point cloud was done by iterating through each "path" of the file and generating points along them. Continuous lines were therefore transformed into a discrete set of points, implying that some information was lost.

• The part was laser cut on an Epilog laser cutter, which has a usual accuracy of ± 0.25 millimetres.
  • 36. • The photographs of the palette were taken on a makeshift rig, that lacked proper lighting and uniform background. • The camera used to take the photographs was not specially designed for this kind of machine vision application. A telecentric lens could have been used to avoid distortion and obtain more precise measurements. This also led to the vertical surfaces of the cracks being visible in the photographs, even though pictures were taken by aligning each crack straight under the camera lens. These vertical surfaces are visible in Figure 4.2. • The image binarization was done manually by selecting the top surface of the palette, but due to improper lighting, errors may have been made in this process. Glares and areas that are not part of the top surface may have been wrongly selected. This kind of misleading feature is also illustrated in Figure 4.2. • When selecting the four corners of a crack on the cloud generated from the photograph to warp it in the coordinate frame of the SVG point cloud, there was no perfect point making a right angle at the edges of the crack. This led to an incorrect transformation that did not exactly match the two point clouds together, as seen in Figure 4.3. On the right-hand side of this Figure, the consequences of this misalignment are visible. Figure 4.2: Due to the setup used to take the pictures, unwanted glares and surfaces were visible, that can alter the quality of the binarization. 35
  • 37. Figure 4.3: Details of the consequent offset between the SVG point cloud and the one generated from the photographs. When looking at the point clouds in detail, such as on the right-hand side of Figure 4.3, the offset between the clouds seems to be consistently towards the left side, then switches to the right side once the middle point of the crack has been reached. The point cloud also shows that the mean path of the SVG and actual cracks are aligned. That is why the crack palette, as well as the "true" paths generated from the SVG file, will be considered valid for the testing to be done in the rest of this project. Also, in the context of crack repairing, the position of the scanned crack needs to be accurate on the scanned real-world component, not on the CAD file used for its making. 4.2 The impact of the scanning speed on the point cloud quality A total of 21 scans (three scans for each of the seven speed factors) plus one baseline scan at a speed factor of 0.5 have been done of the cylinder head gasket made in section 3.3. Some of the resulting 3D point clouds are visible in Figure 4.4. The average distances between the closest points between each scan and the baseline were calculated, and the resulting mean distances were averaged for the three scans at each speed. The average of all distances for all speeds was also calculated. These results are displayed in Figure 4.5 along with the mean duration of the scans for each speed. 36
  • 38. (a) speed_factor = 0.5 (b) speed_factor = 2 (c) speed_factor = 4 (d) speed_factor = 6 Figure 4.4: The point clouds generated from some of the scans conducted for the analysis of the influence of the speed of the robot. 37
Figure 4.5: The mean distances between the baseline scan and the scans at various speeds (in blue), as well as the mean duration of the scan for each speed (in red).

In Figure 4.4, the number of points in each of the point clouds is visible, and even though the only difference between the scans is the robot's speed, this number is different for each of them. This is because the scanner has a resolution of 2048 points per scan but does not register all of them at every step. In most cases, this is because the laser line fell out of the effective scanning area of the scanner, in one of the holes of the table visible in Figure 4.4 for instance. For the scan in Figure 4.4a, an average of 1944 points instead of 2048 were registered for each of the 500 scanned lines.

The data gathered shows that no matter the scanning speed of the robot, the distance between the resulting scan and the baseline is around 4 millimetres. The standard deviations on this plot also give information about the repeatability of the scans: the mean distance to the baseline varies among the scans, even with the same set of parameters (i.e. the same speed factor). However, this variation stays relatively small (± 0.5 mm). Finally, the speed factor has a huge influence on the scan duration: the total scan duration goes from 235 seconds to 103 seconds when the speed factor goes from 0.5 to 4, but only from 86.6 to 85.9 seconds between speed factors of 5 and 6.

When looking at the resulting point clouds with a 3D viewer, it is visible on the high-speed
scans that the scanning lines are faulty and not equally distributed, as seen in Figure 4.4d. For scans done at lower speeds, the scanning lines seem more equally distributed and give a better view of the part, as seen in Figure 4.4b. However, this difference does not seem to affect the chosen metric in Figure 4.5, where the difference between the mean distances at speed factors of 2 and 6 is less than 0.1 millimetres.

In addition to this qualitative difference, speed factors above 4 created a huge amount of jerk during the scanning, which translated into audible and visible vibration of the robot and its workspace. This jerk was caused by the high acceleration and deceleration between each scanning step. It is detrimental to the robot and its components, so it is strongly advised to limit its speed. Furthermore, the duration of the scan plateaus at speed factors higher than 4. This is certainly because the distance between the scanning steps, at each of which the robot makes a full stop, is relatively small (1.6 millimetres for these scans). Over such a short distance, allowing the robot to speed up and slow down faster does not have any influence on the total duration beyond a certain point.

All in all, this section showed that multiple scans of the same part output different point clouds, with a mean distance of 4 millimetres between their closest points. This does not invalidate them, as all of the points still map the part and its geometry correctly. For the safety of the robot and its workspace, and to perform the fastest scans, a speed factor of 4 will be used for the remaining experiments on the robot.
N_steps   Width steps (mm)   Direction   Speed factor   Number of points   Duration (s)
25        9.04               0           4              148366             94.6
25        3.88               1           4              240753             116.0
50        4.52               0           4              296172             125.0
50        1.94               1           4              479523             155.9
100       2.26               0           4              590785             172.6
100       0.97               1           4              960806             207.3
200       1.13               0           4              1181248            238.2
200       0.48               1           4              1920256            315.4
300       0.75               0           4              1772212            300.0
300       0.32               1           4              2880689            408.5
400       0.56               0           4              2362771            360.2
400       0.24               1           4              3840257            501.5
500       0.45               0           4              2953144            449.0
500       0.19               1           4              4801046            674.7
1000      0.23               0           4              5906296            850.4
1000      0.10               1           4              9600778            1671.7

Table 4.2: The 16 scans done for this analysis, with their duration in seconds and the width between scanning lines in millimetres.
  • 42. (a) N steps = 50, direction = Perpendicular (b) N steps = 50, direction = Parallel (c) N steps = 100, direction = Perpendicular (d) N steps = 100, direction = Parallel Figure 4.6: The point clouds generated from some of the scans conducted for the analysis of the crack palette. 41
Before conducting any analysis on the calculated crack path of each of these scans, it is necessary to note that setting the number of steps instead of their width was a poor choice. Indeed, it is not fair to compare two scans done with 100 steps and different scanning directions, because the scanning area is a 97 x 226 mm² rectangle. This makes the width of the steps 2.26 mm in one direction and 0.97 mm in the other. This is visible in Figures 4.6a and 4.6b: the scan is not as fine when it is done perpendicularly to the crack direction, while the scan done parallelly, with the same number of steps, is more detailed. This fact needs to be kept in mind when comparing scans with different scanning directions. An equivalent metric to the width between the scanning lines in millimetres is their resolution, in lines per millimetre. Both metrics are equivalent, but using the width allows us to easily compare this data against the width or length of a crack.

As stated in the Methodology section, generated crack paths were compared to the original paths by calculating their mean Lock-step Euclidean distances.

4.3.1 The case of crack number 6

Crack number 6, being almost nonexistent, will be analysed first. While generating this crack's path, it was noticed that the results were almost always visibly wrong. The resulting path of the scan made with 200 steps, parallel to the cracks, is shown in Figure 4.7. Although the crack itself was partially identified (in red on the top half of the figure), the path-generating algorithm failed to compute a path similar to the "true" path in green. This may be caused by the way this algorithm works, averaging the coordinates of the 30 closest points of the crack to a moving plane to generate the path (see Figure 3.20). This is an issue, as only a total of 47 points were classified as "crack" with this scan. This explains why the endpoint is located before the actual end of the crack. Because the number of points selected to calculate a path point was set to 30 for this whole analysis, all of the paths generated for crack number 6 will be incorrect. This crack will be excluded from the rest of the data analysis.
Figure 4.7: The path generated by the analysis (in red) for crack number 6 is visibly wrong.

4.3.2 Prediction of the scan duration

The duration of each scan is plotted in Figure 4.8. The shortest scan took 94.6 seconds, for a perpendicular trajectory and 25 scanning steps, while the longest took 1671.7 seconds, for a parallel scan and 1000 scanning steps. The durations seem to follow a linear trend, so two linear regressions were made, one for each scanning direction. The results of the two regressions are detailed in Table 4.3. These values allow us to predict the approximate duration of a scan for a set number of steps, for this scanning area size only. The slope of the parallel scans is greater than that of the perpendicular scans, and their R² score is further from 1. This may be explained by the fact that the parallel scans require more back-and-forth motion of the robot due to its Z-shaped scanning trajectory, as illustrated in Figure 3.13.

To predict the scanning duration in any case, this kind of regression should be done by looking at the duration of a scan against the width between scanning lines. These widths were calculated by dividing the scanned length, which depends on the scanning direction (either 97 mm or 226 mm), by the number of steps of each scan. The duration of the scan against the width between scanning lines is plotted in Figure 4.9. It seems to follow a 1/x trend, so the relation between the scan duration and 1/width_steps was plotted and a linear regression was conducted. On this new plot (Figure 4.10),
the difference between the two scanning directions almost disappeared, and the relation between 1/width_steps and the scan duration, whatever the scanning direction, was found. It is described in Table 4.4. This regression is only valid for the current scanning area size, and more testing should be done to find the influence of the scanning area size on the scan duration.

Figure 4.8: The duration in seconds of the scans against their number of steps, for parallel and perpendicular scanning trajectories. Linear regressions were computed, with R² values of respectively 0.9841 and 0.9985.

Direction       Slope   Intercept   R² Value   Standard Deviation
Parallel        1.550   7.411       0.9841     0.1141
Perpendicular   0.757   80.473      0.9985     0.0155

Table 4.3: The results of the two linear regressions for the scan durations.
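As a sketch of how the figures in Table 4.3 can be reproduced, the snippet below fits the durations of the perpendicular scans against their number of steps with scipy. The values are copied from Table 4.2; only the choice of fitting routine is an assumption.

```python
# Sketch of the duration regression: fit scan duration against the number of steps for
# the perpendicular scans of Table 4.2. The fitted slope and intercept should come out
# close to the "Perpendicular" row of Table 4.3 (0.757 and 80.473).
import numpy as np
from scipy.stats import linregress

n_steps   = np.array([25, 50, 100, 200, 300, 400, 500, 1000])
durations = np.array([94.6, 125.0, 172.6, 238.2, 300.0, 360.2, 449.0, 850.4])  # seconds

fit = linregress(n_steps, durations)
print(f"duration ~ {fit.slope:.3f} * n_steps + {fit.intercept:.3f}  (R² = {fit.rvalue**2:.4f})")

# predicted duration of a perpendicular scan with 150 steps over the same area
print("150 steps ->", fit.slope * 150 + fit.intercept, "s")
```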
Figure 4.9: The scan duration in seconds against the width between scanning lines in millimetres. It seems to follow a 1/x trend.

Figure 4.10: The scan duration in seconds against the inverse of the scanning line width (mm⁻¹). Using this data instead of the number of steps removed some of the differences caused by the two different directions.

Direction   Slope     Intercept   R Value   Standard Deviation
Both        147.697   68.479      0.9730    9.043

Scan Duration (s) = 147.697 \times \frac{1}{Width_{steps} (mm)} + 68.479

Table 4.4: The results of the last linear regression. A relation between the scan duration and the width of the steps has been found.

4.3.3 Quality of the generated crack paths

Figure 4.11 shows the generated paths of the first six cracks. They come from a fine scan, comprised of 300 scanned lines in the parallel direction, resulting in a distance of 0.32 millimetres between each scanning line. Paths were generated for each of the 16 scans, and their Lock-step Euclidean distances (LSED) were calculated and averaged by scan. This gave a total of 16 data points, representing the accuracy of each scan over a range of crack widths (0.31 to 2.22 mm, with a mean width of 1.06 mm).
  • 47. Figure 4.11: The resulting crack path (in red), the true path (in green) and the outline of the cracks 0 to 5 (in blue) for a parallel scan and a width steps of 0.32 mm. These points are represented in Figure 4.13 against the width of the steps of each scan. For widths above 1.2 millimetres, the mean LSED goes beyond 1 millimetre, which means that the generated paths certainly go out of the crack’s limit multiple times. One of them is visible in Figure 4.12. Figure 4.12: An obviously wrong generated crack path (in red) on crack number 4, which was obtained from a coarse parallel scan (Width steps = 3.88 mm). 46
  • 48. Figure 4.13: The graph representing the quality of the generated scans in term of mean LSED (mm) against the width of the scanning steps (mm). Figure 4.14: A zoom in of the rectangular red area of Figure 4.13, showing the scans with the shortest width between steps and the smallest mean LSED. A zoom in the plot of the LSED is made in Figure 4.14. This plot shows that for a scanning width of less than 1.2 millimetres, the mean LSED of the scans falls below 0.4 millimetres, thus they are fairly accurate. Also, a clear distinction is visible between parallel and perpendicular scans. Even when fairly compared by the width of their steps and not their number, the mean LSED of the parallel scans averages at 0.16 millimetres, while the perpendicular scans do at 0.27 millimetres. Finally, the minimum LSED seems to plateau at 0.16 millimetres, as reducing the width between steps below 1 millimetre did not significantly reduce the LSED. To understand the difference between the quality of the parallel and perpendicular scans, seeing the resulting 3D point clouds can help. In Figure 4.15, two scans of similar width steps done in the two directions are shown, as well as the true path from the SVG file. The mean LSED already tell that the parallel scans lead to smaller LSED, and the look of the cloud also shows that the crack is more visible and clearer on the parallel scan. The edges of the crack can easily be missed on the perpendicular scans depending on the location of the scanning lines, while on the parallel scans, it is certain that the edges of the crack will be caught by the high resolution (2048 points) of the line. For future scans, the scanning trajectories should be kept as parallel as possible to the crack, so that the straight scanning lines are "cut" by the defect and identify it correctly. 47
Figure 4.15: Two scans side by side, with relatively close width steps and two different scanning directions. The true path is represented in green in the middle of the figure.

Figure 4.14 showed that no improvements were made on the mean LSED below a certain point. This may be caused by the way the data processing is conducted, and by the fact that the generated crack paths were compared to the SVG file of the crack palette. These two factors may have limited our ability to obtain an LSED of zero. We can infer that the maximum distance between scanning lines should be less than the approximate width of the crack. Figure 4.16 shows the average mean LSED of the finest scans, those with a scanning width of less than 1.2 millimetres, detailed for each crack of the crack palette. Crack number 5, the narrowest of them, shows the worst results. Once again, the parallel scans have better results than the perpendicular ones.

The last two plots illustrated that for cracks in this range of width (0.31 to 2.22 mm), a scan done parallel to the crack and with a width of approximately 1.2 millimetres between scanning lines gave the best trade-off between accuracy and scan duration. Doing finer scans would only result in longer scans with the same accuracy. A scan with these parameters would take approximately 211 seconds to complete according to Section 4.3.2.
  • 50. Figure 4.16: The LSED (mm) for the scans with a scanning width of less than 1.2 mm, averaged for each crack of the palette. 4.4 Influence of the manual steps on the resulting accuracy This section will present the results of the repeatability test done to make sure that the user, when doing the manual steps of the crack analysis, does not greatly make the results fluctuate. For each of the ten analyses and each crack, the mean LSED between the points of the calculated crack path and the original "true" path coming from the SVG file was calculated. These 70 means were averaged by crack, and the corresponding standard deviations and coefficients of variation were calculated as well. All of these results are shown in Table 4.5. The mean LSED to the correct path is less than 0.6 millimetres for all cracks except for crack number 6. As mentioned before, this crack is almost nonexistent and can not be used to assess the accuracy of the crack detection algorithm. However, for all of the cracks, the coefficient of variation of the mean LSED of the calculated path to the true path is on average 4.65%. This means that the variations in the path caused by the manual steps of the processing are minimal. 49
Crack Number                   0      1      2      3      4      5       6
LSED to True Paths (mm)        0.438  0.385  0.372  0.389  0.219  0.561  12.941
Standard Deviation (mm)        0.021  0.023  0.026  0.014  0.006  0.024   0.518
Coefficient of Variation (%)   4.86   5.90   7.03   3.60   2.89   4.27    4.00

Table 4.5: The results of the repeatability test.

4.5 Resulting crack paths on the cylinder head gasket

This section shows the results of the crack scanning process and path computation on the cylinder head gasket made in Section 3.3. Three cracks have been processed: they are circled in red in Figure 4.17, and the resulting cracks and crack paths are shown in Figure 4.18. Even when the crack is not identified exactly, the user can correct the resulting point cloud during the manual selection, and the mean path remains reasonably accurate.

Figure 4.17: Scanning area = 154.9 x 199.2 mm², width between steps = 1.549 mm, number of steps = 100, scanning direction = along x, speed factor = 4.
Figure 4.18: On the left-hand side, the results of the crack-generating algorithm and the user's selection (with the crack in white and the parts that are not the crack in yellow). On the right-hand side, the resulting crack path in blue, overlaid on the scanned point cloud (in red).
5 Conclusion

5.1 Summary

In this project, a robotic system capable of automatically scanning a designated area was designed and implemented. It works by going over the whole area to be scanned and stopping a set number of times to save lines of 2048 3D points. This solution generates point clouds of the scanned components, which can then be passed through an algorithm designed to automatically identify the location of cracks and generate their mean paths (2).

To assess the quality of this solution, two test parts, a "crack palette" showing seven cracks of varying widths and a cylinder head gasket with several added cracks, were laser cut on an Epilog laser cutter from a 5 millimetre thick plastic sheet. The quality of the crack palette was checked by comparing point clouds generated from pictures of the real part against a point cloud generated from the Scalable Vector Graphics file used to make it. This comparison returned a mean distance of 0.5 millimetres between the two, which may derive from various factors, including the setup used to take the photographs and their processing into point clouds (1).

The impact of the speed of the robot on the quality of the scan was tested by performing scans of the same component at seven different speeds, with all other parameters kept constant. The resulting distances between each generated cloud and a baseline showed no major variation between a slow and a fast scan, apart from a large difference in scan duration. A maximum "speed factor" of 4 was chosen to keep the scanning time short while limiting the jerk and vibration that higher speeds caused in the robot and its workspace, which can damage the robot over time (3).

The two remaining parameters to investigate were the scanning direction relative to the crack and the scanning resolution. Sixteen scans with different parameters were conducted, and the mean Lock-Step Euclidean distances between the generated crack paths and their "true" counterparts generated from the Scalable Vector Graphics file were calculated. The results showed that a width
between scanning lines of less than 1.2 millimetres, which is the mean width of the cracks of the palette, was necessary to obtain satisfactory accuracy on the generated crack paths. The results also showed that scanning parallel to the cracks, so that the scanning lines are "cut" by the crack in the point cloud, led to better crack path generation than scanning perpendicularly (4).

Finally, the human influence on the manual steps of the crack identification process was tested by having the same user analyse one scan ten times. The results showed a coefficient of variation of approximately 5%, which indicates that the variation caused by the user is minimal (5).

5.2 Limitations

This project was limited to the scanning of flat components from a top-view perspective. For the design of a proof of concept, this simplified the scanning trajectory generation and allowed the focus to remain on point cloud processing and crack identification. However, real parts are very likely to present curved surfaces, which would force the system to adapt its altitude during scanning to keep the part within the scanning range. Cracks could also be present on vertical surfaces of components, which would force the robot to scan from different angles. A more adaptive trajectory generation algorithm would need to be implemented to analyse such defects.

Our implementation can output the point cloud of the identified crack and calculate its mean path, but robotic welding may need more information, such as the point normals or details of the inside and depth of the crack. Gathering this information may require adding sensors to the system, such as cameras or touch probes. Such additional sensors may also be required for parts with complex shapes or edges that a laser scanner has a hard time surveying.

Finally, the data processing implemented relies on the input of a user to select the broad location of the crack and check for potential errors. This means that the robot can still work on its own, but a remote operator needs to conduct the manual steps of the analysis.
5.3 Contribution

This system was designed as a proof of concept of crack identification on unknown parts. Its novelty rests in the fact that it does not require any prior knowledge about the component, except for its overall location, and does not require heavy calibration. While automatic laser scanning devices already exist, in the quality control sector for instance, they are often set up in an ideal configuration to optimise their accuracy, require a specific calibration routine, and only work on a single task.

This system paves the way toward autonomous inspection and even self-repairing robots. Such robots could be used in hazardous environments, such as a nuclear power plant, to carry out inspections without the need for a worker to be present in person. Another application is round-the-clock inspection, for example of hospital plant rooms that require constant monitoring. Having human workers permanently surveying these essential rooms is tiring for them and expensive for the hospital, whereas robots could be designed to patrol these locations and make sure that no defects have appeared on the systems. They could also be equipped with more sensors, such as liquid detectors for leak detection or microphones to analyse the patterns of the machines' vibrations.

Finally, the design of this kind of system is a step towards more environmentally friendly practices. Repairing parts is often overlooked in favour of the simplicity of ordering new ones, but that convenience comes with costs and a negative environmental impact. Repairing a part allows it to be reused and avoids throwing it away, which matters because metal recycling, while potentially sustainable, still has many weaknesses [43].

5.4 Future Work

The designed system proved to work on the parts made for testing purposes, but some modifications and further testing are required to bring it closer to commercial availability. Its main limitation is that scanning is restricted to the top view of flat components; tuning the scanning trajectory generation and crack identification processes to scan any crack from any angle would be the next logical step. Rethinking the scanning trajectory to take the shape of an S instead of a Z would also decrease the total scanning duration (a sketch illustrating this change is given at the end of this section), and further investigation of the
relationship between the scanning area, the scanning resolution and the scan duration would allow a better scan duration model to be built. Furthermore, the scanning trajectories currently follow either the x or the y direction of the world frame, but scanning at an arbitrary angle may be required if the user wants to keep the scanning lines as perpendicular as possible to the scanned crack to maximise accuracy.

Another important aspect of the scanning process worth investigating is the scanning frequency. It is currently limited by the fact that the robot is brought to a full stop at each scanning position to save the 3D points. Saving the 3D points continuously while the robot moves along its scanning trajectory may decrease the scan duration substantially, but vibrations and accuracy would need to be closely monitored to avoid errors.

Once the 3D point cloud of the part has been saved, the crack path generating algorithm is used to output the mean path of the crack. This algorithm was designed within the scope of this project, and many of its parameters were set empirically. A large-scale analysis with multiple parts to scan and various crack profiles could be conducted to find the best set of parameters for optimal crack path generation.

Concerning potential additional work on the 3D point clouds, superimposing multiple clouds acquired with different scanning directions may yield a more accurate point cloud, at the cost of increased processing time and a higher risk of errors due to poor superposition and incorrect point registration. Once a finer point cloud is created, it would also be possible to generate a 3D model of the part instead of keeping the point cloud. Working with 3D models allows easier comparison with the original part file, but requires additional software and more variables to account for.
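To make the proposed change of scanning pattern concrete, the sketch below generates the pass endpoints for a Z-shaped raster (every pass in the same direction, with a return move each time) and an S-shaped, boustrophedon raster (alternating directions), and compares the total travel distance over a rectangular area. The area size, the number of passes, the function names and the assumption that passes are straight lines in the part plane are all illustrative choices made here, not values taken from the project.

    import numpy as np

    def pass_lines(width, height, n_passes, boustrophedon):
        """Endpoints of each scanning pass over a width x height area."""
        ys = np.linspace(0.0, height, n_passes)
        lines = []
        for i, y in enumerate(ys):
            if boustrophedon and i % 2 == 1:
                lines.append(((width, y), (0.0, y)))   # odd passes reversed: right to left
            else:
                lines.append(((0.0, y), (width, y)))   # default direction: left to right
        return lines

    def travel_distance(lines):
        """Total XY distance travelled: the passes plus the transit moves between passes."""
        total = 0.0
        pos = np.array(lines[0][0], dtype=float)
        for start, end in lines:
            start, end = np.array(start, dtype=float), np.array(end, dtype=float)
            total += np.linalg.norm(start - pos)   # transit to the start of the pass
            total += np.linalg.norm(end - start)   # the scanning pass itself
            pos = end
        return total

    # Arbitrary illustrative area and pass count (not the project's values):
    z_dist = travel_distance(pass_lines(150.0, 200.0, 10, boustrophedon=False))
    s_dist = travel_distance(pass_lines(150.0, 200.0, 10, boustrophedon=True))
    print(f"Z pattern travel: {z_dist:.0f} mm, S pattern travel: {s_dist:.0f} mm")

In practice the transit moves back to the start of each pass could be run at travel speed rather than scanning speed, so the actual saving in duration also depends on the robot's speed settings, not only on the geometry.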
A Appendix

Parts of the Python scripts and the RoboDK simulation are available in the following GitHub repository. The scripts are not directly usable as they are, because the 3D point clouds are not saved in the repository.

legentil42, 'Laser scanning for crack detection'. Sep. 18, 2022. Accessed: Sep. 18, 2022. [Online]. Available: https://github.com/legentil42/Laser-scanning-master
Bibliography

[1] M. Avila, S. Begot, F. Duculty, and T. S. Nguyen. "2D image based road pavement crack detection by calculating minimal paths and dynamic programming". In: 2014 IEEE International Conference on Image Processing (ICIP). Paris, France: IEEE, Oct. 2014, pp. 783–787. ISBN: 978-1-4799-5751-4. DOI: 10.1109/ICIP.2014.7025157. Available from: http://ieeexplore.ieee.org/document/7025157/ [Accessed Sept. 18, 2022].

[2] F. Blais and J. A. Beraldin. Recent Developments in 3D Multi-modal Laser Imaging Applied to Cultural Heritage. Machine Vision and Applications [online]. 17.6 (Dec. 2006), pp. 395–409. ISSN: 1432-1769. DOI: 10.1007/s00138-006-0025-3. Available from: https://doi.org/10.1007/s00138-006-0025-3 [Accessed June 16, 2022].

[3] F. Caiazzo, F. Curcio, G. Daurelio, and F. C. Minutolo. Laser cutting of different polymeric plastics (PE, PP and PC) by a CO2 laser beam. Journal of Materials Processing Technology [online]. 159.3 (Feb. 2005), pp. 279–285. ISSN: 09240136. DOI: 10.1016/j.jmatprotec.2004.02.019. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0924013604002109 [Accessed Sept. 11, 2022].

[4] J. Chow, T. Xu, S.-M. Lee, and K. Kengkool. Development of an Integrated Laser-Based Reverse Engineering and Machining System. The International Journal of Advanced Manufacturing Technology [online]. 19.3 (Feb. 2002), pp. 186–191. ISSN: 1433-3015. DOI: 10.1007/s001700200013. Available from: https://doi.org/10.1007/s001700200013 [Accessed June 16, 2022].

[5] L. Ciocca and R. Scotti. CAD-CAM generated ear cast by means of a laser scanner and rapid prototyping machine. The Journal of Prosthetic Dentistry [online]. 92.6 (Dec. 2004), pp. 591–595. ISSN: 00223913. DOI: 10.1016/j.prosdent.2004.08.021. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0022391304005542 [Accessed June 16, 2022].
[6] D. De Becker, J. Dobrzanski, L. Justham, and Y. Goh. A laser scanner based approach for identifying rail surface squat defects. Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit [online]. 235.6 (July 2021), pp. 763–773. ISSN: 0954-4097, 2041-3017. DOI: 10.1177/0954409720962252. Available from: http://journals.sagepub.com/doi/10.1177/0954409720962252 [Accessed June 16, 2022].

[7] J. Digne. "Similarity based filtering of point clouds". In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. ISSN: 2160-7516. June 2012, pp. 73–79. DOI: 10.1109/CVPRW.2012.6238917.

[8] Y. Duan and C. Yang. Low-complexity Point Cloud Filtering for LiDAR by PCA-based Dimension Reduction. (), p. 7.

[9] Free STL file 5 cylinder head gasket - 3D printer design to download - Cults. Available from: https://cults3d.com/en/3d-model/art/5-cylinder-head-gasket [Accessed Sept. 5, 2022].

[10] J. Glud, J. Dulieu-Barton, O. Thomsen, and L. Overgaard. Automated counting of off-axis tunnelling cracks using digital image processing. Composites Science and Technology [online]. 125 (Jan. 2016). DOI: 10.1016/j.compscitech.2016.01.019.

[11] C. Gunkel, A. Stepper, A. C. Müller, and C. H. Müller. Micro crack detection with Dijkstra's shortest path algorithm. Machine Vision and Applications [online]. 23.3 (May 2012), pp. 589–601. ISSN: 1432-1769. DOI: 10.1007/s00138-011-0324-1. Available from: https://doi.org/10.1007/s00138-011-0324-1 [Accessed Sept. 18, 2022].

[12] P. Hong-Seok and T. U. Mani. Development of an Inspection System for Defect Detection in Pressed Parts Using Laser Scanned Data. Procedia Engineering [online]. 69 (2014), pp. 931–936. ISSN: 18777058. DOI: 10.1016/j.proeng.2014.03.072. Available from: https://linkinghub.elsevier.com/retrieve/pii/S187770581400318X [Accessed June 16, 2022].

[13] Y. Hosni and L. Ferreira. Laser based system for reverse engineering. Computers & Industrial Engineering [online]. 26.2 (Apr. 1994), pp. 387–394. ISSN: 03608352. DOI: 10.1016/0360-8352(94)90072-8. Available from: https://linkinghub.elsevier.com/retrieve/pii/0360835294900728 [Accessed June 16, 2022].

[14] C. Hu, L. Kong, and F. Lv. Application of 3D laser scanning technology in engineering field. E3S Web of Conferences [online]. 233 (2021). Ed. by L. Zhang, S. Defilla, and W. Chu, p. 04014. ISSN: 2267-1242. DOI: 10.1051/e3sconf/202123304014. Available from: https://www.e3s-conferences.org/10.1051/e3sconf/202123304014 [Accessed June 16, 2022].

[15] M. C. Israel and R. G. Pileggi. Use of 3D laser scanning for flatness and volumetric analysis of mortar in facades. Revista IBRACON de Estruturas e Materiais [online]. 9 (Feb. 2016). IBRACON - Instituto Brasileiro do Concreto, pp. 91–122. ISSN: 1983-4195. DOI: 10.1590/S1983-41952016000100007. Available from: http://www.scielo.br/j/riem/a/RK6DFYH5XBjPnFWGqMDnqMp/?lang=en [Accessed June 16, 2022].

[16] B. Kleiner, C. Munkelt, T. Thorhallsson, G. Notni, P. Kühmstedt, and U. Schneider. Handheld 3-D Scanning with Automatic Multi-View Registration Based on Visual-Inertial Navigation. International Journal of Optomechatronics [online]. 8.4 (Oct. 2014), pp. 313–325. ISSN: 1559-9612, 1559-9620. DOI: 10.1080/15599612.2014.942931. Available from: http://www.tandfonline.com/doi/abs/10.1080/15599612.2014.942931 [Accessed July 1, 2022].

[17] K. H. Lee and H.-p. Park. Automated inspection planning of free-form shape parts by laser scanning. Robotics and Computer-Integrated Manufacturing [online]. 16.4 (Aug. 2000), pp. 201–210. ISSN: 0736-5845. DOI: 10.1016/S0736-5845(99)00060-5. Available from: https://www.sciencedirect.com/science/article/pii/S0736584599000605 [Accessed June 16, 2022].

[18] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk. "The digital Michelangelo project: 3D scanning of large statues". In: Proceedings of the 27th annual conference on Computer graphics and interactive techniques. SIGGRAPH '00. USA: ACM Press/Addison-Wesley Publishing Co., July 2000, pp. 131–144. ISBN: 978-1-58113-208-3. DOI: 10.1145/344779.344849. Available from: https://doi.org/10.1145/344779.344849 [Accessed June 16, 2022].
[19] J. Liu, Q. Zhang, J. Wu, and Y. Zhao. Dimensional accuracy and structural performance assessment of spatial structure components using 3D laser scanning. Automation in Construction [online]. 96 (Dec. 2018), pp. 324–336. ISSN: 0926-5805. DOI: 10.1016/j.autcon.2018.09.026. Available from: https://www.sciencedirect.com/science/article/pii/S0926580518301699 [Accessed June 16, 2022].

[20] S. H. Mian and A. Al-Ahmari. Comparative analysis of different digitization systems and selection of best alternative. Journal of Intelligent Manufacturing [online]. 30.5 (June 2019), pp. 2039–2067. ISSN: 1572-8145. DOI: 10.1007/s10845-017-1371-x. Available from: https://doi.org/10.1007/s10845-017-1371-x [Accessed June 16, 2022].

[21] Micro-Epsilon-Messtechnik. "Operating Instructions for scanCONTROL 30xx". In: 2019.

[22] A. Mohan and S. Poobal. Crack detection using image processing: A critical review and analysis. Alexandria Engineering Journal [online]. 57.2 (June 2018), pp. 787–798. ISSN: 11100168. DOI: 10.1016/j.aej.2017.01.020. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1110016817300236 [Accessed Sept. 18, 2022].

[23] A. Nurunnabi, D. Belton, and G. West. "Robust Segmentation in Laser Scanning 3D Point Cloud Data". In: 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA). Dec. 2012, pp. 1–8. DOI: 10.1109/DICTA.2012.6411672.

[24] A. Nurunnabi, G. West, and D. Belton. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. Pattern Recognition [online]. 48.4 (Apr. 2015), pp. 1404–1419. ISSN: 00313203. DOI: 10.1016/j.patcog.2014.10.014. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0031320314004312 [Accessed June 16, 2022].

[25] H. Oliveira and P. L. Correia. Automatic Road Crack Detection and Characterization. IEEE Transactions on Intelligent Transportation Systems [online]. 14.1 (Mar. 2013), pp. 155–168. ISSN: 1558-0016. DOI: 10.1109/TITS.2012.2208630.

[26] H. Oliveira and P. Lobato Correia. "Identifying and retrieving distress images from road pavement surveys". In: 2008 15th IEEE International Conference on Image Processing. ISSN: 2381-8549. Oct. 2008, pp. 57–60. DOI: 10.1109/ICIP.2008.4711690.
[27] pptk - Point Processing Toolkit. Aug. 2022. Available from: https://github.com/heremaps/pptk [Accessed Sept. 11, 2022].

[28] K. M. Publishing. 3D Scanner Speeds Measurement of Power Plant Components. Nov. 2021. Available from: https://metrology.news/3d-scanner-speeds-measurement-of-power-plant-components/ [Accessed Sept. 13, 2022].

[29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. arXiv:1612.00593 [cs]. Apr. 2017. DOI: 10.48550/arXiv.1612.00593. Available from: http://arxiv.org/abs/1612.00593 [Accessed Sept. 18, 2022].

[30] H. D. Reidenbach, H. Warmbold, J. Hofmann, and K. Dollinger. "First Experimental Results On Eye Protection By The Blink Reflex For Laser Class 2". In: 2001.

[31] R. B. Rusu and S. Cousins. "3D is here: Point Cloud Library (PCL)". In: 2011 IEEE International Conference on Robotics and Automation. Shanghai, China: IEEE, May 2011, pp. 1–4. ISBN: 978-1-61284-386-5. DOI: 10.1109/ICRA.2011.5980567. Available from: http://ieeexplore.ieee.org/document/5980567/ [Accessed Sept. 18, 2022].

[32] G. Sansoni, M. Trebeschi, and F. Docchio. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation. Sensors [online]. 9.1 (Jan. 2009), pp. 568–601. ISSN: 1424-8220. DOI: 10.3390/s90100568. Available from: https://www.mdpi.com/1424-8220/9/1/568 [Accessed June 16, 2022].

[33] S. K. Sinha and P. W. Fieguth. Automated detection of cracks in buried concrete pipe images. Automation in Construction [online]. 15.1 (Jan. 2006), pp. 58–72. ISSN: 09265805. DOI: 10.1016/j.autcon.2005.02.006. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0926580505000452 [Accessed Sept. 18, 2022].

[34] S. Son, H. Park, and K. H. Lee. Automated laser scanning system for reverse engineering and inspection. International Journal of Machine Tools and Manufacture [online]. 42.8 (June 2002), pp. 889–897. ISSN: 08906955. DOI: 10.1016/S0890-6955(02)00030-5. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0890695502000305 [Accessed Sept. 17, 2022].
[35] C. Suchocki and A. Wasilewski. Geodetic surveys of Cliff shores with the application of scanning technology (Jan. 2009), pp. 93–100.

[36] A. M. A. Talab, Z. Huang, F. Xi, and L. HaiMing. Detection crack in image using Otsu method and multiple filtering in image processing techniques. Optik [online]. 127.3 (Feb. 2016), pp. 1030–1033. ISSN: 0030-4026. DOI: 10.1016/j.ijleo.2015.09.147. Available from: https://www.sciencedirect.com/science/article/pii/S0030402615012164 [Accessed Sept. 18, 2022].

[37] Y. Tao, A. Both, R. I. Silveira, K. Buchin, S. Sijben, R. S. Purves, P. Laube, D. Peng, K. Toohey, and M. Duckham. A comparative analysis of trajectory similarity measures. GIScience & Remote Sensing [online]. 58.5 (July 2021), pp. 643–669. ISSN: 1548-1603, 1943-7226. DOI: 10.1080/15481603.2021.1908927. Available from: https://www.tandfonline.com/doi/full/10.1080/15481603.2021.1908927 [Accessed Sept. 2, 2022].

[38] M. Tatarevic, B. Gapinski, and N. Swojak. The Use of Optical Scanner for Analysis of Surface Defects. In: DAAAM Proceedings. Ed. by B. Katalinic. Vol. 1. DAAAM International Vienna, 2019, pp. 0076–0085. ISBN: 978-3-902734-22-8. DOI: 10.2507/30th.daaam.proceedings.010. Available from: http://www.daaam.info/Downloads/Pdfs/proceedings/proceedings_2019/010.pdf [Accessed June 16, 2022].

[39] J. Tomasiak. The Use of Optical Methods for Leak Testing Dampers. Procedia Engineering [online]. 69 (2014), pp. 788–794. ISSN: 18777058. DOI: 10.1016/j.proeng.2014.03.055. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1877705814003014 [Accessed Sept. 17, 2022].

[40] Q. Wang, Y. Tan, and Z. Mei. Computational Methods of Acquisition and Processing of 3D Point Cloud Data for Construction Applications. Archives of Computational Methods in Engineering [online]. 27.2 (Apr. 2020), pp. 479–499. ISSN: 1886-1784. DOI: 10.1007/s11831-019-09320-4. Available from: https://doi.org/10.1007/s11831-019-09320-4 [Accessed June 16, 2022].

[41] X. Wang, Z. Xie, K. Wang, and L. Zhou. Research on a Handheld 3D Laser Scanning System for Measuring Large-Sized Objects. Sensors [online]. 18.10 (Oct. 2018), p. 3567. ISSN: 1424-8220. DOI: 10.3390/s18103567. Available from: https://www.mdpi.com/1424-8220/18/10/3567 [Accessed July 1, 2022].

[42] Wingman Tool Changer. Aug. 2021. Available from: http://triplea-robotics.com/tool-changer/ [Accessed Sept. 5, 2022].

[43] S. Wright, S. Jahanshahi, F. Jorgensen, and D. Brennan. Is Metal Recycling Sustainable? In: Green Processing 2002 - Proceedings: International Conference on the Sustainable Processing of Minerals. Jan. 2002.

[44] H. Zhao, J.-P. Kruth, N. Van Gestel, B. Boeckmans, and P. Bleys. Automated dimensional inspection planning using the combination of laser scanner and tactile probe. Measurement [online]. 45.5 (June 2012), pp. 1057–1066. ISSN: 02632241. DOI: 10.1016/j.measurement.2012.01.037. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0263224112000528 [Accessed June 16, 2022].

[45] Q. Zou, Y. Cao, Q. Li, Q. Mao, and S. Wang. CrackTree: Automatic crack detection from pavement images. Pattern Recognition Letters [online]. 33.3 (Feb. 2012), pp. 227–238. ISSN: 01678655. DOI: 10.1016/j.patrec.2011.11.004. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0167865511003795 [Accessed Sept. 18, 2022].