The Development of Mechatronic Machine Vision System for Inspection of Ceramic Plates

This is my Master's thesis, presenting the research conducted to build a mechatronic machine vision system for the inspection of garnished wall plates.


  1. Ain Shams University, Faculty of Engineering

     DESIGN AND IMPLEMENTATION OF FLEXIBLE MANUFACTURING CELL FOR THE INSPECTION OF CERAMIC WALL PLATES

     A Thesis Submitted in Partial Fulfilment of the Requirements for the Degree of MSc. in Mechatronics

     By Waleed Abd El-Megeed El-Badry, BSc. in Mechanical Engineering (Mechatronics Branch), Teaching Assistant, Mechatronics Department, College of Engineering, Misr University for Science and Technology

     Supervised by: Prof. Farid A. Tolbah, Emeritus Professor, Design and Production Engineering Department, Ain Shams University; and Dr. Ahmed M. Aly, Assistant Professor, Design and Production Engineering Department, Ain Shams University

     2012
  2. ABSTRACT

     Quality inspection is one of the recurring topics in the industrial community. Garnished wall plates may require inspection in terms of geometry (height, width, corner radius, etc.), orientation of drawn objects (angles with respect to a reference axis) and colour of drawn objects. Inspection is, at heart, the emulation of human decision-making in classification. In this thesis, we implemented a hybrid system of fuzzy logic and fuzzy c-mean clustering for the quality inspection of garnished ceramic wall plates. This novel approach is designed to inspect colour, geometry, angles and the existence of colour spots. The study is intended to reduce the error due to human visual inspection. A developed Flexible Manufacturing Cell (FMC) is responsible for the inspection of ceramic plates, applying a Computer Vision System (CVS) and automation of the cell. The benchmarking of the proposed algorithm is considered promising compared to other published peers.
  3. SUMMARY

     The research effort expended upon the problem of objectively inspecting, analysing and characterising garnished ceramic tiles is easily justified by the commercial and safety benefits to the industry, such as automating an obsolete manual inspection procedure and providing higher homogeneity within sorted grades, which would consequently improve the processing stability of the overall inspection procedure.

     Tiredness and lack of concentration are common problems, leading to errors in grading tiles. Gradual changes are difficult for human inspectors to detect, and it is possible that slight and progressive changes will not be noticed at an early stage. Additionally, it is particularly difficult for the human eye to accurately sort tiles into shades under the changing light conditions of a factory. Human judgment is, as usual, influenced by expectations and prior knowledge.

     Since the early 1990s, several published papers have been concerned with finding new approaches to detect cracks and colour spots in ceramic tiles. However, most of them targeted the inspection algorithm and neglected the other aspects of a machine vision system (variability in illumination and surface reflectivity, image acquisition, the mechanical handling system and the electrical interfacing circuit). Meanwhile, measuring the geometry and colour quality of drawn objects, which is demanded when inspecting garnished wall plates, had never been discussed prior to the conducted research.

     The proposed thesis offers a novel approach to performing geometric and colour quality inspection based on a developed hybrid technique of fuzzy logic and fuzzy c-mean clustering to detect cracks, colour spots and colour mismatch. It also offers scoring for the extracted geometric and colour features.

     Live image acquisition is discussed in detail, together with the different parameters affecting the quality of the acquired image, such as gamma correction and frame-rate considerations. Detailed steps for live acquisition are also given.
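The gamma correction mentioned above can be illustrated with a minimal sketch (generic Python/NumPy, not the thesis implementation; the function name and test patch are invented for illustration):

```python
import numpy as np

def gamma_correct(image, gamma=0.7):
    """Gamma-correct an 8-bit greyscale image.

    Intensities are normalised to [0, 1], raised to the power gamma,
    and rescaled to [0, 255]; gamma < 1 brightens mid-tones, while
    gamma > 1 darkens them.
    """
    normalised = image.astype(np.float64) / 255.0
    corrected = np.power(normalised, gamma)
    return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)

# A uniform mid-grey test patch: gamma = 0.7 lifts intensity 128 to about 157.
plate = np.full((4, 4), 128, dtype=np.uint8)
brightened = gamma_correct(plate, gamma=0.7)
```

The thesis applies this kind of mapping during acquisition (Figure 2.19 uses γ = 0.7) to compensate for the non-linear response of the display and sensor chain.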
A proposed algorithm for image calibration to minimise lens distortion is demonstrated. The proposed algorithm proved its validity in lens and perspective correction.
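The thesis's own calibration is grid-based (Chapter 2); purely as an illustration of the general idea of software distortion correction, a one-parameter radial model (an assumption for this sketch, not the author's model) can be inverted by fixed-point iteration:

```python
import numpy as np

def distort(point, k1):
    """Apply a one-parameter radial (barrel/pincushion) distortion
    about the image centre: p' = p * (1 + k1 * r^2)."""
    r2 = point[0] ** 2 + point[1] ** 2
    return point * (1.0 + k1 * r2)

def undistort(point, k1, iterations=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the observed point by the distortion factor evaluated at
    the current estimate of the undistorted point."""
    estimate = point.copy()
    for _ in range(iterations):
        r2 = estimate[0] ** 2 + estimate[1] ** 2
        estimate = point / (1.0 + k1 * r2)
    return estimate

original = np.array([0.30, 0.40])       # normalised image coordinates
observed = distort(original, k1=-0.2)   # barrel distortion pulls the point inward
recovered = undistort(observed, k1=-0.2)
```

For mild distortion the iteration contracts quickly, so a handful of passes recovers the undistorted coordinates to machine precision.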
  4. To perform colour grouping, fuzzy c-mean clustering was chosen, where a visual representation of the clustering is displayed to offer a visual interpretation of the isolation of drawn objects rather than dummy clustering nodes. Geometric feature extraction takes place by means of edge detection using a tuned Prewitt filter with rays of spokes, and line fitting for the evaluation of edge length. Moreover, geometric and colour scores are calculated based on the measured metrics.

     A fuzzy classifier having two inputs (geometry and colour score) and three outputs (acceptable, colour spot and colour mismatch) was built for garnished ceramic grading.

     A test rig was designed and manufactured for the evaluation of the proposed system. The evaluation showed promising results in terms of detection accuracy of the correct ceramic grade. The effect of ceramic orientation was also studied to demonstrate the efficiency of the proposed calibration algorithm. Meanwhile, a benchmarking was carried out to assess the overall performance. A comparative study of fuzzy c-mean against the nearest-neighbour approach showed that fuzzy c-mean gave better correct-pattern detection.

     Parts of the work presented in the submitted thesis were published in [1-3].
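The colour-grouping step can be illustrated with a standard fuzzy c-mean update loop (a generic Python/NumPy sketch, not the thesis code; the RGB sample points and cluster count are invented for illustration):

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, iterations=50, seed=0):
    """Standard fuzzy c-means: alternate between recomputing cluster
    centres as membership-weighted means and updating the membership
    matrix from inverse distances. m > 1 controls fuzziness."""
    rng = np.random.default_rng(seed)
    # Random membership matrix; each row normalised to sum to 1.
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iterations):
        w = u ** m
        centres = (w.T @ data) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)               # guard against division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # renormalise memberships
    return centres, u

# Two well-separated colour blobs in RGB space (reddish vs. bluish pixels).
pixels = np.array([[250, 10, 10], [245, 5, 12],
                   [10, 10, 250], [8, 12, 245]], dtype=float)
centres, u = fuzzy_c_means(pixels, n_clusters=2)
```

Unlike hard clustering, each pixel keeps a graded membership in every cluster, which is what allows the soft partitioning and visual interpretation described above.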
  5. ACKNOWLEDGEMENT

     The researcher is obliged to his supervisors, Prof. Farid Tolbah and Dr. Ahmed Aly, for their recurring support and guidance during the research period. He also carries gratitude to the venerable referees, namely Prof. Magdy Abdel Hameed and Prof. Mohammed El-Adawy, for their kind hospitality and their comments to enhance this research.

     The priceless help of Prof. Mustafa Eissa in reviewing and enhancing this thesis will be memorable for life.

     The conducted research would not have been carried out without the devoted assistance of Eng. Mahmoud Hassan, who left major fingerprints on the mechanical design of the built flexible cell.
  6. TABLE OF CONTENTS

     Literature Review
       1.1 Introduction
       1.2 Machine Vision and Computer Vision
       1.3 Machine Vision and Human Vision
       1.4 Inspection of Ceramic Plates
       1.5 Challenges in Automated Inspection
       1.6 Statement of the Problem
       1.7 Objective of the Present Work
     Image Acquisition and Pre-processing
       2.1 Spatial Calibration
         2.1.1 Lens distortion
         2.1.2 Telecentric lens
         2.1.3 Software-based correction
       2.2 Proposed Algorithm for Correcting and Calibrating Lens Distortion
         2.2.1 Colour plane extraction
         2.2.2 Automatic thresholding
         2.2.3 Particle grouping
         2.2.4 Particles and distance measurements
         2.2.5 Performing correction on each pixel
       2.3 Image Acquisition
         2.3.1 Analogue cameras
         2.3.2 Digital cameras
           2.3.2.1 Camera Link
           2.3.2.2 USB cameras
           2.3.2.3 IEEE 1394 (FireWire) cameras
       2.4 Acquisition Parameters
         2.4.1 Image format
         2.4.2 Speed
         2.4.3 Bayer colour filter
         2.4.4 Brightness and contrast
         2.4.5 Gain
       2.5 Gamma Correction
     Soft Partitioning and Colour Grouping
       3.1 Clustering
       3.2 Problem of Clustering
       3.3 Advantages of Using Clustering
       3.4 Constraints of Using Clustering
       3.5 Methodologies of Distance Measurements
         3.5.1 Common distance functions
           3.5.1.1 The Euclidean distance
           3.5.1.2 The Manhattan distance (aka taxicab norm or 1-norm)
           3.5.1.3 The Hamming distance
       3.6 Clustering Techniques
       3.7 Effectiveness of Colour Clustering
         3.7.1 The need for fuzzification in colour grouping
         3.7.2 Challenges in fuzzy clustering
         3.7.3 Proposed soft partitioning and group visualization using fuzzy clustering (fuzzy c-mean)
     Geometric Feature Extraction
       4.1 Definition of Feature Vector
       4.2 Features of Interest in Garnished Wall Plates
       4.3 Applied Algorithm for Preliminary Locating of Drawn Objects
         Methodology of edge detection
         Formulating edge detectors
         Tuned Prewitt filter
       4.4 Computing Feature Score
         Calculating the score for colour quality
         Calculating the accuracy of geometric features
     Classification of Ceramic Plates Using Fuzzy Logic
       5.1 Fuzzy Logic and Fuzzy Logic Systems
       5.2 Mamdani's Fuzzy Logic System
         5.2.1 Fuzzification
         5.2.2 Rules
           5.2.2.1 Fuzzy operations
           5.2.2.2 Implication
           5.2.2.3 Aggregation
         5.2.3 Defuzzification
           5.2.3.1 Defuzzification techniques
       5.3 Methodologies Utilized in Building a Fuzzy Inference System
       5.4 Formulating the Proposed Fuzzy Classifier for the Assessment of Garnished Wall Plates
         5.4.1 Fuzzification of input and output
         5.4.2 Rules for classification
         5.4.3 Proposed modified defuzzification method
     Results and Discussion
       6.1 Implementation of the Proposed System
         6.1.1 Mechanical Design
         6.1.2 Electrical Circuitry
         6.1.3 Software Architecture
       6.2 Results
       6.3 Discussion
     Conclusion
     Future Work
     References
     Published Papers
  9. LIST OF FIGURES

     Figure 1.1 Mechatronics engineering discipline
     Figure 1.2 Archetypal machine vision systems applicable to a wide …
     Figure 2.1 Notion of image calibration
     Figure 2.2 Types of lens distortion
     Figure 2.3 In a telecentric system rays get into the optics only …
     Figure 2.4 The proposed algorithm for perspective correction
     Figure 2.5 Screenshot of calibration grid used in correction after red plane extraction
     Figure 2.6 Adaptive thresholding using clustering technique
     Figure 2.7 Resulting image of boundary tracking of grid dots
     Figure 2.8 Detecting centres to check for skewness of each dot
     Figure 2.9 Visualization of detected dots in the grid
     Figure 2.10 Pixel mapping to restore expected grid locations
     Figure 2.11 Acquired image of ceramic plate
     Figure 2.12 The corrected image after applying the algorithm
     Figure 2.13 Screenshot of the developed software for calibration of ceramic plate
     Figure 2.14 CCD array configuration
     Figure 2.15 Image format and delivered speed rate
     Figure 2.16 Bayer filter (concept and configuration)
     Figure 2.17 Brightness adjustment
     Figure 2.18 Gain control
     Figure 2.19 Gamma correction of the acquired ceramic image (γ = 0.7)
     Figure 2.20 Implementation of gamma correction
     Figure 3.1 Clustering is performed by grouping similar data based on common criteria
     Figure 3.2 Taxicab geometry vs. Euclidean distance
     Figure 3.3 Example of Hamming distance
     Figure 3.4 Visual representation of clustering result on the garnished wall plates
     Figure 3.5 The developed software showing fuzzy clustering effect in horizontal orientation
     Figure 3.6 The developed software showing fuzzy clustering effect in vertical orientation
     Figure 4.1 Extracting geometric features
     Figure 4.2 Feature vector as metrics for inspection
     Figure 4.3 Processed pixel and its neighbours
     Figure 4.4 Kernel multiplied by the image window
     Figure 4.5 Edge detection and extracting geometry of drawn objects
     Figure 4.6 Fitting detected points along the profile into lines
     Figure 4.7 Screenshot of the developed software showing the feature extraction stage
     Figure 5.1 An example of a membership function for evaluation of a program
     Figure 5.2 Fuzzifying ceramic plate feature vector
     Figure 5.3 Example of implementing fuzzy logical operations
     Figure 5.4 An implication example from the proposed rules
     Figure 5.5 Example of the aggregating effect on the possibility of accepting the ceramic plate
     Figure 5.6 Result of different defuzzification techniques
     Figure 5.7 Fuzzy logic system
     Figure 5.8 Screenshot of the developed Mamdani inference engine for ceramic assessment
     Figure 5.9 Colour score membership function
     Figure 5.10 Geometric score membership function
     Figure 5.11 The output membership functions of ceramic grading
     Figure 6.1 The test rig of the proposed system
     Figure 6.2 The H-bridge circuit
     Figure 6.3 Schematic diagram of the USB interfacing card
     Figure 6.4 PCB layout of the interfacing card
     Figure 6.5 Architectural design of the developed software
     Figure 6.6 Rotated garnished ceramic plate
     Figure 6.7 Ceramic plate with colour mismatch
     Figure 6.8 Ceramic plate with colour spots
     Figure 6.9 The effect of rotating ceramic plates on correct detection of their class
     Figure 6.10 Detection percentage of ceramic grade (nearest neighbour vs. fuzzy c-mean)
  11. LIST OF TABLES

     Table 1.1 The difference between machine vision and computer vision
     Table 2.1 Comparison of available cameras utilized in machine vision applications
     Table 5.1 Range of colour score
     Table 5.2 Range of geometric score
     Table 5.3 Rules for classifying garnished plates
     Table 6.1 Result of the proposed classification system for the stated figures
     Table 6.2 Accuracy of automatic detection of correct classes
     Table 6.3 Selected metrics for software performance assessment
     Table 6.4 System configuration
  12. LIST OF PSEUDO CODE

     List 2.1 Colour plane extraction algorithm
     List 2.2 Pseudo code of thresholding using clustering algorithm
     List 2.3 Boundary extraction of grid contours
     List 2.4 Restoration of grid circular shape
     List 2.5 Retrieval of grid corrected locations
     List 3.1 Colour sorting with fuzzy c-mean
     List 3.2 The algorithm for grouping objects by colour and location
  13. LIST OF ABBREVIATIONS AND ACRONYMS

     aka      Also Known As
     API      Application Programming Interface
     ASSIST   Automatic System for Surface Inspection and Sorting of Tiles
     BMU      Best Matching Unit
     CCD      Charge Coupled Device
     CFA      Colour Filter Array
     CS       Colour Score
     CV       Computer Vision
     DOM      Degree Of Membership
     DS       Dimension Score
     DTS      Distance Score
     FCM      Fuzzy C-Mean
     FIS      Fuzzy Inference System
     FL       Fuzzy Logic
     FLS      Fuzzy Logic System
     FMC      Flexible Manufacturing Cell
     FMS      Flexible Manufacturing System
     IPT      Interactive Prototype Toolkit
     LOM      Largest Of Maximum
     MF       Membership Function
     MOM      Middle Of Maximum
     MV       Machine Vision
     OGS      Overall Geometry Score
     OS       Orientation Score
     RGB      Red, Green and Blue colour planes
     ROI      Region Of Interest
     SOM      Self-Organising Maps
     SOM      Smallest Of Maximum
     TM       Target Machine
     TMPLAR   Template Learning from Atomic Representations
     VB       Visual Basic
     vs.      Versus
  14. Literature Review

     1.1 Introduction

     Mechatronics is an integration of mechanical engineering, electrical engineering, automatic control and computer science [4]. This discipline has gained momentum in the industrial community since researchers in this field became involved in robotics, automated measurements and machine vision (Figure 1.1).

     Figure 1.1 Mechatronics engineering discipline (mechanical engineering: handling units, conveyors, stress analysis; electrical engineering: interfacing with electronic peripherals, signal conditioning, signal transmission; automatic control: dynamic response, intelligent control, system stability; computer science: software engineering, code reusability, hardware interfacing APIs)

     Machine vision (MV), as shown in Figure 1.2, is concerned with the engineering of integrated mechanical-optical-electronic-software systems for examining objects and materials, human artefacts and manufacturing processes, in order to detect defects and improve quality, operating efficiency and the safety of both products and processes.

     In addition to inspection, machine vision can be used to control the machines used in manufacturing and material processing [5]. These may perform such operations as cutting, trimming, grasping, manipulating, packing, assembling, painting, decorating, coating, welding, etc. [6].
  15. Automated visual inspection systems allow manufacturers to monitor and control product quality, thus maintaining or enhancing their competitive position. Machine vision is also being used to ensure greater safety and reliability of manufacturing processes.

     Figure 1.2 Archetypal machine vision systems applicable to a wide …

     The confidence being gained by applying machine vision to engineering manufacturing is now spilling over into industries such as food processing, agriculture, horticulture and textile manufacturing, where product variability is intrinsically higher.
  16. Thus, we are at the threshold of what we predict will be a period of rapid growth of interest in machine-vision-aided flexible manufacturing cells (FMC) in the Egyptian market.

     Implicit in the preceding figure is the fact that machine vision is a multi-disciplinary subject and necessarily involves designers in mechanical, optical, electrical, electronic (analogue and digital) and software engineering, as well as mathematical analysis (of image processing procedures) [7].

     Less obviously, several aspects of "software engineering" are also required, including human-computer interfacing, work management and quality assurance procedures. Integrating such varied technologies to create a harmonious, unified system is of paramount importance; failure to do so properly will inevitably result in an unreliable and inefficient machine.

     We may observe that machine vision is a strong candidate for Mechatronics applications since both fields dig into the same areas [1].

     1.2 Machine Vision and Computer Vision

     Machine vision is not synonymous with Computer Vision (CV). Computer vision is a branch of computer science where the manipulation of images is the primary concern; it is also known as digital image processing, as it involves image manipulation, analysis, compression and presentation. On the other hand, machine vision is an engineering discipline where software and hardware are tied to each other.

     Several topics comprise machine vision systems, such as illumination, selection of cameras, interfacing circuitry and mechanical actuation systems [1]. Less obviously, several aspects of "multi-domain knowledge" are also required, including human-computer interfacing, work management and quality assurance procedures.

     Hence, a team of engineers with a range of skills is needed to design a successful Machine Vision system. Integrating such varied technologies to create a harmonious, unified system is of paramount importance; failure to do so properly will inevitably result in an unreliable and inefficient machine.
  17. This point cannot be over-emphasised, and it is why we insisted in the very first pages of this thesis that Machine Vision is quite distinct from Computer Vision.

     The point is made more forcibly in Table 1.1. The distinction between machine vision and computer vision reflects the diversity that exists between engineering and science. Entries in the central column relate to the factory-floor Target Machine (TM), unless otherwise stated, in which case they refer to an Interactive Prototyping Toolkit (IPT). The following table (Table 1.1) depicts the major differences between Machine Vision and Computer Vision.

     Vision systems are currently being used extensively in the manufacturing industry, where they perform a very wide variety of inspection, monitoring and control functions. The areas of manufacturing that have benefited most in the past include electronics, automobiles, aircraft and domestic products, from furniture polish and toothpaste to refrigerators and washing machines.

     Vision systems have also been used in the food industry, agriculture and horticulture, although to a smaller extent. Machine vision is a part of many inspection processes nowadays; these may include:

     • Analysing the shape of whole products as a prelude to processing them using robotic manipulators
     • Analysing texture
     • Counting
     • Detecting foreign bodies
     • Grading
     • Measuring linear dimensions
  18. Table 1.1 The difference between machine vision and computer vision [8]

     • Motivation: MV practical; CV academic.
     • Advanced in the theoretical sense: MV unlikely (practical issues are likely to dominate); CV yes, many academic papers contain a lot of "deep" mathematics.
     • Cost: MV critical; CV likely to be of secondary importance.
     • Dedicated electronic hardware: MV possibly needed to achieve high-speed processing; CV no (by definition).
     • Use of integrated solutions: MV yes (e.g., systems are likely to benefit from careful lighting); CV no, there is a strong emphasis on proven algorithmic methods.
     • Data source: MV a piece of metal, plastic, glass, wood, etc.; CV a computer file.
     • Most important criteria by which a vision system is judged: MV (a) easy to use, (b) cost-effective, (c) consistent and reliable, (d) fast; CV performance.
     • Multi-disciplinary: MV yes; CV no.
     • Criterion for a good solution: MV satisfactory performance; CV optimal performance.
     • Nature of subject: MV systems engineering (practical); CV computer science, academic (theoretical).
     • Human interaction: MV IPT vision engineer, TM low skill level during set-up and autonomous in inspection mode; CV often relies on a user having specialist skills (e.g., medical background).
     • Operator skill level required: MV (a) IPT medium/high, (b) TM must be able to cope with a low skill level; CV may rely on a user having specialist skills (e.g., medical background).
     • Output data: MV a simple signal, to control external equipment; CV a complex signal, for a human being.
     • Principal factor determining processing speed: MV IPT human interaction, TM speed of production; CV human interaction, often of secondary importance.
  19. 1.3 Machine Vision and Human Vision

     Machine Vision does not set out to emulate human vision. At some time in the future, it may be both possible and convenient to build a machine that can "see" like a human being; at the moment, it is not. Today, an industrial Machine Vision engineer is likely to regard any new understanding that biologists or psychologists obtain about human or animal vision as interesting but largely useless.

     The reason is simple: the "computing machinery" used in the natural world (networks of neurons) is quite different from that used by electronic computers [9-11]. Certainly, no person can properly look introspectively at his/her own thought processes in order to understand how he/she analyses visual data. Moreover, there is no need to build machines that see the world as we do. Even if we could determine exactly how a person sees the world, it would not be necessary to build a machine that performs the task in the same way.

     In the natural world, there are clearly several different and successful paradigms for visual processing, including those of insects, fish and mammals. In any case, such a machine would have the same weaknesses and be susceptible to the same optical illusions as we are.

     1.4 Inspection of ceramic plates

     The ceramic tiles industrial sector is a relatively young industry which has taken significant advantage of the strong evolution in the world of automation in recent years. All production phases have been addressed through various technical innovations, with the exception of the final stage of the manufacturing process. This stage is concerned with visual surface inspection in order to sort tiles into distinct categories or to reject those found with defects and pattern faults. The generally accepted manual method of tile inspection is labour intensive, slow and subjective.
  20. Automated sorting and packing lines have been in existence for a number of years; however, the complexity of inspecting tiles for damage and selecting them against the individually set quality criteria of a manufacturer has meant that, until recently, automated tile inspection has not been possible.

1.5 Challenges in Automated Inspection

The research effort expended upon the problem of objectively inspecting, analysing and characterizing ceramic tiles is easily justified by the commercial and safety benefits to the industry:

- Automation of a currently obsolete and subjective manual inspection procedure
- Significant reduction in the need for human presence in hazardous and unhealthy environments
- More robust and less costly inspection
- Higher homogeneity within sorted classes of products

Tiredness and lack of concentration are common problems, leading to errors in grading tiles. Gradual changes are difficult for human inspectors to detect, and it is possible that slight and progressive changes will not be noticed at an early stage. Additionally, it is particularly difficult for the human eye to accurately sort tiles into shades under the changing light conditions of a factory [12]. Human decisions are, as usual, influenced by expectations and prior knowledge.

However, this problem is not specific to structural defects. In many detection tasks, for example edge detection, there is a gradual transition from presence to absence. On the other hand, in "obvious" cases most naïve observers agree that the defect is there, even when they cannot identify the structure.

The goal of the inspection is not to give a statistical analysis of the production but to classify every tile into quality-constant batches. These tasks are often referred to as visual inspection.
  21. In 1976, "H. Blossel" [13] developed the first successful model for sorting ceramic tiles. The system inspects captured images by means of pixel comparison between a master plate and the plate under inspection. Although the system was original in its implementation, it lacked consideration of the mechanical handling system and of interfacing with the suggested controller (a microprocessor). Moreover, it assumed that the image is taken with no noise, which is hardly achievable up to this moment [2, 14, 15]. Meanwhile, to inspect any plate, the same pattern distribution must be retained to perform the pixel comparison successfully.

The system developed by "Desoli" was concerned with corner defects in ceramic tiles [16]. It converts the captured image into a binary image and thereafter searches for edges by comparing each pixel to its surrounding neighbours. He performed a comparison of edge detection filters and showed that the Sobel filter is the proper one to use in his application. The system showed promising results (97.3 % correct classification). However, the algorithm was affected by illumination, which degrades edge detection drastically [17]. The specification of the practical implementation was not mentioned either.

The research carried out in [17, 18] was divided into two parts. The first part [18] proposed a group of inclined mirrors for indirect illumination as an approach to minimize reflectivity. They also proved that if an image of the tile were taken from an oblique angle, cracks could be detected easily. In the second part [17], computer software was implemented to perform online inspection. The program ran under DOS and was written in the C language. No information about the hardware used was provided in this publication.

"Boukouvalas" added colour inspection capabilities to his system by utilizing fuzzy logic to transfer the knowledge of a domain expert into the developed software [19]. The system was constrained to distinguishing among 6 different colours.
Colour matching yielded good results; however, "Marques" [20] proposed better rules by adding the change of error during the inspection operation, and therefore his system became a strong candidate for performing trend analysis. A year later, "Boukouvalas" developed a system he named "ASSIST", an acronym for Automatic System for Surface Inspection and Sorting of Tiles [21].
  22. The system performed inspection in three stages: a Charge Coupled Device (CCD) colour camera for primary colour grading, a monochrome camera for crack detection and a line scan camera for precision colour grading. The system has the ability to distribute its computation over a local area network connected to two servers. However, the system failed to inspect plates in real time due to the excessive computation required by each process.

The oblique orientation of the capturing camera was used again by "Massen" and "Franz" [22]. They illustrated that although dimensional measurement is almost impossible from the camera perspective, mounting a system of mirrors to obtain an indirect perpendicular view of the ceramic plate would yield better crack detection and less distortion of the view. They argued that such a system could produce better results than human visual inspection. The developed system was able to detect cracks based on a portion of the system proposed in [21] (crack detection).

They tackled the issue of intensive computation by estimating the presence of cracks after performing a Fast Fourier Transform (FFT) of the image and matching it to a master plate. The system is limited to inspecting cracks only, without colour or shape verification.

The technique developed by "Costa" and "Paretou" was based on mounting 4 cameras at different angles [23]. A master template is captured by each camera. Thereafter, a feature-based technique is used to perform image registration by means of a linear transformation to calculate translation, rotation and scaling. During the inspection stage, the ceramic plate is captured simultaneously by all cameras. Each image is then compared to its master template.
  23. The technique is meant to combine views from different angles to enhance image features that may not be intuitively obtainable using a single camera. The system detects cracks; however, it lacks the capability to perform accurate measurements due to the global nature of the linear transformation method. The system could be enhanced if an intensity-based method were used with a correlation matrix to find the similarity between captured images. The authors pointed out that the limitation of the system is its memory-intensive calculations, which are considered the bottleneck of the system.

"Smith" and "Stamp" in [24] were motivated to investigate the 3D surface of ceramic plates, since conventional applications of vision-based surface inspection rely upon the analysis of abstract projected features at the image plane, from which, in general, it is not possible to reliably distinguish between three-dimensional topographic and two-dimensional chromatic features. Limitations in established techniques generally follow as a consequence of what might be termed a conventional 'viewer-centred' approach to object surface representation. They stated that whenever inspection takes place, the analysis of projected and abstracted two-dimensional features within the image domain is restricted. Difficulty arises as the appearance of these features is closely dependent upon the viewing and lighting configuration, and as a consequence these aspects will normally be highly constrained.

The developed system is based on capturing a sequence of images using a camera fixed in one position. Each image is captured under synchronously controlled illumination, which is also known as the photometric stereo approach [25]. The 3D surface of the ceramic plate is then constructed from the intensity variation of the captured images. Experimental work was performed using a CCD camera with a resolution of 512 x 512 pixels, and the software was developed using the C++ language.
The system lacked accuracy, although it succeeded in reconstructing the general shape of the 3D ceramic surface.
  24. By means of Self Organizing Maps (SOM), also known as Kohonen feature maps, "Kukkonen" and his colleagues built a system of colour measurements for sorting five classes of brown tiles [26]. An SOM uses the Euclidean distance between the input vector and the weight vector of each neuron. The neuron whose weight vector is most similar to the input is called the Best Matching Unit (BMU). The weights of the BMU and of the neurons close to it in the SOM lattice are adjusted towards the input vector, which consists of the colour of each pixel and its position with respect to the top left of the image. Thereafter, K-Nearest Neighbour clustering is used for fine-tuning the results. They compared results obtained from a trained 2D (colour and position) SOM. The authors demonstrated that the proposed system emulates human visual inspection in detecting the aforementioned tile classes with a success rate of 97.6 %. However, they didn't provide any practical information regarding their experimental setup, which makes comparing the results to other methods impossible since the experiment cannot be reproduced.

As a further development of the work published by "Kukkonen", a hybrid system of auto-associative and probabilistic neural networks to detect surface cracks on ceramic tiles [27] was carried out by "Hocenski" and "Nyarko". They explained their usage of the first model to select the best features of the plate, while the latter one (the probabilistic neural network) is involved in classification.

They stated that utilization of a Hopfield network minimized computation, since calculating the weights is a one-step process. Meanwhile, the probabilistic neural network, which behaves like K-Nearest Neighbour, yielded better results compared to a multilayer perceptron neural network. A Gaussian function was selected as the radial basis function for computing the weight of each neighbouring point.
For their experimental work, they used an analogue CCD camera with a resolution of 400 x 240 pixels and a speed of 20 frames per second. The illumination technique was not discussed, which makes it hard to reproduce the experiment elsewhere.
  25. Computation time and processing power were the principal motivations for "Elbehiery" et al. to propose an algorithm based on the principles of image processing and morphological operations on ceramic tile images [12]. The algorithm converts the captured colour image into grey scale to be able to perform intensity adjustment and histogram equalization. Thereafter, the image is thresholded to binary format and an edge detection algorithm is applied by comparing each pixel to its surrounding neighbours. Filling the gaps between discontinued lines is done as a final stage before searching for irregular shapes using SDC morphological operations, which are morphology operations specialized for grey scale images that exist in MATLAB [28]. Eventually, inverting each pixel value in the binary image is carried out to enhance the visual appearance of the detected crack.

Although the proposed system is quite easy to implement, it requires a good deal of manual tuning. Furthermore, it totally ignored colour defects, which are critical in the ceramic industry [26]. Regardless of the latter point, the major advantage of such an algorithm is that it can be implemented as a low-cost solution.

With the aid of wavelet analysis [29], "Elbehiery" et al. proposed a more robust technique than their previous work in [12]. They also expanded their work to cover cracks and spots. The selection of wavelet analysis, as they illustrated in their paper, enabled them to inspect the plate regardless of variability in scale and resolution. Since calculating the digital wavelet transform yielded a large amount of data, the authors chose only a subset of scales and positions. They employed TMPLAR, which stands for Template Learning from Atomic Representations, with different techniques like the Haar, Daubechies and Coiflets wavelet families. They trained the system using only five iterations to reach convergence.
Thereafter, pattern classification was carried out using measurements of the variance between the template and the ceramic plate under inspection. The proposed algorithm was able to find cracks in textured plates; however, it is observed that no experiments were performed on ceramic plates with more than two different colours.
  26. The published results showed only cracks, without demonstrating the detection of colour spots or any other geometric metrics.

"Novak" and "Hocenski" [15] investigated methods used to analyse texture. These fall into two main categories. The first, called the statistical or stochastic approach, treats textures as statistical phenomena: the formation of a texture is described by the statistical properties of the intensities and positions of pixels. The second category, called the structural approach, introduces the concept of texture primitives, often called texels or textons.

To describe a texture, a vocabulary of texels and a description of their relationships are needed. The goal is to describe complex structures with simpler primitives, for example via graphs. They proposed the local binary operator method for feature extraction, since it is a monotonic process and thus efficient under variable illumination. The method uses two matrices: the first one holds the weight of each neighbour, and the second one represents the pixel of interest and its surrounding neighbours. This method was originally used as a complementary measure of local image contrast.

By measuring the length of consecutive pixel runs and comparing them to the master template, the researchers proposed the method as a part of an automated inspection system for ceramic tiles. The proposed algorithm doesn't cover colour inspection.

"Novak" extended his work in [30] by adding a pixel pair difference method to search the image for surface defects, as the method is mainly used to encrypt images. The method simply commences by flattening the image into a vector holding the values of all pixels scanned in a zigzag manner. Thereafter, the difference in grey scale value of every two successive non-overlapping pixels is calculated. "Novak" used this method to derive a feature vector based on the encryption results by means of the difference in histograms.
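The local binary operator summarized above (each of the 8 neighbours thresholded against the centre pixel and combined through a power-of-two weight matrix) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name, the clockwise neighbour ordering and the list-of-lists grey image layout are assumptions.

```python
# Neighbour offsets (dy, dx), clockwise from the top-left neighbour.
# Neighbour i carries the weight 2**i, as in the classic LBP operator.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """Return the 8-bit local binary pattern at interior pixel (y, x).

    img is a list of rows of 0-255 grey values; each neighbour that is
    greater than or equal to the centre contributes its weight.
    """
    centre = img[y][x]
    code = 0
    for i, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= centre:
            code |= 1 << i
    return code
```

A flat patch (all neighbours brighter than or equal to the centre) yields the code 255, while a bright isolated centre yields 0, which is why the operator is invariant to monotonic grey-level changes such as illumination drift.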
  27. If the defective image has a value other than expected, a deeper investigation using the local binary operator he proposed in [15] is performed consequently. The key advantage of such a method is the minimization of the required calculation power.

The technique is much slower when implemented on colour images, since all three colour planes Red, Green and Blue (RGB) must be compared. The system is constrained to cracks and doesn't investigate geometry, angles or colour spots.

"Ghita" et al. [31] successfully built a complete automated visual inspection system for painted slates. The system employed edge detection using median and Sobel edge detectors, morphological analysis and filtering objects by area to classify defects if they exist. The major issue confronting them was surface reflectivity, which they endeavoured to tackle by using partial lighting for illumination while acquiring images from an oblique camera angle. The system was able to yield promising results. However, with regard to accurate geometric metrics, acquiring images from an oblique angle may not provide satisfactory results. Moreover, the evaluation of defects was the primary concern, while colour homogeneity and drawing geometry were ignored. The system was published in detail, which is a major advantage for researchers carrying out further development.

In 2006, "Vasilic" et al. conducted a survey of edge detection methods used to inspect ceramic plates [31]. The study investigated the Prewitt, Sobel, Median and Canny edge detectors. It was performed over 23 different types of ceramic plates, and the researchers concluded that the Sobel filter sustained variability in shape and illumination and is thus recommended in this type of application.
This conclusion agrees with a later publication by "Vincent" and "Folorunso" [32], in which they affirmed that the Sobel filter improves the quality of images taken under extremely unfavourable conditions in several ways: brightness and contrast adjustment, edge detection, noise reduction, focus adjustment and motion blur reduction.
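The Sobel edge detector recommended by these studies convolves the image with two small 3x3 kernels, one per direction. A minimal pure-Python sketch is given below; the function name and the list-of-lists image layout are illustrative assumptions, not taken from any of the surveyed systems.

```python
# Standard 3x3 Sobel kernels.
GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]   # responds to horizontal intensity change
GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]] # responds to vertical intensity change

def sobel_magnitude(img):
    """Return the gradient magnitude of a grey image (borders left 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for j in range(3):
                for i in range(3):
                    p = img[y + j - 1][x + i - 1]
                    gx += GX[j][i] * p
                    gy += GY[j][i] * p
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Because both kernels are small, separable and integer valued, the operator costs only a handful of additions per pixel, which is the low computational price noted in the text.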
  28. Meanwhile, the Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computation power.

1.6 Statement of the problem

The present problem can be summarized in the following points:

- Most of the published work related to ceramic inspection focuses on finding defects without considering practical aspects of implementation such as camera types, lens distortion, illumination, electrical circuitry and the mechanical actuation system.
- Since the garnished ceramic plates industry is a fast growing field, new inspection demands have become mandatory. This involves a wider inspection procedure covering drawing geometry, angles, orientation and the colour of each drawn object.
- Practical implementation should be discussed in more detail to allow future work to be carried out in an easier way.

1.7 Objective of the present work

Since garnished wall plates require more inspection procedures than the conventional peers covered in the preceding survey, the present work aims to:

- Develop an algorithm to inspect the geometry, angles, orientation and colour of each drawn object found in the garnished plate. The developed system is required to perform the inspection without stopping the production line.
- Design and implement a mechatronic system as a Flexible Manufacturing Cell (FMC) to implement the proposed algorithm. The system will comprise a software program to perform the inspection and control the FMC, a mechanical actuation system for sorting the ceramic plates and an electronic circuit which will serve as an interfacing layer between the software and the mechanical sorting system.
  29. - Study the impact of the developed system on productivity.

The suggested algorithm may be structurally distilled into four pillars:

I. Image acquisition and pre-processing: removing noise and correcting the lens distortion by means of a proposed calibration algorithm.
II. Soft partitioning and colour grouping: creating a boundary region around each drawn object in the ceramic plate based on the degree of membership of all its inbound pixels to expected colour patterns.
III. Geometric feature extraction: whereby the dimensions of drawn objects are extracted via edge detection and fitting of the detected points by means of an array of spokes.
IV. Classification using fuzzy logic: the final stage, which yields the ceramic grade (acceptable, colour spot or colour mismatch) based on the proposed fuzzy inference system.

Figure 1.3 Proposed inspection algorithm for garnished ceramic wall plates (Image acquisition and pre-processing → Soft partitioning and colour grouping → Geometric feature extraction → Fuzzy classification)
  30. Image Acquisition and Pre-processing

This chapter describes the first step in the proposed algorithm, which aims to prepare the image for the processing stage. A proposed algorithm for image calibration is discussed. Thereafter, the experimental setup for image acquisition, gamma correction and colour adjustment is shown in detail.

2.1 Spatial Calibration

Spatial calibration refers to the process of correlating the pixels of an acquired image to real features in the image. This process can be used to make accurate measurements in real-world units like millimetres instead of pixels, and to correct for camera perspective and lens distortion [33]. As illustrated in Figure 2.1, a grid of equally distant dots is placed in the camera's field of view. Thereafter, an image is taken of the calibration grid and compared to the expected distribution of dots in order to perform image correction.

Spatial calibration produces a mapping that depicts how each pixel relates to a real-world location. The image data itself is unaffected by this process. Image correction actually applies the learned calibration information to remove camera perspective or lens distortion effects from an image. This process is computationally intensive, so in many cases it is preferred to store the calibration results with the image and apply them as necessary.

Figure 2.1 Notion of image calibration
  31. 2.1.1 Lens distortion

Distortion is one of the worst problems limiting measurement accuracy; even the best performing optics are affected by some grade of distortion, and often even a single pixel of difference between the real image and the expected image can be critical. Distortion is simply defined as the percentage difference between the distance of an image point from the image centre and the same distance as it would be measured in a distortion-free image; it can be thought of as a deviation between the imaged object and the real dimensions of that object.

Figure 2.2 Types of lens distortion: a) barrel distortion, b) pincushion distortion

Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of the photographic lens. Radial distortion can usually be classified as one of two main types. In "barrel distortion", image magnification decreases with distance from the optical axis. The apparent effect is that of an image which has been mapped around a sphere (or barrel), as shown in Figure 2.2a. In "pincushion distortion" (Figure 2.2b), image magnification increases with distance from the optical axis. The visible effect is that lines that do not go through the centre of the image are bowed inwards, towards the centre of the image, like a pincushion. To correct such distortion, two approaches are used.

2.1.2 Telecentric Lens

Common lenses give different magnifications at different conjugates: as such, when the object is displaced, the size of its image changes almost proportionally with the object-to-lens distance. This is something anybody can easily experience in everyday life, for example when taking pictures with a camera equipped with a standard photographic lens [5].
  32. With telecentric lenses, the image size is left unchanged by object displacement, provided that the object stays within a certain range, often referred to as the Depth of Field or Telecentric Range. This is due to the particular path of the rays within the optical system: only ray cones whose principal ray is parallel to the opto-mechanical main axis are collected by the objective. For this reason, the front lens diameter must be at least as large as the object field diagonal.

This optical behaviour is obtained by positioning the stop aperture exactly on the focal plane of the front optical group (Figure 2.3): the incoming rays aim at the entrance pupil, which appears as being virtually placed at infinity. The name telecentric is derived from the two syllables "tele", which means "far" in ancient Greek, and "centre", which accounts for the pupil aperture, the actual centre of an optical system [34].

Figure 2.3 In a telecentric system, rays get into the optics only if parallel to the main axis

The telecentric lens yields a high-level correction of lens distortion; however, such a lens is far too expensive to be used for prototyping in moderately funded projects. Therefore, software-based lens error correction is considered a much more economic approach.

2.1.3 Software based correction

Correcting lens distortion by means of computer software has been a challenging problem over the last decade. The key advantage of utilizing software is the reduction of cost compared to the usage of a hardware telecentric lens. The challenge emerges from the complicated image processing techniques that must be combined to reduce such an error, which will be discussed in the next sections.
  33. 2.2 Proposed algorithm for correcting and calibrating lens distortion

Software-aided lens distortion correction and calibration has been developed to overcome the problem of the distorted, barrel-shaped ceramic plate image by means of image processing.

Figure 2.4 The proposed algorithm for perspective correction: Image Acquisition → Colour Plane Extraction → Automatic Thresholding → Particle Grouping → Particle Measurements → Distance Measurements → Mapping particles to their expected places → Displaying the corrected image

The proposed algorithm offers a low cost solution by implementing software manipulation of the image, as shown in Figure 2.4. The key objective is to estimate the original position of each pixel, which corrects the grid spacing and dot radii. The grid used has circles of 2 mm radius placed 5 mm apart from each other.

2.2.1 Colour plane extraction

Every digital coloured image is composed of rows and columns of pixels. Each pixel colour is formed by a combination of red, green and blue components called planes (8 bits for each colour plane). Depending on the value stored in each colour plane, the pixel gains its visual colour. The screenshot annotated by Figure 2.5 shows the resulting image after extracting the red component of each pixel, which produced the shown grey scale image; its pixels are resampled down from the original coloured image (24-bit depth) to 256 shades due to the 8-bit red colour portion. List 2.1 illustrates the pseudo code for extracting the red component of each pixel in the calibration grid. The red plane experimentally proved to yield higher contrast and hence better correction.
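The colour plane extraction described above can be sketched very compactly; this is an illustrative pure-Python version, assuming the colour image is a list of rows of (R, G, B) tuples, which is not necessarily the layout used by the thesis software.

```python
def extract_red_plane(rgb_img):
    """Return a grey-scale image built from the 8-bit red component
    of each pixel, as in List 2.1."""
    return [[pixel[0] for pixel in row] for row in rgb_img]
```

The resulting list of rows of single 0-255 values is the grey image that the automatic thresholding stage operates on next.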
  34. Figure 2.5 Screenshot of the calibration grid used in correction after red plane extraction

1. Start at the first row and first column of the image.
2. Get the red component of the current pixel located at the current row and column.
3. Save the new pixel value at the same index in the grey image.
4. Increment the row number.
5. If the row count reaches the last row of the image, increment the column count by one; else go to step 2.
6. If the column count reaches the last column of the image, stop; else go to step 2.

List 2.1 Colour plane extraction algorithm

2.2.2 Automatic Thresholding

Thresholding an image is meant to reduce the pixel representation from the 8 bits of a grey image to a single bit, producing a binary image. This can be achieved using several techniques. The method chosen for performing automatic thresholding was a clustering algorithm.

Clustering is considered one of the effective adaptive algorithms for binary thresholding. It sorts the histogram of the image into a discrete number of classes corresponding to the number of phases perceived in the image [35].
  35. The grey values are examined, and a barycentre is determined for each class. This process is repeated until the value representing the centre of mass of each phase or class is obtained.

Let p(x, y) be the value of the pixel located at the xth row and yth column. The threshold value is the pixel value k for which the following condition is true:

k = (μ1 + μ2) / 2    (2.1)

where μ1 is the mean of all pixel values that lie between 0 and k, and μ2 is the mean of all pixel values that lie between k + 1 and 255. In other words:

μ1 = Σ p(x, y) / (number of pixels with value ≤ k), for 0 ≤ p(x, y) ≤ k    (2.2)

and

μ2 = Σ p(x, y) / (number of pixels with value ≥ k + 1), for k + 1 ≤ p(x, y) ≤ 255    (2.3)

Figure 2.6 Adaptive thresholding using the clustering technique

After this stage (Figure 2.6), the grid is divided into two classes, the white pixels representing dots and the black pixels forming the background; hence the separation of the grid dots is achieved, as implemented using the algorithm depicted in List 2.2.
  36. 1. Initialize k = 1.
2. Calculate μ1 and μ2.
3. If k = (μ1 + μ2) / 2, then replace each pixel value less than k with zero, and with one every pixel whose grey value is equal to or above k.
4. If step 3 is false, increment the value of k and go to step 2.

List 2.2 Pseudo code of thresholding using the clustering algorithm

2.2.3 Particle grouping

After thresholding the image, it is necessary to find all the grid dots that appear after thresholding. To do so, it is required to group neighbouring particles that form a complete dot (circle).

The easiest way is to track the boundary of each particle and follow its neighbours until returning to the starting point (aka the seeding point). If tracking reaches an end point which differs from the starting one, this means an incomplete circle, which is later excluded from the calibration calculation.

Figure 2.7 Resulting image of boundary tracking of grid dots

Each circle is stored in an array to be treated independently. To correct the geometry and location of each dot, a few measurements are needed. It is observable that incomplete circles exist around the grid border, since the field of view of the camera doesn't cover the whole grid area.
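A runnable sketch of the clustering threshold of List 2.2 is given below. Instead of incrementing k one step at a time, this version scans all 256 grey levels and keeps the k closest to (μ1 + μ2)/2 from equation 2.1, which reaches the same fixed point; the function name and list-of-lists image layout are illustrative assumptions.

```python
def cluster_threshold(img):
    """Return (k, binary image) for an 8-bit grey image.

    k is the grey level best satisfying k = (mu1 + mu2) / 2, where
    mu1 is the mean of pixels <= k and mu2 the mean of pixels > k.
    """
    pixels = [p for row in img for p in row]
    best_k, best_gap = 0, float("inf")
    for k in range(256):
        low = [p for p in pixels if p <= k]
        high = [p for p in pixels if p > k]
        if not low or not high:
            continue  # both classes must be non-empty
        mu1 = sum(low) / len(low)
        mu2 = sum(high) / len(high)
        gap = abs(k - (mu1 + mu2) / 2)
        if gap < best_gap:
            best_k, best_gap = k, gap
    binary = [[1 if p > best_k else 0 for p in row] for row in img]
    return best_k, binary
```

For a bimodal grid image (dark background, bright dots) the threshold settles midway between the two class means, separating dots from background as in Figure 2.6.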
  37. Figure 2.7 shows the detected boundaries of each grid dot, while the algorithm for achieving such a task is illustrated in List 2.3; it finds the seeding point of each boundary and follows the boundary until reaching the seeding point again.

1. Start at the first row and first column of the image.
2. Search for the first non-zero pixel and label it S.
3. Search the 8 neighbouring pixels until finding another non-zero pixel.
4. Make the last found pixel C and start searching for non-zero pixels among its neighbours.
5. If all 8 neighbours are black, store it in the N array.
6. Repeat steps 3 and 4 until returning to pixel S.
7. Continue the search after pixel S.
8. Go to step 2 until the image is totally searched.

List 2.3 Boundary extraction of grid contours

2.2.4 Particles and distance measurements

After the boundaries are found, two kinds of measurements are carried out to correct the image:

- Area and perimeter of each dot: to correct distorted dots, the area is measured by counting the number of pixels inside each boundary region. Meanwhile, the perimeter is calculated as the number of pixels on the boundary.
- Compactness factor: the ratio of the perimeter squared over the area. The circle has the minimum compactness factor (4π). The compactness of each particle determines whether skewness has taken place, so that it can be corrected.
- Distance between the centres of adjacent particles: there are several methods of measuring the distance between pixels, the best known being the Euclidean distance, defined as:

Distance in pixels = √((x2 − x1)² + (y2 − y1)²)    (2.4)

where x1, y1 and x2, y2 are the coordinates of two adjacent centres, respectively. As shown in Figure 2.8, incomplete circles have odd centres and were thus excluded from the correction process (Figure 2.9).
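The two scalar measurements above can be sketched directly; the function names are illustrative assumptions, and the perimeter and area are assumed to be the pixel counts produced by the boundary-tracking stage.

```python
import math

def euclidean(c1, c2):
    """Distance in pixels between two centres (x, y) — equation 2.4."""
    return math.hypot(c2[0] - c1[0], c2[1] - c1[1])

def compactness(perimeter, area):
    """Perimeter squared over area; a perfect circle gives 4*pi,
    so larger values indicate a skewed (non-circular) dot."""
    return perimeter ** 2 / area
```

Comparing each dot's compactness against 4π and each centre-to-centre distance against the expected 5 mm grid pitch flags which dots must be reshaped and relocated in the correction stage.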
  38. Figure 2.8 Detecting centres to check the skewness of each dot

Centres can be determined by measuring the distance between two horizontal pixels on the boundary of an object and finding the intersecting point with any other two pixels on the same boundary.

These centres serve to correct the location of each dot with respect to its neighbouring peers. Meanwhile, the centre location, accompanied by the compactness factor of each dot, is employed to correct the location of each pixel in that dot.

Figure 2.9 Visualization of detected dots in the grid (barrel distortion)
  39. 2.2.5 Performing correction on each pixel

By evaluating the area, perimeter and skewness, it is possible to correct the distorted image by restoring each circle's area and position. This process is not difficult, since previous knowledge about the original grid paper is available in terms of the radius and location of each dot.

Figure 2.10 Pixel mapping to restore expected grid locations

1. Locate the first dot in the image.
2. Check its compactness.
3. Move pixels until the compactness factor is minimized.
4. Copy pixels around the boundary until reaching the expected area.
5. Store the new pixel locations and create the new pixel correction lookup table.
6. Repeat for each object until all objects in the image are finished.

List 2.4 Restoration of the grids' circular shape

1. Locate the first object centre at the top left of the image.
2. Measure the distance between the nearest horizontal and vertical objects.
3. Move the object and its surrounding objects to the expected distance.
4. Locate the next object.
5. Repeat steps 2 to 4 until all objects are processed.
6. Store the new pixel locations.

List 2.5 Retrieval of the grids' corrected locations
The procedure simply rearranges pixels to restore the expected grid distribution, as shown in Figure 2.10. List 2.4 states the algorithm for correcting the distorted shape of every grid, while List 2.5 details relocating the corrected grids to their original locations.

After this stage, any acquired image is mapped through the new correction map (i.e., lookup table), and measurement errors are thus minimized. Figure 2.11 shows an acquired image of a ceramic plate, which appears distorted in the same way as the acquired grid. This distortion causes drastic skewness, observable in the drawn objects near the image border.

Observable skewness

Figure 2.11 Acquired image of ceramic plate

Thereafter, the correction pixel map extracted from the calibration stage is applied to correct each pixel location, as shown in Figure 2.12. The minimization of skewness can be seen in the corrected image.
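A sketch of how such a correction lookup table could be applied to every acquired frame. The helper names and the identity-table starting point are illustrative assumptions, not the thesis's code; the calibration stage (Lists 2.4 and 2.5) would fill the table with the corrected source locations.

```python
def build_identity_lut(rows, cols):
    """Lookup table mapping each output pixel to a source pixel.
    Starts as the identity mapping; calibration would overwrite
    entries with the corrected locations."""
    return [[(r, c) for c in range(cols)] for r in range(rows)]

def apply_lut(img, lut):
    """Remap a grey-scale image (nested lists) through the correction
    lookup table. Pixels whose source falls outside the image are left
    black (0), matching the excluded border region described in the text."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            sr, sc = lut[r][c]
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r][c] = img[sr][sc]
    return out
```

With the identity table the image is unchanged; a table that shifts sources by one column moves the content and leaves a black stripe, analogous to the excluded area near the plate border.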
Corrected image

Figure 2.12 The corrected image after applying the algorithm

Figure 2.13 Screenshot of the developed software for calibration of ceramic plates
Since the calibration process performs pixel relocation, moving the dots near the border leaves a few empty regions (black areas) about 1 cm wide, running along each side of the ceramic plate. This area is excluded from the inspection stage.

The screenshot in Figure 2.13 depicts the developed software: an image is loaded offline, the proposed algorithm is executed, and the overall average execution time is displayed to measure the speed of the proposed algorithm.

2.3 Image acquisition

The first stage of any vision system is image acquisition. After the image has been obtained, various processing methods can be applied to perform the many different vision tasks required today [6]. However, if the image has not been acquired satisfactorily, the intended tasks may not be achievable, even with the aid of some form of image enhancement. The conducted survey covered the cameras commonly used in machine vision systems:

2.3.1 Analogue cameras

Analogue cameras generate a video signal in analogue format, which is digitized by an image acquisition device. The video signal is based on the television standard, making analogue the most common standard for representing video signals.

A charge-coupled device (CCD) is an array of hundreds of thousands of interconnected semiconductors. Each pixel is a solid-state, photosensitive element that generates and stores an electric charge when illuminated. The pixel is the building block of the CCD imager, a rectangular array of pixels on which an image of the scene is focused. In most configurations, the sensor includes circuitry that stores and transfers its charge to a shift register, which converts the spatial array of charges in the CCD imager into a time-varying video signal. Timing information for the vertical and horizontal positions combines with the sensor values to form the video signal [36].
Figure 2.14 CCD array configuration (sensors, vertical shift registers, output horizontal shift register)

For standard analogue cameras, the lines of the CCD are interlaced to increase the perceived image update rate. This means that the odd-numbered rows (the odd field) are scanned first, then the even-numbered rows (the even field).

2.3.2 Digital cameras

Digital cameras have several advantages over analogue cameras. Analogue video is more susceptible to noise during transmission than digital video. By digitizing at the camera level rather than at the image acquisition device, the signal-to-noise ratio is typically higher, resulting in better accuracy. Because digital cameras are not required to conform to television standards, they can offer larger image sizes, faster frame rates and higher pixel resolutions. Digital cameras come with 10- to 16-bit grey levels of resolution as a standard for machine vision, astronomy, microscopy and thermal imaging applications. Digital cameras use the same CCD-type devices for acquiring images as analogue cameras; they simply digitize the video before sending it to the frame grabber.
2.3.2.1 Camera Link

Camera Link is an interface specification for cables that connect digital cameras to image acquisition devices. It preserves the benefits of digital cameras, such as flexibility for many types of sensors, yet it has only a small connector and one or two identical cables, which work with all Camera Link image acquisition devices.

Camera Link greatly simplifies cabling, which can be a complex task when working with standard digital cameras. The constraint on using this type is its high cost, which may not suit the prototyping stage of the proposed research.

2.3.2.2 USB cameras

USB cameras have a major advantage over all other types: portability. In other words, they require no special hardware other than a dedicated USB port. Although the USB cameras offered on the market support high resolutions (up to 6 MP), their acquisition speed is poor.

2.3.2.3 IEEE 1394 (Firewire) cameras

IEEE 1394 is a serial bus standard used by many PC peripherals, including digital cameras. IEEE 1394 cameras use a simple, flexible, 4- or 6-wire power cable, and in some cases the bus can supply power to the camera. However, because IEEE 1394 is a shared bus, there is a bandwidth limitation of approximately 250 MB/s when no other device is connected to the bus. IEEE 1394 cameras also require processor control to move the image data, which limits the processor bandwidth available for image processing. The IEEE 1394 standard also includes functions for enumerating and setting up the camera capabilities.

A Firewire camera with 640 × 480 pixels was selected in the present work for the following reasons:

 It has a high frame rate (60 fps), which is sufficient to acquire images online without the need to stop the conveyor carrying the ceramic plates under inspection.
 Its cost is relatively moderate compared to other high-speed acquisition peers. For instance, it costs nearly 30% of the price of a Camera Link camera.

Table 2.1 Comparison of available cameras utilized in machine vision applications

                       Analogue          Parallel Digital   Camera Link        IEEE 1394
                       Cameras           Cameras            Cameras            Cameras
 Data Rate             Up to 4 MB/s      Up to 120 MB/s     Up to 510 MB/s     Up to 250 MB/s
 Spatial Resolution    Low               High               High               Medium
 Functionality         Simple and easy   Advanced           Advanced           Simple and easy
 Pixel Depth
 (per colour plane)    8-bit to 10-bit   Up to 16-bit       Up to 16-bit       Typically 8-bit
 Cabling               Simple BNC        Thicker, custom    Simple, standard   Simple, standard
                       cabling           cabling            cabling            cabling

2.4 Acquisition parameters

Image acquisition has several attributes to be set. These attributes are affected by external parameters such as the surrounding illumination and object reflectivity. Firewire cameras are accompanied by software Application Programming Interface (API) functions that allow programming languages such as Visual Basic and MATLAB to adjust these parameters. When performing parameter adjustment, the rule of thumb is to use colourful objects to obtain satisfactory results. The following subsections demonstrate the key attributes and the experimental setup of the developed flexible cell for acquisition from the chosen Firewire camera.

2.4.1 Image format

The utilized Firewire camera has several built-in formats. Each format comprises pre-processing settings such as the image size in pixels and the colour depth, as shown in Figure 2.15. The setup commenced with 656 × 490 pixels and was then optimized to 640 × 480 pixels.
2.4.2 Speed

The speed is the representation of the data rate mentioned earlier in the comparison table. It affects the maximum image size and the delivered frames per second. The camera has a speed of 100 Mb/s.

Figure 2.15 Image format and delivered speed rate

2.4.3 Bayer colour filter

A Bayer filter is a Colour Filter Array (CFA) for arranging RGB colour filters on a square grid of photosensors. Its particular arrangement of colour filters is used in most single-chip digital image sensors found in digital cameras, camcorders and scanners to create a colour image.
The filter pattern is 50% green, 25% red and 25% blue; hence it is also called GRGB, or another permutation such as RGGB, as in Figure 2.16a.

a) The Bayer arrangement of colour filters on the pixel array of an image sensor
b) Configuring the Bayer pattern and RGB colour gain of the camera mounted on the test rig

Figure 2.16 Bayer filter (concept and configuration)
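The RGGB arrangement can be illustrated with a small sketch that samples one colour channel per photosite from a full RGB image. This is illustrative only: a real sensor samples light at the photosite level, and the mosaic is later interpolated (demosaiced) back to a colour image.

```python
def bayer_mosaic(rgb):
    """Sample a full RGB image (nested lists of (R, G, B) tuples) into a
    single-channel RGGB Bayer mosaic: even row/even column sites take
    red, odd row/odd column sites take blue, and the remaining half of
    the sites take green (hence the 50% green share)."""
    rows, cols = len(rgb), len(rgb[0])
    mosaic = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if r % 2 == 0 and c % 2 == 0:
                channel = 0  # red site
            elif r % 2 == 1 and c % 2 == 1:
                channel = 2  # blue site
            else:
                channel = 1  # green site (50% of all sites)
            mosaic[r][c] = rgb[r][c][channel]
    return mosaic
```

On a 4 × 4 image this yields 4 red, 4 blue and 8 green sites, matching the 25/25/50 split stated above.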
This arrangement mimics the physiology of the human eye: the retina has more rod cells than cone cells, and rod cells are most sensitive to green light. These elements are referred to as sensor elements, sensels, pixel sensors, or simply pixels; the values they sense become, after interpolation, the image pixels.

2.4.4 Brightness and contrast

Brightness and contrast represent a way to adjust an image. They come from display technology, being common controls on all monitors. Colour brightness/contrast is similar to its grey-scale counterpart, in most cases being applied to all RGB channels. For a grey-scale image, brightness represents an image adjustment in which a constant value is added to all pixel values, while contrast is a multiplication of the pixel values by a constant. The values were chosen by trial and error, as depicted in Figure 2.17.

Figure 2.17 Brightness adjustment
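The two adjustments described above (add a constant for brightness, multiply by a constant for contrast) can be sketched for a grey-scale image as follows. The helper is hypothetical; clamping to the 8-bit range is added for safety.

```python
def adjust(pixels, brightness=0, contrast=1.0):
    """Grey-scale brightness/contrast adjustment: contrast multiplies
    each pixel value and brightness adds a constant, with the result
    clamped to the valid 8-bit range [0, 255]."""
    out = []
    for p in pixels:
        v = int(p * contrast + brightness)
        out.append(max(0, min(255, v)))  # clamp to [0, 255]
    return out
```

For example, a brightness of +20 shifts a pixel of 100 to 120, while a contrast of 2.0 doubles it to 200; values that would exceed 255 saturate.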
2.4.5 Gain

Since the camera used in the conducted research has an on-board 10-bit A/D converter, it offers gain control, by which the raw 10 bits of each colour plane are reduced to 8 bits. The gain controls which part of the A/D range (0–1023) is mapped to the 8-bit counterpart (Figure 2.18).

Figure 2.18 Gain control

A glossy cover with different colours was chosen to serve two objectives:

 To check the colour quality of the acquired image.
 To adjust the illumination level so as to minimize reflectivity without affecting the colour adjustment.
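A hedged sketch of such a 10-bit to 8-bit gain mapping. The windowed linear mapping and the `low`/`high` parameters are my simplification of what a gain control selects, not the camera's documented behaviour.

```python
def apply_gain(raw10, low, high):
    """Map a raw 10-bit A/D sample (0-1023) to 8 bits, where the gain
    setting is modelled as choosing which window [low, high] of the A/D
    range is stretched over 0-255. Samples outside the window saturate."""
    if raw10 <= low:
        return 0
    if raw10 >= high:
        return 255
    return (raw10 - low) * 255 // (high - low)
```

Mapping the full range (low = 0, high = 1023) simply rescales; narrowing the window amplifies a chosen portion of the signal at the cost of saturating the rest.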
2.5 Gamma correction

Gamma correction, gamma nonlinearity, gamma encoding, or often simply gamma, is the name of a nonlinear operation used to code and decode luminance values in video or still-image systems. Gamma correction is defined by the following power-law expression:

V_out = V_in^γ    (2.5)

where the input voltage from the acquisition camera (V_in) and the output values displayed on the screen (V_out) are non-negative real values, typically in a predetermined range such as 0 to 1 [36]. To implement gamma correction digitally, as shown in Figure 2.20, each pixel value in every colour plane (red, green and blue) is raised to the correction factor (γ) using the following formula:

P_out = 255 × (P_in / 255)^γ    (2.6)

The range of values used for gamma depends on the application. The value selected for correcting the acquired ceramic image was found to be 0.7, based on trial and error in the three colour planes of the image.

For i = 0 To MaxRowCount
    For j = 0 To MaxColumnCount
        colour = GetPixelColour(i, j)
        newRed = 255 * (Red(colour) / 255) ^ gammaCorrection
        newGreen = 255 * (Green(colour) / 255) ^ gammaCorrection
        newBlue = 255 * (Blue(colour) / 255) ^ gammaCorrection
        PutPixelColour(i, j) = RGB(newRed, newGreen, newBlue)
    Next j
Next i

Figure 2.19 Gamma correction of the acquired ceramic image (γ = 0.7): a) response of gamma correction; b) pseudo code of gamma correction
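The pseudo code can be expressed as a runnable Python sketch of Eq. 2.6, assuming the image is a nested list of 8-bit (R, G, B) tuples; the function name is illustrative.

```python
def gamma_correct(img, gamma=0.7):
    """Apply Eq. 2.6 to every channel of every pixel:
    P_out = 255 * (P_in / 255) ** gamma.
    gamma = 0.7 is the value selected by trial and error in the text."""
    return [
        [tuple(round(255 * (ch / 255) ** gamma) for ch in px) for px in row]
        for row in img
    ]
```

Note that pure black and pure white are fixed points of the power law, while mid-tones are brightened when γ < 1.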
a) Raw image   b) Gamma correction

Figure 2.20 Implementation of gamma correction

As shown in Figure 2.20, gamma correction drastically enhanced the acquired image, which had appeared washed out due to the gamma effect.
Soft Partitioning and Colour Grouping

In this chapter, the fuzzy c-mean method is introduced. The proposed algorithm for colour sorting and grouping is also discussed in detail.

3.1 Clustering

Cluster analysis encompasses a number of different algorithms and methods for grouping objects of similar type into respective categories. A general question facing researchers in many areas of inquiry is how to organize observed data into meaningful structures, that is, how to develop taxonomies or classes [37]. In other words, cluster analysis is an exploratory data analysis tool which aims at sorting different objects into groups in such a way that the degree of association between two objects is maximal if they belong to the same group and minimal otherwise, as shown in Figure 3.1. Cluster analysis can be used to discover structures in data without providing an explanation or interpretation; it simply discovers structures without explaining why they exist.

Figure 3.1 Clustering is performed by grouping similar data based on common criteria

There are countless examples where clustering plays an important role. For instance, biologists have to organize the different species of animals before a meaningful description of the differences between animals is possible.
According to the modern system employed in biology, man belongs to the primates, the mammals, the amniotes, the vertebrates and the animals. The higher the level of aggregation, the less similar are the members of the respective class.

Clustering is one of the most effective methods for classification nowadays. It has been employed in coal classification for thermal power stations [38], in electroencephalogram signal classification [39], and even as an efficient algorithm for estimating insect invasions [40] and for bearing fault detection [41].

3.2 Problem of clustering

Consider a dataset X consisting of data points x_i (1 ≤ i ≤ N) representing objects, patterns, pixels, etc., where x_i = {a_i1, a_i2, ..., a_id} and each element of x_i denotes a nominal attribute (colour, surface roughness, shape, etc.).

The problem is to find adequate criteria, called nodes, around which to group the data points, and thereby to evaluate the overall classification of each data point.

3.3 Advantages of using clustering

Clustering can be a beneficial solution whenever scalability is required. The algorithms are also capable of dealing with different types of data and can therefore handle high dimensionality [32, 37].

3.4 Constraints of using clustering

One of the major restrictions in using clustering techniques is the time-consuming process of dealing with large data sets. This behaviour can be prohibitive in time-critical applications such as web search engines. Meanwhile, the effectiveness of the classification depends on a proper definition of similarity [42].
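As a minimal illustration of the grouping problem stated above, each point can be crisply assigned to its nearest node. The fuzzy c-mean method introduced in this chapter replaces this hard assignment with degrees of membership; the data and names below are hypothetical.

```python
import math

def assign_to_nodes(points, nodes):
    """Hard (crisp) assignment: each data point x_i is labelled with the
    index of the nearest node (cluster centre) under Euclidean distance.
    This is the baseline that fuzzy c-mean softens into memberships."""
    labels = []
    for p in points:
        dists = [math.dist(p, n) for n in nodes]
        labels.append(dists.index(min(dists)))
    return labels
```

For example, with nodes at (0, 0) and (9, 9), the points (0, 0) and (1, 1) fall in the first group and (10, 10) in the second.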
3.5 Methodologies of distance measurement

An important step in most clustering is selecting a distance measure, which determines how the similarity of two elements is calculated. This choice influences the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another.

3.5.1 Common distance functions

The following review covers the distance measures most commonly used in distance-based clustering:

3.5.1.1 The Euclidean distance

Also called the 2-norm distance. A review of cluster analysis in pattern classification research found that the most common distance measure in published studies in that area is the Euclidean distance or the squared Euclidean distance [43]. The Euclidean distance between points p and q is the length of the line segment connecting them. In Cartesian coordinates, if p = (p_1, p_2, ..., p_n) and q = (q_1, q_2, ..., q_n) are two points in Euclidean n-space, then the distance from p to q is given by:

d(p, q) = √(Σ_{i=1}^{n} (p_i − q_i)²)    (3.1)

According to this, in a 2-dimensional space the distance between the point (x = 1, y = 0) and the origin (x = 0, y = 0) is 1 under the usual norms, but the distance between the point (x = 1, y = 1) and the origin is √2.
3.5.1.2 The Manhattan distance (aka taxicab norm or 1-norm)

This metric was derived by Hermann Minkowski in the 19th century. In comparison to Euclidean geometry, the usual distance function is replaced by a new metric in which the distance between two points is the sum of the absolute differences of their coordinates:

d(p, q) = Σ_{i=1}^{n} |p_i − q_i|    (3.2)

Figure 3.2 Taxicab geometry vs. Euclidean distance

Recalculating the example of the previous section (Section 3.5.1.1), the distance measured with the Manhattan technique is 2. As shown in Figure 3.2, the red, blue and yellow lines have the same length (12) in taxicab geometry for the same route, whereas in Euclidean geometry the green line has length 6 × √2 ≈ 8.48 and is the unique shortest path.

3.5.1.3 The Hamming distance

Figure 3.3 Example of Hamming distance
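The three metrics reviewed in this section can be written directly from Eqs. 3.1 and 3.2 and checked against the worked examples in the text; function names are illustrative.

```python
import math

def euclidean(p, q):
    """Eq. 3.1: the 2-norm distance between n-dimensional points."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def manhattan(p, q):
    """Eq. 3.2: the taxicab (1-norm) distance."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def hamming(p, q):
    """Number of positions at which two equal-length sequences differ."""
    return sum(1 for pi, qi in zip(p, q) if pi != qi)
```

These reproduce the text's examples: the point (1, 1) is √2 from the origin by Euclidean distance but 2 by Manhattan distance, and a 6-block taxicab route of length 12 corresponds to a Euclidean diagonal of 6√2 ≈ 8.48.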
