The document discusses machine vision and its key elements. It defines machine vision as using imaging technologies and methods for automatic inspection and analysis in industrial applications like quality control. The main elements of a machine vision system are illumination, imaging, and image processing. Different types of lighting techniques are described like ring light, backlight, and laser light. Digital camera concepts involving sensors, lenses, and focal length are also covered.
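The focal-length and sensor concepts mentioned above can be made concrete with a small worked calculation. This is a hedged sketch using the pinhole/thin-lens approximation; the sensor width, focal length, and working distance are illustrative numbers, not values from the document.

```python
# Pinhole-camera field-of-view estimate: how sensor size, focal length, and
# working distance relate in a machine vision setup.

def horizontal_fov_mm(sensor_width_mm: float, focal_length_mm: float,
                      working_distance_mm: float) -> float:
    """Approximate horizontal field of view at the object plane.

    Uses the thin-lens magnification m = f / (d - f); FOV = sensor_width / m.
    """
    magnification = focal_length_mm / (working_distance_mm - focal_length_mm)
    return sensor_width_mm / magnification

# Example: 1/2" sensor (6.4 mm wide), 25 mm lens, object 500 mm away.
fov = horizontal_fov_mm(6.4, 25.0, 500.0)   # → 121.6 mm of scene width
```

A wider sensor or shorter lens sees more of the scene at the same working distance, which is the trade-off a vision engineer balances when choosing optics.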
This document discusses omnidirectional vision systems and their potential applications in manufacturing. It begins with an overview of vision systems and outlines new technologies like 3D omnidirectional systems. It then describes how such systems work using multiple cameras and mirrors to achieve 360 degree views. Existing applications in robots, drones, and automated assembly are reviewed. Finally, the document proposes ways omnidirectional vision could improve safety, quality control, and efficiency in manufacturing applications like automated guided vehicles.
The document discusses reverse engineering techniques. It describes reverse engineering as generating a CAD model from an existing physical part to reconstruct it. There are contact-based methods like coordinate measuring machines and non-contact methods like 3D scanners. 3D scanners use light or lasers to scan objects and generate CAD designs without contact. Reverse engineering is used to manufacture replacement parts, redesign products lacking documentation, or create cheaper alternatives.
Night vision technology allows users to see in dark environments. It has both biological and technical forms, with the technical using either image intensification or thermal imaging. Image intensifiers amplify available light through a vacuum tube, while thermal imaging detects infrared radiation. Night vision was originally developed for the military but is now used for hunting, wildlife observation, surveillance, security, and navigation. It has progressed through several generations with improvements in gain, resolution, and low-light performance. Common night vision equipment includes scopes, goggles, and cameras.
Computer architecture for vision system (AkashPatil334)
Computer vision involves acquiring, processing, analyzing and understanding digital images to extract information from the real world. It has evolved from early research to now being used in thousands of applications across many industries. The architecture for computer vision systems requires different computational resources to handle different tasks efficiently, from low-level operations on images to high-level decision making. Key components include cameras, processors, and software like OpenCV and CUDA. Common applications include automotive safety systems, surveillance, medical imaging, and more.
Ultra-high focusing speed: up to 12,000 Hz
Long-term reliability: more than 1 billion operating cycles
Ultra-small power consumption: less than 1 mA
Shock resistance: more than 5,000 G
Operating temperature range: -30 to 100 °C
Single-camera-based 3D camera
Volumetrically 25% smaller than modules based on current technology
Real-time multi-focusing
Applicable to volumetric 3D displays, compact autofocus, and 3D camera modules
A toolmaker's microscope is a multi-functional microscope used primarily for measuring tools and apparatus in manufacturing industries. It can measure the shape, size, angle, and position of small components using a CCD camera connected to computer software. Common types include vernier traveling and 20x-power measuring microscopes. It is used to examine cutting-tool edges, verify surface finishes and defects, inspect threads, and verify small-part alignments. The 115 mm diameter rotary measuring stage moves in the X and Y directions using two micrometer heads. The frame provides rigidity and stability, with an overall height of 490 mm and a weight of 36 lb.
Inspection Principles and practices, Inspection technologies.pptx (SonuSteephen)
This document discusses various inspection principles, practices, and technologies. It begins by describing inspection techniques that are either manual or rely on modern machines like CMMs. Key aspects of metrology and desirable instrument characteristics are outlined. The document then differentiates between contact and non-contact inspection, noting advantages of non-contact methods. Specific technologies are examined, including CMMs, machine vision, optical tools, and non-optical techniques using other sensor types.
Seminar on night vision technology ppt (deepakmarndi)
PPT on night vision technology, prepared under the guidance of a teacher; an accompanying report, organized according to the PPT, is also provided.
Night vision technology allows humans to see in low-light or no-light conditions. It works through either image intensification or thermal imaging. Image intensification amplifies available light using microchannel plates, while thermal imaging detects infrared radiation emitted by warm objects. Common night vision devices include scopes, goggles, and cameras. Generations of the technology have improved from early vacuum tube designs to current microchannel plate designs with greater amplification and longer lifespans, enabling low-light or even starlight operation. Night vision finds applications in military, security, hunting and wildlife observation.
This document discusses machine vision systems and their components and applications. It describes the basic process of image acquisition, digitization, processing, analysis and interpretation. It outlines the main types of vision systems and cameras used. It also discusses different lighting techniques and image processing methods like segmentation, feature extraction and pattern recognition. Finally, it notes that machine vision is widely used for industrial inspection to automate tasks and improve efficiency.
The document presents a project that aims to neutralize image capturing devices. It discusses detecting cameras using LEDs and image processing, then disabling the camera with a laser. The system works by identifying cameras using their CCD sensor properties when exposed to light. Images are processed to locate cameras then a laser is aimed at the camera lens to overexpose the image sensor. The document outlines the system components, working, safety measures and potential for future development.
Machine vision uses video cameras, lighting, and image processing to analyze physical objects. A video camera's CCD converts light into electrical signals, which are converted to digital signals through analog-to-digital conversion. Image processing includes data reduction, segmentation, feature extraction, and object recognition to analyze images and identify objects. Machine vision is commonly used for industrial inspection and automation applications with robots.
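The segmentation and feature-extraction steps described above can be sketched in a few lines. This is a minimal illustration on a tiny synthetic grayscale image; the threshold value and the image data are assumptions for the example, not from the document.

```python
# Threshold segmentation followed by centroid extraction, the core of many
# machine vision inspection pipelines.

def segment(image, threshold=128):
    """Binarize: 1 where a pixel is brighter than the threshold, else 0."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def centroid(binary):
    """Feature extraction: centroid (row, col) of the foreground pixels."""
    pts = [(r, c) for r, row in enumerate(binary)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

image = [
    [10,  10,  10, 10],
    [10, 200, 210, 10],
    [10, 220, 205, 10],
    [10,  10,  10, 10],
]
binary = segment(image)
cy, cx = centroid(binary)   # → (1.5, 1.5): center of the bright 2x2 block
```

In a real system the same two steps run on camera frames, and the extracted features (centroid, area, orientation) feed the recognition or inspection logic.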
OPTICAL MICROSCOPY AND COORDINATE MEASURING MACHINE (sangeetkhule)
Introduction
Working principle
Classification
Construction and working
Different types of optical scopes
Process capabilities and analysis
Testing
Process parameters
Components and machine structure
Confocal laser scanning microscopy
Microscopic
Advantages
Applications
Advancement in CMM
Machine characteristics
Process parameters of CMM
Animation video
Research papers
Bar graphs and tables
Conclusion
References
This document discusses night vision technology, which allows vision in low-light or no-light conditions. It describes two main methods: image intensification and thermal imaging. Image intensification amplifies available light, while thermal imaging detects infrared radiation emitted as heat. The document outlines various night vision devices and their applications in fields like the military, hunting, and security, and concludes that night vision technology has become widely accessible, requires little skill to use, and has helped reduce accidents.
This document discusses applications of machine vision in industry. It begins by defining machine vision as applying computer vision techniques using additional hardware for tasks like industrial automation. Common applications of machine vision include product inspection in manufacturing to automate and improve the accuracy and efficiency of inspection. The document then discusses the typical components of a machine vision system and how it operates by acquiring images, processing them, and analyzing patterns for tasks like object detection. Finally, it provides several examples of machine vision applications in various industries like automotive, food processing, and rail transport.
This document contains questions and answers about computer graphics. It begins by defining computer graphics as pictures and movies created using computers, usually referring to image data created with specialized graphics hardware and software. Applications of computer graphics mentioned include computer-aided design, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces. Key terms like pixel, resolution, aspect ratio, and persistence are also defined. The document then discusses video display devices and CRTs, and explains raster scan and random scan display systems. Color CRTs using beam penetration and shadow mask techniques are also covered.
1) Machine vision uses digital cameras and image processing to automate production processes and quality inspections by replacing manual methods.
2) A machine vision system involves four steps: imaging, image processing/analysis, communicating results to the control system, and taking appropriate action.
3) The main components of a machine vision system are cameras, lighting systems, frame grabbers, and computer/software to process images and analyze results.
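The four steps listed above (imaging, processing/analysis, communicating results, acting) can be sketched as a minimal inspection loop. This is a hedged skeleton: every function is an illustrative stand-in (the "frame grabber" returns a hard-coded frame, and the pass/fail rule is invented for the example), not the document's actual system.

```python
# Skeleton of a four-step machine vision cycle: acquire, analyze,
# communicate, act.

def acquire_image():
    # Stand-in for a camera + frame grabber: a tiny grayscale frame.
    return [[30, 200], [210, 25]]

def inspect(frame, threshold=128):
    # Processing/analysis: pass if enough bright (part-present) pixels.
    bright = sum(px > threshold for row in frame for px in row)
    return bright >= 2

def communicate(result):
    # Stand-in for signaling a PLC or line controller.
    return "PASS" if result else "FAIL"

def run_once():
    frame = acquire_image()
    result = inspect(frame)
    return communicate(result)   # the control system then takes action

verdict = run_once()   # → "PASS"
```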
Night vision technology allows humans to see in the dark using either biological or technical methods. Technical night vision uses image intensifiers that amplify available light or thermal imaging that detects infrared radiation. Night vision devices include scopes, goggles, and cameras and have progressed through several generations with improved amplification and operating life. Key applications of night vision technology include military operations, hunting, security, and wildlife observation.
Improving image resolution through the CRA algorithm involved recycling proce... (csandit)
Image processing concepts are widely used in the medical field. Digital images are prone to many types of noise, which arises from errors in the image acquisition process and produces pixel values that do not reflect the true intensities of the real scene. Many researchers work on the analysis and processing of multi-dimensional images, and because previous work has not fully solved the problem, performance improvement remains an active topic. In this paper we contribute novel research on analyzing and improving image resolution. We propose the Concede Reconstruction Algorithm (CRA) with an involved recycling process to reduce the remaining problems in this part of image processing. Researchers have responded favorably to using the CRA algorithm.
Eye Tracking Based Human - Computer Interaction (Sharath Raj)
This presentation explains how eye tracking works and how the Hough circle detection algorithm is used to detect the iris.
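The circle-detection idea behind iris localization can be sketched with Hough voting: each edge pixel votes for candidate centers lying one radius away, and the cell with the most votes wins. This is a tiny, hedged illustration on hand-picked edge points with a known radius, not the presentation's actual algorithm (which would scan edge maps of eye images over a range of radii).

```python
# Hough circle voting for a fixed radius on a small integer grid.
from math import hypot

def hough_circle_center(edge_points, radius, grid=10, tol=0.5):
    """Return the grid cell that the most edge points place one radius away."""
    votes = {}
    for (ex, ey) in edge_points:
        for cx in range(grid):
            for cy in range(grid):
                if abs(hypot(ex - cx, ey - cy) - radius) <= tol:
                    votes[(cx, cy)] = votes.get((cx, cy), 0) + 1
    return max(votes, key=votes.get)

# Edge pixels sampled from a circle of radius 2 centred at (4, 4).
edges = [(2, 4), (6, 4), (4, 2), (4, 6)]
center = hough_circle_center(edges, radius=2)   # → (4, 4)
```

Real implementations also vote over a range of radii and use gradient direction to prune candidates, but the accumulator idea is the same.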
https://www.picostica.com
This document describes a vision assisted pick and place robotic arm guided by image processing concepts for object sorting. It discusses introducing a robotic arm that can pick objects from one location and place them in another using machine vision. The document covers concepts like image acquisition, processing, object identification, and control signal transfer. It provides details on how a webcam captures images that are converted to grayscale and binary before edge detection and other processing to find object boundaries and centroids. This allows generating control signals to guide the robotic arm via a controller. Applications are in automated industries like assembly and potential enhancements are also discussed.
This document describes a vision assisted pick and place robotic arm guided by image processing concepts for object sorting. It discusses introducing a robotic arm that can pick objects from one location and place them in another using machine vision. The document covers key concepts like image acquisition, processing, object identification, and control signal transfer. It provides details on how a webcam captures images that are converted to grayscale and binary before edge detection and other processing to find object boundaries and centroids. Control signals are sent via an interface to guide the robotic arm based on image analysis. Potential applications and advantages like consistency and hazardous task handling are also summarized.
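The final step described above, turning an object's pixel centroid into a control signal for the arm, amounts to a coordinate mapping. This is a hedged sketch: the scale and offset calibration values are illustrative assumptions (in practice they are obtained by imaging markers at known workspace positions), not parameters from the document.

```python
# Map an image-space centroid (pixels) to a workspace target (millimetres)
# using a linear calibration: world = origin + pixel * scale.

def pixel_to_world(px, py, mm_per_px=0.5, origin_mm=(100.0, 50.0)):
    """Convert image coordinates to robot workspace coordinates."""
    ox, oy = origin_mm
    return (ox + px * mm_per_px, oy + py * mm_per_px)

# Centroid found by the vision step, in pixels:
target = pixel_to_world(120, 40)   # → (160.0, 70.0) mm
```

The controller then drives the arm to `target` to pick the object; a full system would also correct for lens distortion and camera tilt.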
Dip lect2 - Machine Vision Fundamentals (Abdul Abbasi)
Digital image processing and machine vision involve acquiring images using cameras and sensors, preprocessing the images by enhancing contrast and removing noise, segmenting images into meaningful regions, extracting features from the regions, and classifying or interpreting the images. Machine vision has advantages over human vision such as the ability to work in hazardous environments, precisely measure objects, and perform repetitive tasks consistently.
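The contrast-enhancement preprocessing step mentioned above can be illustrated with linear contrast stretching, which rescales a grayscale image to the full 0 to 255 range. The sample image is an illustrative assumption for the example.

```python
# Linear contrast stretching: map the darkest pixel to 0 and the brightest
# to 255, spreading everything else proportionally in between.

def stretch_contrast(image):
    lo = min(px for row in image for px in row)
    hi = max(px for row in image for px in row)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    return [[round((px - lo) * 255 / (hi - lo)) for px in row]
            for row in image]

dim = [[50, 60], [70, 100]]           # low-contrast input
bright = stretch_contrast(dim)        # → [[0, 51], [102, 255]]
```

Histogram equalization is the more common production choice, but the stretch makes the intent of the preprocessing stage easy to see.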
The first of its kind, this project seeks to design and implement a low-cost spin coater specifically for multi-layer, all-printed device fabrication. The proposed method involves layering and patterning on flexible substrates; to ensure cost-effectiveness, we used an HDD base as the foundation for the spin coating required in this quest to establish the flexible electronics industry in developing countries like Pakistan.
Computer architecture for vision systems (utsav patel)
Computer vision systems analyze visual data from cameras. They typically involve cameras, local processors, network connections, and cloud backends. Computer vision has applications in consumer products, automotive, medical, defense, retail, gaming, security, education, and transportation. The architecture of computer vision systems involves different processors for low-level, medium-level, and high-level operations. Common camera sensors are CCD and CMOS, which use different technologies to capture digital images. Computer vision has applications in automotive safety, object tracking, hazardous area scanning, and biological imaging. Future developments include more heterogeneous and distributed hardware with higher-level programming interfaces.
Introduction: e-waste – definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on the environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
Similar to Chapter 1. Introduction to machine vision.pdf (20)
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses of human lives, property, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection gathered information on three key road events (normal street driving, speed bumps, and circular yellow speed bumps) and three aggressive driving actions (sudden start, sudden stop, and sudden entry). The gathered data is processed and analyzed using a machine learning system designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, requiring 2.6 kB of peak RAM and 139.9 kB of program flash memory, making it suitable for resource-constrained embedded systems.
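The accuracy, precision, and recall figures quoted above come from standard confusion-matrix definitions; this generic sketch shows how they are computed. The counts are made up for illustration and are not the paper's data.

```python
# Classification metrics from a binary confusion matrix:
#   tp = true positives, fp = false positives,
#   fn = false negatives, tn = true negatives.

def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # all correct / all cases
    precision = tp / (tp + fp)                   # detections that were right
    recall = tp / (tp + fn)                      # true events that were caught
    return accuracy, precision, recall

# Illustrative counts for one class, e.g. "sudden stop" events:
acc, prec, rec = metrics(tp=90, fp=10, fn=10, tn=90)   # → (0.9, 0.9, 0.9)
```

For a multi-class system like the one described, these are typically computed per class and then averaged.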
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
1. IMAGE PROCESSING IN MECHATRONICS
Machine Vision
1
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY
Lecturer: Dr. Nguyễn Thành Hùng
Department of Mechatronics, School of Mechanical Engineering
Hanoi, 2021
2. 2
Chapter 1. Introduction to machine vision
1. Introduction
2. Basic elements of machine vision system
3. Classification
4. Technical specifications
5. Designing a Machine Vision System
Image segmentation
3. 3
1. Introduction
❖Definition
➢Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry.
➢Machine vision is a term encompassing a large number of technologies,
software and hardware products, integrated systems, actions, methods and
expertise.
➢Machine vision as a systems engineering discipline can be considered distinct
from computer vision, a form of computer science.
➢It attempts to integrate existing technologies in new ways and apply them to
solve real world problems.
4. 4
1. Introduction
❖Definition
➢The overall machine vision process includes planning the details of the
requirements and project, and then creating a solution. During run-time, the
process starts with imaging, followed by automated analysis of the image and
extraction of the required information.
11. 11
Chapter 1. Introduction to machine vision
1. Introduction
2. Basic elements of machine vision system
3. Classification
4. Technical specifications
5. Designing a Machine Vision System
12. 12
2. Basic elements of machine vision system
2.1. Overview
2.2. Illumination
2.3. Imaging
2.4. Image processing and analysis
13. 13
2. Basic elements of machine vision system
❑ 2.1. Overview
http://www.digikey.com/en/articles/techzone/2012/jan/versatile-leds-drive-machine-vision-in-automated-manufacture
14. 14
2. Basic elements of machine vision system
❑ 2.1. Overview
OK vs. NG (no good)
How to automatically detect the defect?
15. 15
2. Basic elements of machine vision system
❑ 2.1. Overview
❖Illumination
➢Illumination is the way an object is lit up; lighting is the actual lamp that generates the illumination.
❖Imaging (Camera and lens)
➢The term imaging defines the act of creating an image.
16. 16
2. Basic elements of machine vision system
❑ 2.1. Overview
❖Image processing and analysis
➢This is where the desired features are extracted automatically by algorithms and
conclusions are drawn.
➢A feature is the general term for information in an image, for example a
dimension or a pattern.
➢Algorithms are also referred to as tools or functions.
17. 17
2. Basic elements of machine vision system
2.1. Overview
2.2. Illumination
2.3. Imaging
2.4. Image processing and analysis
18. 18
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖The goal of lighting in machine vision is to obtain a robust application by:
➢Enhancing the features to be inspected.
➢Assuring high repeatability in image quality.
SICK IVP, “Machine Vision Introduction,” 2006.
19. 19
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Illumination Principles
Light can be described as waves with three properties:
➢Wavelength or color, measured in nm (nanometers)
➢Intensity
➢Polarization.
SICK IVP, “Machine Vision Introduction,” 2006.
20. 20
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Illumination Principles
➢ The spectral response of a sensor is the sensitivity curve for different wavelengths. Camera sensors can have a different spectral response than the human eye.
Spectral response of a gray scale CCD sensor. Maximum sensitivity is for green (500 nm).
SICK IVP, “Machine Vision Introduction,” 2006.
21. 21
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Illumination Principles
➢ The optical axis is an imaginary line through the center of the lens, i.e. the direction the camera is looking.
SICK IVP, “Machine Vision Introduction,” 2006.
22. 22
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Illumination Principles
SICK IVP, “Machine Vision Introduction,” 2006.
23. 23
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Ring Light
➢ A ring light is mounted around the optical axis of the lens, either on the camera or
somewhere in between the camera and the object.
SICK IVP, “Machine Vision Introduction,” 2006.
24. 24
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Ring Light
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Easy to use
• High intensity and short exposure time possible
Cons
• Direct reflections, called hot spots, on reflective surfaces
Ambient light. Ring light: the printed matte surface is evenly illuminated. Hot spots appear on shiny surfaces (center), one for each of the 12 LEDs of the ring light.
25. 25
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Spot Light
➢ A spot light has all the light emanating from one direction that is different from the
optical axis. For flat objects, only diffuse reflections reach the camera.
SICK IVP, “Machine Vision Introduction,” 2006.
Diagram: a spot light illuminating an object; mainly diffuse reflections reach the camera.
26. 26
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Spot Light
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• No hot spots
Cons
• Uneven illumination
• Requires intense light since it is dependent on diffuse reflections
27. 27
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Backlight
➢ With the backlight principle, the object is illuminated from behind to produce a contour or silhouette.
SICK IVP, “Machine Vision Introduction,” 2006.
28. 28
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Backlight
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Very good contrast
• Robust to texture, color, and ambient light
Cons
• Dimensions must be larger than the object
Ambient light. Backlight: enhances contours by creating a silhouette.
29. 29
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Darkfield
➢ Darkfield means that the object is illuminated at a large angle of incidence.
SICK IVP, “Machine Vision Introduction,” 2006.
30. 30
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Darkfield
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Good enhancement of scratches, protruding edges, and dirt on surfaces
Cons
• Mainly works on flat surfaces with small features
• Requires small distance to object
• The object needs to be somewhat reflective
Ambient light. Darkfield: enhances relief contours, i.e., lights up edges.
31. 31
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> On-Axis Light
➢ When an object needs to be illuminated parallel to the optical axis, a semi-transparent mirror is used to create an on-axis light source.
SICK IVP, “Machine Vision Introduction,” 2006.
32. 32
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> On-Axis Light
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Very even illumination, no hot spots
• High contrast on materials with different reflectivity
Cons
• Low intensity requires long exposure times
• Cleaning of the semi-transparent mirror (beam-splitter) is often needed
Inside of a can as seen with ambient light. Inside of the same can as seen with a coaxial (on-axis) light.
33. 33
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Dome Light
➢ The dome light produces the needed uniform light intensity inside of the dome
walls.
SICK IVP, “Machine Vision Introduction,” 2006.
34. 34
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Dome Light
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Works well on highly reflective materials
• Uniform illumination, except for the darker middle of the image; no hot spots
Cons
• Low intensity requires long exposure times
• Dimensions must be larger than the object
• Dark area in the middle of the image
35. 35
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Dome Light
SICK IVP, “Machine Vision Introduction,” 2006.
Ambient light: on top of the key numbers is a curved, transparent material causing direct reflections. The direct reflections are eliminated by the dome light’s even illumination.
36. 36
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Laser Light
➢ A 2D camera with a laser line can provide a cost-efficient solution for low-contrast and 3D inspections.
SICK IVP, “Machine Vision Introduction,” 2006.
Pros
• Robust against ambient light
• Allows height measurements (z parallel to the optical axis)
• Low-cost 3D for simpler applications
Cons
• Laser safety issues
• Data along y is lost in favor of z (height) data
• Lower accuracy than 3D cameras
37. 37
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Types >> Laser Light
SICK IVP, “Machine Vision Introduction,” 2006.
Ambient light. Contact lens containers: the left is facing up (5 mm high at the cross) and the right is facing down (1 mm high at the minus sign). The laser line clearly shows the height difference.
38. 38
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> Strobe or Constant light
➢ A strobe light is a flashing light.
➢ Strobing allows the LED to emit a higher light intensity than is achieved with constant light, by briefly overdriving (“turbo charging”) the LED.
SICK IVP, “Machine Vision Introduction,” 2006.
39. 39
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> Diffusor Plate
➢ The diffusor plate converts direct light into diffuse light.
➢ The purpose of a diffusor plate is to avoid bright spots in the image, caused by the direct light's reflections in glossy surfaces.
SICK IVP, “Machine Vision Introduction,” 2006.
Two identical white bar lights, with diffusor plate (top) and without (bottom).
40. 40
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> LED Color
➢ LED lighting comes in several colors. Most common are red and green. There are also LEDs in blue, white, UV, and IR.
➢ Different objects reflect different colors. A blue object appears blue because it reflects blue light.
➢ Therefore, if blue light is used to illuminate a blue object, it will appear bright in a gray scale image.
SICK IVP, “Machine Vision Introduction,” 2006.
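As a rough sketch of the slide's point (the reflectance values are illustrative assumptions, not from the source), grayscale brightness can be modeled as the channel-wise overlap between the illumination color and the surface reflectance:

```python
# Hedged sketch: grayscale brightness as the overlap between illumination
# spectrum and surface reflectance, using coarse R/G/B channels.
def gray_level(illum_rgb, reflect_rgb):
    """Relative gray value: channel-wise product of light and reflectance."""
    return sum(i * r for i, r in zip(illum_rgb, reflect_rgb))

blue_object = (0.1, 0.1, 0.9)   # reflects mostly blue (assumed values)
blue_light  = (0.0, 0.0, 1.0)
red_light   = (1.0, 0.0, 0.0)

print(gray_level(blue_light, blue_object))  # 0.9 -> appears bright
print(gray_level(red_light, blue_object))   # 0.1 -> appears dark
```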
41. 41
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> LED Color
SICK IVP, “Machine Vision Introduction,” 2006.
42. 42
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> Optical Filters
➢ An optical filter is a layer in front of the sensor or lens that absorbs certain wavelengths (colors) or polarizations.
➢ Two main optical filter types are used for machine vision:
1. Polarization filter: only transmits light with a certain polarization. Light changes its polarization when it is reflected, which allows us to filter out unwanted reflections.
2. Band-pass filter: only transmits light of a certain color, i.e. within a certain wavelength interval. For example, a red filter only lets red light through.
SICK IVP, “Machine Vision Introduction,” 2006.
43. 43
2. Basic elements of machine vision system
❑ 2.2. Illumination
❖Lighting Variants and Accessories >> Optical Filters
SICK IVP, “Machine Vision Introduction,” 2006.
Original image. Image seen by gray scale camera with ambient light and without filter. Red light and a red band-pass filter. Green light and a green band-pass filter.
44. 44
2. Basic elements of machine vision system
2.1. Overview
2.2. Illumination
2.3. Imaging
2.4. Image processing and analysis
45. 45
2. Basic elements of machine vision system
❑ 2.3. Imaging
➢ The term imaging defines the act of creating an image.
➢ Imaging has several technical names: acquiring, capturing, or grabbing.
➢ Grabbing a high-quality image is the number one goal for a successful vision application.
SICK IVP, “Machine Vision Introduction,” 2006.
46. 46
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts
➢ A simplified camera setup consists of camera, lens, lighting, and object.
SICK IVP, “Machine Vision Introduction,” 2006.
47. 47
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Digital Imaging
➢ A sensor chip is used to grab a digital image.
➢ On the sensor there is an array of light-sensitive pixels.
SICK IVP, “Machine Vision Introduction,” 2006.
Sensor chip with an array
of light-sensitive pixels.
48. 48
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Digital Imaging
There are two technologies used for digital image sensors:
➢ CCD (Charge-Coupled Device)
➢ CMOS (Complementary Metal Oxide Semiconductor).
SICK IVP, “Machine Vision Introduction,” 2006.
49. 49
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Digital Imaging
http://www.f4news.com/2016/05/09/ccd-vs-cmos-infographic/
50. 50
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Lenses and Focal Length
➢ The lens (Objective) focuses the light that enters the camera in a way that
creates a sharp image.
SICK IVP, “Machine Vision Introduction,” 2006.
Focused or sharp image. Unfocused or blurred image.
51. 51
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Lenses and Focal Length
➢ The angle of view determines how much of the visual scene the camera sees.
SICK IVP, “Machine Vision Introduction,” 2006.
52. 52
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Lenses and Focal Length
➢ The focal length is the distance between the lens and the focal point.
➢ When the focal point is on the sensor, the image is in focus.
SICK IVP, “Machine Vision Introduction,” 2006.
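The focal-point statement above can be made concrete with the standard thin-lens equation, 1/f = 1/d_o + 1/d_i; the lens and distance values below are assumed for illustration:

```python
# Hedged sketch of the thin-lens equation 1/f = 1/d_o + 1/d_i:
# for a given focal length f and object distance d_o, solve for the
# image distance d_i at which the sensor must sit for sharp focus.
def image_distance(f_mm, object_distance_mm):
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

# Example (assumed values): 16 mm lens, object 500 mm away
d_i = image_distance(16.0, 500.0)
print(f"{d_i:.2f} mm")  # 16.53 mm behind the lens
```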
53. 53
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Lenses and Focal Length
➢ Focal length is related to angle of view in that a long focal length corresponds to a small angle of view, and vice versa.
SICK IVP, “Machine Vision Introduction,” 2006.
54. 54
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Field of View in 2D
➢ The FOV (Field of View) in 2D systems is the full area that a camera sees. The FOV is specified by its width and height.
➢ The object distance is the distance between the lens and the object.
SICK IVP, “Machine Vision Introduction,” 2006.
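Under a simple pinhole approximation (an assumption, not stated on the slide), the FOV width scales with sensor width and object distance and inversely with focal length; all numbers below are illustrative:

```python
# Hedged sketch of the pinhole-camera relation between FOV, sensor size,
# focal length, and object distance: FOV = sensor_size * distance / f.
def fov_width(sensor_width_mm, focal_length_mm, object_distance_mm):
    return sensor_width_mm * object_distance_mm / focal_length_mm

# Example (assumed values): 6.4 mm wide sensor, 16 mm lens, object at 500 mm
print(fov_width(6.4, 16.0, 500.0), "mm")  # 200.0 mm
```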
55. 55
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Aperture and F-stop
➢ The aperture is the opening in the lens that controls the amount of light that is let onto the sensor. In quality lenses, the aperture is adjustable.
SICK IVP, “Machine Vision Introduction,” 2006.
56. 56
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Aperture and F-stop
➢ The size of the aperture is measured by its F-stop value. A large F-stop value means a small aperture opening, and vice versa.
➢ For standard CCTV lenses, the F-stop value is adjustable in the range between F1.4 and F16.
SICK IVP, “Machine Vision Introduction,” 2006.
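A small sketch of how the F-stop relates to light gathering, assuming the standard definition N = f/D (focal length over aperture diameter), so that transmitted light scales with aperture area, i.e. as 1/N²:

```python
# Hedged sketch: light let through relative to a reference F-stop.
# Each standard stop (F1.4 -> F2 -> F2.8 -> F4) roughly halves the light,
# which is why a large F-stop value means a small aperture opening.
def relative_light(f_stop, reference=1.4):
    return (reference / f_stop) ** 2

for n in (1.4, 2.0, 2.8, 4.0):
    print(n, round(relative_light(n), 2))
# prints approximately: 1.0, 0.49, 0.25, 0.12
```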
57. 57
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
➢ The minimum object distance (sometimes abbreviated MOD) is the closest distance at which the camera lens can focus, and the maximum object distance is the farthest.
SICK IVP, “Machine Vision Introduction,” 2006.
58. 58
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
➢ The focal plane is found at the distance where the focus is as sharp as possible.
➢ Objects closer or farther away than the focal plane can also be considered to be in focus. The distance interval where good-enough focus is obtained is called depth of field (DOF).
SICK IVP, “Machine Vision Introduction,” 2006.
59. 59
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
SICK IVP, “Machine Vision Introduction,” 2006.
60. 60
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
➢ The depth of field depends on both the focal length and the aperture adjustment.
SICK IVP, “Machine Vision Introduction,” 2006.
61. 61
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
➢ By adding a distance ring between the camera and the lens, the focal plane (and thus the MOD) can be moved closer to the camera. A distance ring is also referred to as a shim, spacer, or extension ring.
SICK IVP, “Machine Vision Introduction,” 2006.
62. 62
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Camera Concepts: Depth of Field
➢ A side effect of using a distance ring is that a maximum object distance is introduced and the depth of field range decreases.
SICK IVP, “Machine Vision Introduction,” 2006.
63. 63
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Pixels and Resolution
➢ A pixel is the smallest element in a digital image. Normally, a pixel in the image corresponds directly to a physical pixel on the sensor.
➢ An example is a very small image with dimensions 8x8 pixels. The dimensions are called x and y, where x corresponds to the image columns and y to the rows.
SICK IVP, “Machine Vision Introduction,” 2006.
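The 8x8 example and the x/y convention can be sketched as follows (plain nested lists standing in for a sensor image):

```python
# Hedged sketch: an 8x8 image stored as rows (y) of columns (x);
# a pixel is addressed as img[y][x], matching "x = columns, y = rows".
WIDTH, HEIGHT = 8, 8
img = [[0] * WIDTH for _ in range(HEIGHT)]  # all-black 8x8 image

img[2][5] = 255  # set the pixel at column x=5, row y=2 to white

print(len(img), len(img[0]))  # 8 8
print(img[2][5])              # 255
```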
64. 64
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Pixels and Resolution
➢ Typical values of sensor resolution in 2D machine vision are:
• VGA (Video Graphics Array): 640x480 pixels
• XGA (Extended Graphics Array): 1024x768 pixels
• SXGA (Super Extended Graphics Array): 1280x1024 pixels
SICK IVP, “Machine Vision Introduction,” 2006.
65. 65
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Pixels and Resolution
➢ The object resolution is the physical dimension on the object that corresponds to one pixel on the sensor. Common units for object resolution are μm (microns) per pixel and mm per pixel.
➢ Example: object resolution calculation. FOV width = 50 mm, sensor resolution = 640x480 pixels. Object resolution in x: 50 mm / 640 pixels ≈ 0.078 mm per pixel.
SICK IVP, “Machine Vision Introduction,” 2006.
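The object-resolution arithmetic from the example above works out as:

```python
# Object resolution: the physical size on the object covered by one pixel.
fov_width_mm = 50.0   # field of view width (from the slide's example)
sensor_px_x = 640     # sensor resolution in x (VGA)

res_x = fov_width_mm / sensor_px_x
print(f"{res_x:.4f} mm/pixel")  # 0.0781 mm/pixel, i.e. about 78 um/pixel
```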
66. 66
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Intensity
➢ The brightness of a pixel is called intensity. The intensity information is stored for each pixel in the image and can be of different types. Examples:
• Binary: one bit per pixel.
• Gray scale: typically one byte per pixel.
SICK IVP, “Machine Vision Introduction,” 2006.
67. 67
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Intensity
• Color: typically one byte per pixel and color. Three bytes are needed to obtain full color information. One pixel thus contains three components (R, G, B).
SICK IVP, “Machine Vision Introduction,” 2006.
68. 68
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Intensity
➢ When the intensity of a pixel is digitized, the information is quantized into discrete levels. The number of bits used per pixel is called the bit depth.
SICK IVP, “Machine Vision Introduction,” 2006.
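A quick sketch of what bit depth implies for the binary, gray scale, and color intensity types listed above:

```python
# Hedged sketch: the number of discrete intensity levels a pixel can take
# is 2 ** bit_depth.
def gray_levels(bit_depth):
    return 2 ** bit_depth

print(gray_levels(1))   # 2   (binary image)
print(gray_levels(8))   # 256 (typical gray scale, values 0..255)

bytes_per_color_pixel = 3   # one byte each for R, G, B
print(gray_levels(8) ** bytes_per_color_pixel)  # 16777216 possible colors
```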
69. 69
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Exposure
➢ Exposure is how much light is detected by the photographic film or sensor. The exposure amount is determined by two factors:
• Exposure time: duration of the exposure, measured in milliseconds (ms). Also called shutter time, from traditional photography.
• Aperture size: controls the amount of light that passes through the lens.
SICK IVP, “Machine Vision Introduction,” 2006.
70. 70
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Exposure
➢ If the exposure time is too short for the sensor to capture enough light, the image is said to be underexposed. If there is too much light and the sensor is saturated, the image is said to be overexposed.
SICK IVP, “Machine Vision Introduction,” 2006.
71. 71
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Gain
➢ Gain amplifies the intensity values after the sensor has been exposed, much like the volume control of a radio (which doesn’t actually make the artist sing louder). The tradeoff of compensating for insufficient exposure with a high gain is amplified noise.
SICK IVP, “Machine Vision Introduction,” 2006.
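The radio-volume analogy can be sketched numerically: gain scales signal and noise alike, so the signal-to-noise ratio does not improve (all values are illustrative):

```python
# Hedged sketch: gain multiplies signal and noise by the same factor,
# so SNR is unchanged (unlike a longer exposure, which gathers more light).
signal, noise = 40.0, 5.0
gain = 4.0

amplified_signal = min(signal * gain, 255.0)  # clipped at sensor maximum
amplified_noise = noise * gain

print(signal / noise)                      # SNR before: 8.0
print(amplified_signal / amplified_noise)  # SNR after:  8.0
```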
72. 72
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Contrast and Histogram
➢ Contrast is the relative difference between bright and dark areas in an image. Contrast is necessary to see anything at all in an image.
SICK IVP, “Machine Vision Introduction,” 2006.
73. 73
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Contrast and Histogram
➢ A histogram is a diagram that shows, for each intensity value in increasing order, how many pixels in the image have that value.
SICK IVP, “Machine Vision Introduction,” 2006.
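A minimal sketch of building such a histogram from a flattened toy image:

```python
# Hedged sketch: a histogram counts how many pixels have each intensity.
from collections import Counter

pixels = [0, 0, 10, 10, 10, 200, 255, 255]  # toy gray-scale image, flattened
hist = Counter(pixels)

print(sorted(hist.items()))  # [(0, 2), (10, 3), (200, 1), (255, 2)]
```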
74. 74
2. Basic elements of machine vision system
❑ 2.3. Imaging
❖Basic Image Concepts: Contrast and Histogram
SICK IVP, “Machine Vision Introduction,” 2006.
75. 75
2. Basic elements of machine vision system
2.1. Overview
2.2. Illumination
2.3. Imaging
2.4. Image processing and analysis
76. 76
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
➢ The basic stages in image processing include: preprocessing, image
segmentation, feature extraction, and recognition and analysis.
77.
▪ The captured image may have low contrast or noise, or contain unnecessary
information.
▪ The main function of preprocessing is to filter noise and increase contrast,
making the image clearer and sharper.
▪ Other functions include converting color images to grayscale and extracting
regions of interest (ROI).
Preprocessing
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
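Two of these preprocessing steps can be sketched in a few lines (the image data and ROI coordinates are arbitrary; the weights 0.299/0.587/0.114 are the common RGB-to-luminance convention):

```python
import numpy as np

# Toy 4x4 RGB image with values in 0..255.
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(4, 4, 3)).astype(np.float64)

# Color-to-grayscale conversion with the usual luminance weights.
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

# ROI extraction is simply a crop: here rows 1..2, columns 1..2.
roi = gray[1:3, 1:3]

print(gray.shape, roi.shape)  # → (4, 4) (2, 2)
```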
78.
Preprocessing
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
ROI Extraction
One ROI is created to verify the logotype (blue) and
another is created for barcode reading (green).
A ROI is placed around each pill in the blister pack
and the pass/fail analysis is performed once per ROI.
79.
Preprocessing
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
Pixel Counting
Automotive part with crack. The crack is found using a darkfield illumination
and by counting the dark pixels inside the ROI.
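The pass/fail decision by pixel counting can be sketched as follows (the synthetic part image and the threshold of 100 are illustrative assumptions):

```python
import numpy as np

# Synthetic part surface: bright background with a dark "crack" column.
part = np.full((8, 8), 180, dtype=np.uint8)
part[:, 3] = 40                       # crack pixels are dark

roi = part[2:6, 1:7]                  # region of interest around the crack
dark = roi < 100                      # threshold separating crack from surface
n_dark = int(dark.sum())

# Too many dark pixels inside the ROI means a crack was found.
print(n_dark, "FAIL" if n_dark > 0 else "PASS")  # → 4 FAIL
```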
80.
Preprocessing
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
Digital Filters
Noisy version of original image. Image (left) after noise reduction.
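One common digital filter for this kind of impulse noise is the median filter; a minimal 3×3 version (borders left unchanged, toy data) looks like this:

```python
import numpy as np

def median3(img):
    """3x3 median filter for noise reduction (edge pixels left unchanged)."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# Uniform image corrupted with a single salt-noise pixel.
noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0

clean = median3(noisy)
print(noisy[2, 2], clean[2, 2])  # → 255.0 100.0
```

The outlier is removed because the median of its 3×3 neighborhood is dominated by the surrounding uniform pixels.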
81.
Image segmentation
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
▪ Image segmentation splits an input image into component regions that are then
used for analysis and image recognition.
▪ It is the most difficult and most error-prone stage of image processing;
segmentation mistakes reduce the accuracy of everything that follows, so the
result of object identification depends heavily on this stage.
82.
Image segmentation
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
Original intensity-coded 3D image. Image after a binarization operation. Image after edge enhancement.
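The two operations in the figure can be sketched on a toy image (the intensity values and the threshold of 100 are illustrative; the edge enhancement here is a simple one-directional gradient, not a full edge operator):

```python
import numpy as np

# Intensity-coded image: dark background, bright square object.
img = np.zeros((6, 6), dtype=np.float64)
img[2:5, 2:5] = 200.0

# Binarization: every pixel above the threshold becomes 1, the rest 0.
binary = (img > 100).astype(np.uint8)

# Simple edge enhancement: horizontal gradient magnitude.
grad = np.abs(np.diff(img, axis=1))

print(int(binary.sum()), float(grad.max()))  # → 9 200.0
```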
83.
Image segmentation
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
84.
Feature Extraction
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
▪ The output of segmentation contains the pixels of each region (the segmented
image) together with codes describing their neighborhood relations.
▪ Feature extraction (also called feature selection) turns these image properties
into quantitative descriptors suitable for recognition.
85.
Feature Extraction
R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” 4th edition, Prentice Hall, 2018.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
Digital boundary with resampling
grid superimposed.
Result of resampling. 8-directional chain-coded boundary.
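The chain-coding idea can be sketched in a few lines (using the common convention that direction 0 points east and indices increase counter-clockwise in 45° steps; the square boundary is a toy example):

```python
# 8-directional chain code: one direction index per step between
# consecutive boundary points, expressed as (row, col) offsets.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Chain code of an ordered, 8-connected boundary of (row, col) points."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        codes.append(DIRS[(r1 - r0, c1 - c0)])
    return codes

# Closed boundary of a 2x2 square, traversed clockwise in image coordinates.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))  # → [0, 6, 4, 2]
```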
86.
Feature Extraction
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
The smallest axis-parallel enclosing
rectangle of a region.
The smallest enclosing rectangle
of arbitrary orientation. The smallest enclosing circle.
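The first of these descriptors, the smallest axis-parallel enclosing rectangle, is just the min/max of the region's pixel coordinates (the blob below is illustrative):

```python
import numpy as np

# Binary region: a small rectangular blob inside an 8x8 image.
region = np.zeros((8, 8), dtype=bool)
region[2:5, 3:7] = True

# Smallest axis-parallel enclosing rectangle: min/max of the
# row and column coordinates of the region's pixels.
rows, cols = np.nonzero(region)
top, bottom = int(rows.min()), int(rows.max())
left, right = int(cols.min()), int(cols.max())

width = right - left + 1
height = bottom - top + 1
print((top, left, bottom, right), width, height)  # → (2, 3, 4, 6) 4 3
```

The rectangle of arbitrary orientation and the smallest enclosing circle require more geometry (e.g., rotating calipers), which is beyond this sketch.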
87.
Image recognition and analysis
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
▪ Image recognition is the process of identifying the content of an image.
▪ It is usually performed by comparing the image with a reference sample that has
been learned (or saved) beforehand.
▪ Interpretation is the judgment made on the basis of the identification:
➢ identification of parameters
➢ identification of structure
88.
Image recognition and analysis
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
Reference image for teaching. Matching in new image.
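A minimal sketch of the teach-and-match idea is template matching by normalized cross-correlation (the 2×2 template and 5×5 image are toy values; industrial tools use far more robust and optimized matching):

```python
import numpy as np

def match_template(image, template):
    """Best match position of template in image via normalized correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# "Teach" a reference pattern, then find it in a new image.
template = np.array([[0, 255], [255, 0]], dtype=np.float64)
image = np.zeros((5, 5))
image[2:4, 1:3] = template            # pattern embedded at (2, 1)

pos, score = match_template(image, template)
print(pos, round(score, 3))  # → (2, 1) 1.0
```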
89.
Image recognition and analysis
SICK IVP, “Machine Vision Introduction,” 2006.
2. Basic elements of machine vision system
❑ 2.4. Image processing and analysis
90. 90
Chapter 1. Introduction to machine vision
1. Introduction
2. Basic elements of machine vision system
3. Classification
4. Technical specifications
5. Designing a Machine Vision System
92. 92
3. Classification
❑ 3.1. 1D vision systems
➢ 1D vision analyzes a digital signal one line at a time instead of looking at a whole
picture at once.
➢ This technique commonly detects and classifies defects on materials
manufactured in a continuous process, such as paper, metals, plastics, and other
non-woven sheet or roll goods.
COGNEX, “Introduction to Machine Vision,” 2016.
93. 93
3. Classification
❑ 3.1. 1D vision systems
COGNEX, “Introduction to Machine Vision,” 2016.
1D vision systems scan one
line at a time while the
process moves. In the
above example, a defect in
the sheet is detected.
94. 94
3. Classification
❑ 3.2. 2D vision systems
➢ Most common inspection cameras perform area scans that involve capturing 2D
snapshots in various resolutions.
COGNEX, “Introduction to Machine Vision,” 2016.
2D vision systems can
produce images with
different resolutions
95. 95
3. Classification
❑ 3.2. 2D vision systems
➢ Another type of 2D machine vision, line scan, builds a 2D image line by line.
COGNEX, “Introduction to Machine Vision,” 2016.
Line scan techniques build the 2D image one line at a time.
96. 96
3. Classification
❑ 3.3. 3D vision systems
➢ 3D machine vision systems typically comprise multiple cameras or one or more
laser displacement sensors.
➢ Multi-camera 3D vision in robotic guidance applications provides the robot with
part orientation information.
➢ These systems involve multiple cameras mounted at different locations,
“triangulating” the object position in 3D space.
COGNEX, “Introduction to Machine Vision,” 2016.
97. 97
3. Classification
❑ 3.3. 3D vision systems
COGNEX, “Introduction to Machine Vision,” 2016.
3D vision systems typically employ
multiple cameras
3D inspection system using
a single camera
98. 98
3. Classification
❑ 3.3. 3D vision systems
➢ 3D laser-displacement sensor applications typically include surface inspection
and volume measurement, producing 3D results with as few as a single camera.
➢ A height map is generated from the displacement of the reflected laser’s location
on the object.
➢ The object or camera must be moved to scan the entire product similar to line
scanning.
COGNEX, “Introduction to Machine Vision,” 2016.
99. 99
Chapter 1. Introduction to machine vision
1. Introduction
2. Basic elements of machine vision system
3. Classification
4. Technical specifications
5. Designing a Machine Vision System
100. 100
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
101. 101
4. Technical Specifications
❑ 4.1. Parts
➢ Minimum and maximum dimensions of discrete parts or endless material (e.g.,
paper or woven goods)
➢ Changes in shape
➢ Description of the features that have to be extracted
➢ Changes of these features concerning error parts and common product variation
➢ Surface finish
➢ Color
➢ Corrosion, oil films, or adhesives
➢ Changes due to part handling, e.g., labels or fingerprints
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
102. 102
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
103. 103
4. Technical Specifications
❑ 4.2. Part Presentation
➢ Regarding part motion, the following options are possible:
▪ indexed positioning
▪ continuous movement
➢ If there is more than one part in view, the following topics are important:
▪ number of parts in view
▪ overlapping parts
▪ touching parts
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
104. 104
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
105. 105
4. Technical Specifications
❑ 4.3. Performance Requirements
➢ The performance requirements can be seen in the aspects of:
▪ accuracy and
▪ time performance
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
106. 106
4. Technical Specifications
❑ 4.3. Performance Requirements
➢ Time performance:
▪ cycle time
▪ start of acquisition
▪ maximum processing time
▪ number of production cycles from inspection to use of the result (result
buffering)
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
107. 107
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
108. 108
4. Technical Specifications
❑ 4.4. Information Interfaces
➢ User interface for handling and visualizing results
➢ Declaration of the current part type
➢ Start of the inspection
➢ Setting results
➢ Storage of results or inspection data in log files or databases
➢ Generation of inspection protocols for storage or printout
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
109. 109
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
110. 110
4. Technical Specifications
❑ 4.5. Installation Space
➢ The possibility of aligning the illumination and the camera
➢ Is visual access to the inspection scene possible?
➢ What variations are possible for minimum and maximum distances between the
part and the camera?
➢ The distance between the camera and the processing unit
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
111. 111
4. Technical Specifications
4.1. Parts
4.2. Part Presentation
4.3. Performance Requirements
4.4. Information Interfaces
4.5. Installation space
4.6. Environment
112. 112
4. Technical Specifications
❑ 4.6. Environment
➢ Ambient light
➢ Dirt or dust from which the equipment must be protected
➢ Shock or vibration affecting the equipment
➢ Heat or cold
➢ Necessity of a certain protection class
➢ Availability of power supply
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
113. 113
Chapter 1. Introduction to machine vision
1. Introduction
2. Basic elements of machine vision system
3. Classification
4. Technical specifications
5. Designing a Machine Vision System
114. 114
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
115. 115
5. Designing a Machine Vision System
❑ 5.1. Camera Type
➢ Line scan camera
➢ Area scan camera
➢ 3D camera
Directions for a line scan camera.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
116. 116
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
117. 117
5. Designing a Machine Vision System
❑ 5.2. Field of View
The field of view is determined by the following factors:
➢ maximum part size
➢ maximum variation of part presentation in
translation and orientation
➢ margin as an offset to part size
➢ aspect ratio of the camera sensor
Field of view.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
118. 118
5. Designing a Machine Vision System
❑ 5.2. Field of View
Example:
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
                          Horizontal   Vertical
maximum part size         10 mm        6 mm
tolerance in positioning  1 mm
margin                    2 mm
aspect ratio              4:3
𝐹𝑂𝑉ℎ𝑜𝑟_𝑐𝑎𝑙 = 10 𝑚𝑚 + 1 𝑚𝑚 + 2 𝑚𝑚 = 13 𝑚𝑚 → 𝐹𝑂𝑉𝑣𝑒𝑟_𝑒𝑠𝑡 = (3/4) 𝐹𝑂𝑉ℎ𝑜𝑟_𝑐𝑎𝑙 = 9.75 𝑚𝑚
𝐹𝑂𝑉𝑣𝑒𝑟_𝑐𝑎𝑙 = 6 𝑚𝑚 + 1 𝑚𝑚 + 2 𝑚𝑚 = 9 𝑚𝑚 < 𝐹𝑂𝑉𝑣𝑒𝑟_𝑒𝑠𝑡
→ 𝐹𝑂𝑉ℎ𝑜𝑟 = 𝐹𝑂𝑉ℎ𝑜𝑟_𝑐𝑎𝑙 𝑎𝑛𝑑 𝐹𝑂𝑉𝑣𝑒𝑟 = 𝐹𝑂𝑉𝑣𝑒𝑟_𝑒𝑠𝑡 → 𝐹𝑂𝑉 = 13 𝑚𝑚 × 9.75 𝑚𝑚
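The field-of-view calculation above can be reproduced in a few lines (values taken from the example; FOV = part size + positioning tolerance + margin, with the smaller direction expanded to the sensor's 4:3 aspect ratio):

```python
part_h, part_v = 10.0, 6.0      # part size, mm
tolerance = 1.0                 # positioning tolerance, mm
margin = 2.0                    # margin, mm
aspect = 3.0 / 4.0              # vertical / horizontal sensor aspect ratio

fov_h = part_h + tolerance + margin          # horizontal FOV: 13.0 mm
fov_v_est = aspect * fov_h                   # vertical FOV from aspect: 9.75 mm
fov_v_cal = part_v + tolerance + margin      # vertical FOV from part: 9.0 mm

# The vertical FOV is whichever requirement is larger.
fov_v = max(fov_v_cal, fov_v_est)
print(fov_h, fov_v)  # → 13.0 9.75
```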
119. 119
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
120. 120
5. Designing a Machine Vision System
❑ 5.3. Resolution
➢ camera sensor resolution
➢ spatial resolution
➢ measurement accuracy
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
121. 121
5. Designing a Machine Vision System
❑ 5.3. Resolution
➢ Calculation of Resolution
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
122. 122
5. Designing a Machine Vision System
❑ 5.3. Resolution
➢ Example: Measure the dimension of the object 10mmx6mm above with accuracy
0.01 mm.
▪ Using edge detection for dimension measurement → Nf = 1/3 pixel. Because there
is a tolerance in positioning, the number of pixels for the smallest feature is
set to 1 pixel (Nf = 1 pixel).
▪ Size of the smallest feature Sf = 0.01 mm
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
124. 124
5. Designing a Machine Vision System
❑ 5.3. Resolution
▪ Object resolution (spatial resolution): assuming that a lens with a field of view
of 14 mm (> 13 mm) and a camera with a resolution of 1440×1080 were chosen.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
𝑅𝑠 = 𝐹𝑂𝑉/𝑅𝐶 = 14 𝑚𝑚/1440 𝑝𝑖𝑥𝑒𝑙 ≈ 0.01 𝑚𝑚/𝑝𝑖𝑥𝑒𝑙
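This check can be scripted (values from the example; 14 mm over 1440 pixels comes out slightly below the required 0.01 mm/pixel, so the chosen camera is sufficient):

```python
fov = 14.0            # mm, field of view of the chosen lens
camera_pixels = 1440  # horizontal sensor resolution
required = 0.01       # mm/pixel, from accuracy 0.01 mm with Nf = 1 pixel

spatial_resolution = fov / camera_pixels   # mm per pixel on the object
print(round(spatial_resolution, 4), spatial_resolution <= required)  # → 0.0097 True
```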
125. 125
5. Designing a Machine Vision System
❑ 5.3. Resolution
➢ Calculation of Resolution: Resolution for a Line Scan Camera
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
126. 126
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
127. 127
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Camera Model
▪ color sensor
▪ interface technology
▪ progressive scan for area cameras
▪ packaging size
▪ price and availability
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
128. 128
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Frame Grabber
▪ compatibility with the pixel rate
▪ compatibility with the software library
▪ number of cameras that can be addressed
▪ utilities to control the camera via the frame grabber
▪ timing and triggering of the camera
▪ availability of on-board processing
▪ availability of general purpose I/O
▪ price and availability
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
129. 129
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Pixel Rate
▪ This is the speed of imaging in terms of pixels per second.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
130. 130
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Pixel Rate
▪ For an area camera:
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
An overhead of 10% to 20% should be considered
due to additional bus transfer.
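Assuming the usual relation (pixel rate = sensor width × height × frame rate, with the overhead the slide mentions), a quick calculation with illustrative numbers:

```python
# Pixel rate of an area camera: sensor resolution times frame rate,
# plus headroom for bus-transfer overhead (values are illustrative).
width, height = 1440, 1080
frames_per_second = 30

pixel_rate = width * height * frames_per_second   # pixels per second
with_overhead = pixel_rate + pixel_rate // 5      # +20 % overhead

print(pixel_rate, with_overhead)  # → 46656000 55987200
```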
131. 131
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Pixel Rate
▪ For a line scan camera:
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
132. 132
5. Designing a Machine Vision System
❑ 5.4. Choice of Camera, Frame Grabber, and Hardware Platform
➢ Hardware Platform
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
• Compatibility with frame grabber
• Operating system
• Development process
• Means for a user-friendly human-machine interface
• Processing load
• Miscellaneous points: available interfaces,
memory, packaging size, price, and availability
▪ smart cameras
▪ compact vision systems
▪ PC-based systems
133. 133
5. Designing a Machine Vision System
❑ Example about a camera
https://www.baslerweb.com/en/products/cameras/area-scan-cameras/ace/aca2440-75uc/
134. 134
5. Designing a Machine Vision System
❑ Example about a camera
https://www.baslerweb.com/en/products/cameras/area-scan-cameras/ace/aca2440-75uc/
135. 135
5. Designing a Machine Vision System
❑ Example about a camera
https://www.baslerweb.com/en/products/cameras/area-scan-cameras/ace/aca2440-75uc/
136. 136
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
137. 137
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Focal Length
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
Quantities in the lens design: standoff distance, focal length, magnification,
lens extension, focus distance.
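These quantities are related by the thin-lens equation; the sketch below estimates a focal length from illustrative numbers (sensor width, FOV, and standoff distance are assumptions, and real lens selection must also account for the flange distance and the focal lengths actually available):

```python
# Thin-lens estimate: magnification m = sensor size / FOV,
# image distance b = m * object distance a, and 1/f = 1/a + 1/b.
sensor_width = 7.0    # mm (assumed sensor size)
fov_width = 14.0      # mm (field of view on the object)
standoff = 200.0      # mm (lens-to-object distance, assumed)

m = sensor_width / fov_width          # magnification = 0.5
image_dist = m * standoff             # lens-to-image distance, mm
f = standoff * image_dist / (standoff + image_dist)  # focal length, mm

print(m, round(f, 1))  # → 0.5 66.7
```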
138. 138
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Focal Length
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
139. 139
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Lens Flange Focal Distance
▪ This is the distance between the lens mount face and the image plane.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
140. 140
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Extension Tubes
▪ The lens extension l can be increased using the focus adjustment of the lens.
▪ If the distance cannot be increased further, extension tubes can be used to focus
on close objects. As a result, the depth of field is decreased.
▪ For higher magnifications, such as from 0.4 to 4, macro lenses offer better image
quality.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
141. 141
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Lens Diameter and Sensor Size
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
Areas illuminated by the lens and camera;
the left side displays an appropriate choice.
142. 142
5. Designing a Machine Vision System
❑ 5.5. Lens Design
➢ Sensor Resolution and Lens Quality
▪ For high-resolution cameras, the requirements on the lens are higher than for
standard cameras.
▪ Using a low-budget lens might lead to poor image quality for high resolution
sensors, whereas the quality is acceptable for lower resolutions.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
143. 143
5. Designing a Machine Vision System
❑ Example about lens
https://www.baslerweb.com/en/products/cameras/area-scan-cameras/ace/aca2440-75uc/
144. 144
5. Designing a Machine Vision System
❑ Example about lens
https://www.baslerweb.com/en/products/vision-components/lenses/basler-lens-c23-1620-5m-p-f16mm/
145. 145
5. Designing a Machine Vision System
❑ Example about lens
https://www.baslerweb.com/en/products/vision-components/lenses/basler-lens-c23-1620-5m-p-f16mm/
146. 146
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
147. 147
5. Designing a Machine Vision System
❑ 5.6. Choice of Illumination
➢ Concept: Maximize Contrast
▪ Direction of light: diffuse from all directions or directed from a range of angles
▪ Light spectrum
▪ Polarization: effect on surfaces, such as metal or glass
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
148. 148
5. Designing a Machine Vision System
❑ 5.6. Choice of Illumination
➢ Illumination Setups
▪ Backlight and
▪ Frontlight
• Diffused light
• Directed light
• Confocal frontlight
• Bright field
• Dark field
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
149. 149
5. Designing a Machine Vision System
❑ 5.6. Choice of Illumination
➢ Light Sources
▪ Fluorescent tubes
▪ Halogen and xenon lamps
▪ LED
▪ Laser
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
150. 150
5. Designing a Machine Vision System
❑ 5.6. Choice of Illumination
➢ Approach to the Optimum Setup
▪ A confirmation of the setup based on experiments with sample parts is
mandatory.
▪ The alignment of the light, the part, and the camera needs to be documented.
▪ To decide between similar setups, images have to be captured and compared for
maximum contrast.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
151. 151
5. Designing a Machine Vision System
❑ 5.6. Choice of Illumination
➢ Interfering Lighting
▪ The influences of different lamps on the images have to be checked.
▪ To avoid interference, spatial separation can be achieved by using separate
camera stations.
▪ The part is then imaged with different sets of cameras and illuminations.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
152. 152
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
153. 153
5. Designing a Machine Vision System
❑ 5.7. Mechanical Design
➢ As the cameras, lenses, standoff distances, and
illumination devices are determined, the mechanical
conditions can be defined.
➢ For the mounting of cameras and lights, adjustability is important for
installation, operation, and maintenance.
➢ The devices have to be protected against vibration
or shock.
➢ The position of cameras and lights should be
changed easily.
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
154. 154
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
155. 155
5. Designing a Machine Vision System
❑ 5.8. Electrical Design
➢ The power supply
➢ The housing of cameras and illumination
➢ The length of cables as well as their laying
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
156. 156
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
157. 157
5. Designing a Machine Vision System
❑ 5.9. Software
➢ selection of a software library
➢ design and implementation of the application-specific software
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
158. 158
5. Designing a Machine Vision System
❑ 5.9. Software
➢ Software Library
➢ Software Structure
▪ Image acquisition
▪ Preprocessing
▪ Feature localization
▪ Feature extraction
▪ Feature interpretation
▪ Generation of results
▪ Handling interfaces
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
159. 159
5. Designing a Machine Vision System
❑ 5.9. Software
➢ General Topics
▪ Visualization of live images for all cameras
▪ Possibility of image saving
▪ Maintenance mode
▪ Log files for the system state
▪ Detailed visualization of the image processing
▪ Crucial processing parameters
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
160. 160
5. Designing a Machine Vision System
5.1. Camera Type
5.2. Field of View
5.3. Resolution
5.4. Choice of Camera, Frame Grabber, and Hardware Platform
5.5. Lens Design
5.6. Choice of Illumination
5.7. Mechanical Design
5.8. Electrical Design
5.9. Software
5.10. Costs
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
161. 161
5. Designing a Machine Vision System
❑ 5.10. Costs
➢ The development costs
▪ project management
▪ base design
▪ hardware components
▪ software licenses
▪ software development
▪ installation
▪ test runs, feasibility tests, and acceptance test
▪ training
▪ documentation
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
162. 162
5. Designing a Machine Vision System
❑ 5.10. Costs
➢ The operating costs
▪ maintenance, such as cleaning of the optical equipment
▪ change of equipment, such as lamps
▪ utility, for instance electrical power or compressed air if needed
▪ costs for system modification due to product changes
Alexander Hornberg, “Handbook of Machine Vision,” WILEY-VCH, Weinheim, 2006.
163. Quiz 1
Quiz Number 1 Quiz Type
OX Example Select
Question
Choose the lighting for measuring the radii R and r of the following
object:
Example
A. Dome Light B. On-Axis Light
C. Darkfield D. Backlight
Answer
Feedback
164. Quiz 2
Quiz Number 2 Quiz Type
OX Example Select
Question
Assume that the object size = 10 cm x 20 cm and the sensor resolution =
640x480 pixels. Calculate the object resolution.
Example
A. 0.31 mm/pixel B. 0.21 mm/pixel C. 0.17 mm/pixel D. 0.42
mm/pixel
Answer
Feedback
165. Quiz 3
Quiz Number 3 Quiz Type
OX Example Select
Question Splitting an input image into component areas is called:
Example
A. Image preprocessing B. Image segmentation
C. Image recognition D. Image representation
Answer
Feedback
166. Quiz 4
Quiz Number 4 Quiz Type
OX Example Select
Question The performance requirements of a machine vision system are:
Example
A. Accuracy B. Time performance
C. Both A and B D. None of the above
Answer
Feedback
167. Quiz 5
Quiz Number 5 Quiz Type OX Example Select
Question
Diameter Inspection of Rivets:
+ The nominal size of the rivets lies in a range of 3 mm to 4 mm
+ The required accuracy is 0.1 mm
+ The tolerance of part positioning is less than ±1 mm across the
optical axis and ±0.1 mm in the direction of the optical axis. The
belt stops for 1.5 s.
+ The maximum processing time is 2 s; the cycle time is 2.5s.
+ The maximum space for installing equipment is 500 mm.
168. Quiz 5
Quiz Number 5 Quiz Type OX Example Select
Question Bearing with rivet and disk