Machine vision systems are used to perform tasks such as part selection, identification, and inspection. A typical machine vision system consists of a camera, digitizing hardware, and a computer for image processing and analysis. The key functions of a vision system are sensing and digitizing image data, image processing and analysis, and application of the results. Image processing techniques used include data reduction methods like digital conversion and windowing, segmentation methods like thresholding, region growing and edge detection, and feature extraction to analyze objects and enable recognition. Machine vision has applications in industrial inspection, identification, and visual servoing and navigation in robotics.
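The data-reduction step mentioned above can be illustrated with windowing, which simply discards pixels outside a region of interest before further processing. The sketch below is a minimal pure-Python version; the function name and the list-of-lists image representation are assumptions for illustration, not from any particular vision library:

```python
def window(image, row0, row1, col0, col1):
    """Data reduction by windowing: keep only the region of interest (ROI).

    `image` is a greyscale image stored as a nested list of pixel values;
    the four indices bound the ROI (end indices exclusive).
    """
    return [row[col0:col1] for row in image[row0:row1]]


# Example: keep the top-right 2x2 corner of a 3x3 image.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
roi = window(img, 0, 2, 1, 3)
```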
1) Sensors are devices that detect physical quantities and convert them into signals that can be measured. They are needed for industrial monitoring, safety, and automation.
2) Common sensors include position, proximity, range, touch, and force sensors. Position sensors like LVDT and RVDT convert linear or angular displacement into electrical signals.
3) Sensors have characteristics like range, sensitivity, accuracy, and response time that determine their effectiveness. Understanding sensor types and properties is important for robotics applications.
ROBOTICS - ROBOT KINEMATICS AND ROBOT PROGRAMMING (TAMILMECHKIT)
Forward Kinematics, Inverse Kinematics and the Difference between Them; Forward and Reverse Kinematics of Manipulators with Two and Three Degrees of Freedom (in 2 Dimensions) and Four Degrees of Freedom (in 3 Dimensions); Jacobians, Velocity and Forces; Manipulator Dynamics; Trajectory Generation; Manipulator Mechanism Design - Derivations and Problems. Lead-Through Programming; Robot Programming Languages - VAL Programming: Motion Commands, Sensor Commands, End Effector Commands and Simple Programs.
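The two-degree-of-freedom planar case in the syllabus above admits closed-form forward and inverse kinematics. Below is a minimal sketch, assuming link lengths `l1`, `l2` and revolute joint angles in radians; the inverse routine returns just one of the two (elbow) solutions:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position (x, y) of a planar 2-link arm."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(l1, l2, x, y):
    """One solution for the joint angles reaching (x, y), via the law of cosines."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against floating-point round-off
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A quick consistency check is to feed an inverse-kinematics solution back through the forward equations and confirm the original target point is recovered.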
This document discusses machine vision and various components of machine vision systems. It describes different types of sensors used in machine vision like cameras, frame grabbers, and describes the process of sensing and digitizing image data through analog to digital conversion, image storage, and lighting techniques. It also discusses image processing and analysis techniques like segmentation, feature extraction and object recognition. Finally, it provides examples of applications of machine vision systems in inspection, identification, and navigation.
Machine vision uses computer vision techniques to automate inspection and measurement tasks in manufacturing processes. It incorporates computer science, optics, and mechanical engineering. Machine vision systems typically use digital cameras and specialized lenses to capture images that are then processed to check for attributes like dimensions, serial numbers, and defects. Common applications include inspecting semiconductor chips, automobiles, food, and pharmaceuticals. Key components of machine vision systems include cameras, lighting, lenses, and image processing software to analyze the captured images.
The document discusses different types of end effectors used in robotics, specifically focusing on grippers. It describes two main types of end effectors - grippers and tools. Grippers are used for holding parts and objects, and come in several varieties, including mechanical grippers, hooks/scoops, magnetic grippers, vacuum grippers, expandable bladder grippers, and adhesive grippers. Each type is suited to different applications and has unique advantages and limitations. The document provides details on the design and use of each gripper type.
1) Machine vision uses digital cameras and image processing to automate production processes and quality inspections by replacing manual methods.
2) A machine vision system involves four steps: imaging, image processing/analysis, communicating results to the control system, and taking appropriate action.
3) The main components of a machine vision system are cameras, lighting systems, frame grabbers, and computer/software to process images and analyze results.
This document discusses the use of sensors in robotics. It begins by introducing how sensors give robots human-like sensing abilities like vision, touch, hearing, and movement. It then describes several key sensors used in robotics - vision sensors that allow robots to see their environment, touch sensors that allow robots to feel contact and interpret emotions, and hearing sensors that allow robots to perceive speech. The document also lists and describes other common sensors like proximity, range, tactile, light, sound, temperature, contact, voltage, and current sensors and their applications in robotics.
This document discusses machine vision systems and their components. A basic machine vision system includes a camera, light source, frame grabber, circuitry and programming, and a computer. Key components of machine vision systems are the image, camera, frame grabber, preprocessor, memory, processor, and output interface. The document also describes CCD and vidicon cameras, their advantages and disadvantages, and the functions of frame grabbers in sampling and quantizing images. Object properties that can be analyzed from pixel grey values include color, specular properties, non-uniformities, and lighting. Applications of machine vision systems are also mentioned.
This document provides information about robotics and machine vision systems courses. The objectives are to study robot components, derive kinematics and dynamics equations, manipulate trajectories, and learn machine vision. Key topics covered include robot history, components, configurations like Cartesian and cylindrical, applications in material handling, processing, assembly, and inspection. Benefits of robots are also discussed.
The document discusses various methods of robot programming including manual programming, walkthrough programming, leadthrough programming, and offline programming. It also covers different robot languages such as VAL, AL, AML, MCL, and RAIL as well as features of a teach pendant and basic modes of robot operation including monitor, edit, and run modes. Common robot motions, sensors, and welding patterns are also explained.
The document discusses various robot drive systems and end effectors. It describes hydraulic, pneumatic, and electric drive systems. Hydraulic drives use pressurized fluid and are suitable for heavy loads, while pneumatic drives use compressed air. Electric drives include AC servo motors, DC servo motors, and stepper motors. End effectors for grasping objects include grippers that are mechanical, pneumatic, hydraulic, magnetic, or vacuum-based. The document also provides details on actuators, motors, advantages and disadvantages of different drive types.
This document discusses machine vision systems and their applications in semiconductor manufacturing. It begins with definitions of machine vision systems and an overview of their components and functions. It then discusses various applications of machine vision in semiconductor front-end and back-end processes like inspection, metrology, and assembly. Specific applications mentioned include inspection of wafers, dies, packages, leads, and printed circuit boards. The document provides examples of machine vision aiding processes like die bonding, wire bonding, laser marking, and automated assembly.
The document discusses different types of robot end effectors and grippers. It describes various gripper mechanisms including mechanical, pneumatic, hydraulic, vacuum, magnetic, and adhesive grippers. It also covers classifications of grippers based on the method of holding parts, incorporated tools, and functionality. Key factors for gripper design and selection are highlighted.
Sensors are the first element in a measuring system that takes information about a variable being measured and transforms it into a suitable form. Sensors directly measure a physical quantity and transducers convert one form of energy into another. Sensors are essential in robotics for safety monitoring, work cell control, quality inspection, and collecting data. Common sensors used in robotics include position sensors like potentiometers and encoders, proximity sensors, force/torque sensors, and range finders. Temperature can be measured by resistive sensors like thermistors or thermocouples that relate a change in resistance or voltage to a change in temperature.
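The thermistor relationship mentioned above (a resistance change mapped to a temperature) is commonly approximated with the beta-parameter model. A hedged sketch follows; the default R0, T0, and beta values are typical NTC datasheet figures assumed for illustration, not values from the document:

```python
import math

def thermistor_temp_c(resistance_ohm, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Convert NTC thermistor resistance to temperature (Celsius).

    Uses the beta model R(T) = R0 * exp(beta * (1/T - 1/T0)), with T in kelvin.
    Default parameters (10 kOhm at 25 C, beta = 3950 K) are illustrative.
    """
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(resistance_ohm / r0) / beta
    return 1.0 / inv_t - 273.15
```

At the nominal resistance the model returns the nominal temperature, and because the device is negative-temperature-coefficient, a lower resistance maps to a higher temperature.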
1) The document discusses various topics related to robotics including definitions, degrees of freedom, robot arm and wrist configurations, joint classifications, robot safety, components and control systems.
2) It provides details on common robot arm configurations, including rectangular, cylindrical, spherical and revolute coordinate systems.
3) The document also describes robot control systems including limited sequence control, playback with point-to-point control and continuous path control as well as intelligent control.
Requirements of a sensor; Principles and Applications of the following types of sensors: Position Sensors - Piezoelectric Sensor, LVDT, Resolvers, Optical Encoders, Pneumatic Position Sensors; Range Sensors - Triangulation Principles, Structured Lighting Approach, Time-of-Flight Range Finders, Laser Range Meters; Touch Sensors - Binary Sensors, Analog Sensors; Wrist Sensors; Compliance Sensors; Slip Sensors; Camera; Frame Grabber; Sensing and Digitizing Image Data - Signal Conversion, Image Storage, Lighting Techniques; Image Processing and Analysis - Data Reduction, Segmentation, Feature Extraction, Object Recognition, Other Algorithms; Applications - Inspection, Identification, Visual Servoing and Navigation.
This document discusses different types of industrial robots classified by their arm configuration, power source, and path control. It describes Cartesian robots which move along three orthogonal axes and have a rectangular working envelope. Cylindrical robots use a vertical column that can rotate and slide up/down, giving them a cylindrical working volume. Both robot types have advantages like payload capacity and work area, as well as disadvantages like limited movement directions and lower accuracy for cylindrical robots. Common applications include pick and place, assembly, welding, and machine loading/unloading.
Industrial robots have a variety of specifications that must be considered when selecting a robot for a particular application. These include the robot's axes of movement, range of motion, speed, payload, accuracy, and repeatability. The document provides details on common axis specifications, including the number of axes, range of movement, speed, and accuracy measurements. It also lists other important robot specifications like weight, power requirements, and work envelope. Selection of a suitable robot involves using multi-criteria decision making to evaluate robots based on their specifications and the weights of different criteria for the target application. Future trends suggest robots will become more lightweight, compact, and integrated with sensors and vision systems to enable safer human-robot collaboration.
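The multi-criteria decision making mentioned above can be sketched as a simple weighted-sum ranking. The criterion names, weights, and the normalization of each score to the 0..1 range below are illustrative assumptions, not values from the document:

```python
def rank_robots(candidates, weights):
    """Rank candidate robots by a weighted sum of normalized criterion scores.

    candidates: {robot_name: {criterion: score normalized to 0..1}}
    weights:    {criterion: importance weight for the target application}
    Returns robot names, best first.
    """
    totals = {name: sum(weights[c] * scores[c] for c in weights)
              for name, scores in candidates.items()}
    return sorted(totals, key=totals.get, reverse=True)


# Hypothetical example: an application that values accuracy over payload.
candidates = {"RobotA": {"payload": 0.9, "accuracy": 0.4},
              "RobotB": {"payload": 0.5, "accuracy": 0.9}}
weights = {"payload": 0.3, "accuracy": 0.7}
ranking = rank_robots(candidates, weights)
```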
The document discusses factors to consider when selecting robots for various applications, including the number of axes, control system, work volume, and precision required. It provides a table outlining typical technical features needed for common robot applications like material transfer, machine loading, and spot welding. The document also covers factors to evaluate when selecting and designing grippers for robots, such as the part being handled, actuation method, power and signals, and operating environment.
This document discusses various applications of industrial robots including material handling, machine loading and unloading, assembly, inspection, welding, spray painting, mobile robots, and recent developments in robotics. It provides details on how robots are used for tasks like transferring parts between machines, loading/unloading machines, putting parts together, inspecting products, welding metals, and painting large objects. Robots allow for improved quality, safety, productivity and flexibility compared to human workers performing these automated industrial tasks.
This document discusses robot programming methods. It describes different types of robot programming including joint-level, robot-level, and high-level programming. It also covers various robot programming methods such as manual, walkthrough, leadthrough, and offline programming. Specific programming languages and their applications are also summarized.
This document provides an overview of mechatronics systems. It defines mechatronics as the synergistic integration of mechanical engineering, electronics, and computer technology. Mechatronics systems combine sensors, actuators, signal conditioning, power electronics, decision-making algorithms, and computer hardware/software. The document discusses the evolution of mechatronics through the industrial, semiconductor, and information revolutions. It also outlines the key elements of a mechatronics system, including actuators/sensors, signal conditioning, digital logic, software/data acquisition, and computers/displays. Examples of mechatronics applications are provided.
Robotics and Automation - Power Sources and Sensors (JAIGANESH SEKAR)
Hydraulic, pneumatic and electric drives – determination of HP of motor and gearing ratio – variable speed arrangements – path determination – micro machines in robotics – machine vision – ranging – laser – acoustic – magnetic, fiber optic and tactile sensors.
Introduction
Types of end effectors
Mechanical gripper
Other types of grippers
Tools as end effectors
The Robot/End effectors interface
Considerations in gripper selection and design
IAETSD - VLSI Implementation of an Efficient Convolutional Encoder and Modified Viterbi Decoder (Iaetsd)
This document describes the implementation of an efficient convolutional encoder and modified Viterbi decoder using FPGA technology. It begins with an introduction to convolutional encoding and Viterbi decoding. It then discusses the implementation of a convolutional encoder, including the state diagram and state table. Next, it describes the Viterbi algorithm and its components: branch metric calculation, path metric calculation using an add-compare-select unit, and traceback. It introduces the modified Viterbi algorithm and how it reduces computations and path storage requirements. It presents the design of the convolutional encoder and modified Viterbi decoder in Verilog HDL. Finally, it shows simulation results of the convolutional encoder and components of the modified Viterbi decoder.
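The convolutional encoder described above can be sketched in software for the common rate-1/2, constraint-length-3 code with octal generators (7, 5). This is a textbook configuration assumed for illustration, not necessarily the exact code used in the paper:

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder with octal generators (7, 5).

    For each input bit the encoder emits two output bits: one XOR of the
    taps selected by generator 111, one for generator 101.
    """
    s1 = s2 = 0          # shift-register contents: the previous two input bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 7 (binary 111): all three taps
        out.append(b ^ s2)       # generator 5 (binary 101): first and last tap
        s2, s1 = s1, b           # shift the register
    return out
```

For the standard worked example, input 1011 encodes to 11 10 00 01 with this generator pair.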
The document describes the design and implementation of a charge-coupled device (CCD) based optical spectrometer. The spectrometer uses a diffraction grating and CCD array to detect light wavelengths from 600-900nm and displays the results graphically on a Windows computer. Specifically, it involves:
1) Designing the spectrometer to disperse light into wavelengths using a diffraction grating.
2) Using a CCD detector to convert light wavelengths into electrical signals proportional to intensity.
3) Interfacing the CCD to a computer via an analog-to-digital converter and microcontroller to digitize and buffer the signals.
4) Developing Windows software to plot the spectrum recovered from the CCD.
IAETSD - Designing of CMOS Image Sensor Test-Chip and its Characterization (Iaetsd)
This document describes the design and testing of a CMOS image sensor test chip. It discusses the development of the test chip, including the circuit design using OrCAD and layout using CADstar. A VHDL code was developed to generate drive signals for the sensor using an FPGA board. The CMOS image sensor test chip was able to detect images in various lighting conditions and output digital data. The sensor was characterized and achieved specifications such as integration time, frame rate, power consumption, sensitivity and dark current. The test results demonstrated the functionality of the CMOS image sensor.
This document provides information about various camera settings and technologies for capturing clear images, including:
1. Clear Scan helps eliminate banding caused when a camera's frame rate does not match a CRT display's refresh rate.
2. Slow Shutter extends the camera's exposure time to produce blur effects or allow more light in low-light scenes.
3. Super Sampling uses a 1080p camera to produce sharper 720p images by maintaining higher frequency response.
4. Detail correction adds a spike-shaped detail signal to make edges appear sharper without degrading resolution. Settings like detail level and H/V ratio control the amount and balance of detail correction.
5. Other topics covered
Machine vision uses video cameras, lighting, and image processing to analyze physical objects. A video camera's CCD converts light into electrical signals, which are converted to digital signals through analog-to-digital conversion. Image processing includes data reduction, segmentation, feature extraction, and object recognition to analyze images and identify objects. Machine vision is commonly used for industrial inspection and automation applications with robots.
This document is a seminar report on digital image processing submitted by a student, N.Ch. Karthik, in partial fulfillment of a Bachelor of Technology degree. It discusses correcting raw images by subtracting dark current and bias, flat fielding for pixel sensitivity variations, and displaying images by limiting histograms, using transfer functions, and histogram equalization. The report also covers mathematical image manipulations and references other works.
The document outlines the course objectives, outcomes, examination scheme, and units of a Computer Graphics course. The course aims to acquaint students with basic concepts, algorithms, and techniques of computer graphics through understanding, applying, and creating graphics using OpenGL. Students will learn about primitives, transformations, projections, lighting, shading, animation and gaming. The course assessment includes a mid-semester test, end-semester test, and covers topics ranging from graphics primitives to fractals and animation.
Real time-image-processing-applied-to-traffic-queue-detection-algorithmajayrampelli
This document describes a real-time image processing algorithm for detecting traffic queues. The algorithm uses two operations: motion detection and vehicle detection. Motion detection involves differencing consecutive image frames and comparing the difference to a threshold. Vehicle detection applies edge detection techniques to image profiles. The algorithm aims to measure queue parameters like length, occurrence period, and slope in real-time using low-cost systems. It processes image sub-profiles to reduce computation time. Experimental results found the algorithm could measure queue length with 95% accuracy.
Hard Decision Viterbi Decoder: Implementation on FPGA and Comparison of Resou...IJERA Editor
Viterbi decoder is the most important part in receiver section of wireless communication system. Viterbi
algorithm is one of the most common decoding algorithms used for decoding convolutional codes. Viterbi
decoder employs Maximum Likelihood technique to decode the convolutionally encoded data stream. In
practice, most of the communication systems use Viterbi decoding scheme in processors. But increased interest
on high speed Viterbi decoder in a single chip forced to realize with the speed of Giga-bit per second, without
using memory or off-chip processors. Implementation of proposed design and realization is possible due to
advanced Field Programmable Gate Array (FPGA) technologies and advanced Electronic Design Automatic
(EDA) tools. This paper involves designing an optimum structure of Viterbi decoder and implementing it on
different FPGA devices to compare the resource utilizations.
AREA OPTIMIZED FPGA IMPLEMENTATION FOR GENERATION OF RADAR PULSE COM-PRESSION...VLSICS Design
Pulse compression technique is most widely used in radar and communication areas. Its implementation requires an opti-mized and dedicated hardware. The real time implementation places several constraints such as area occupied, power con-sumption, etc. The good design needs optimization of these constraints. This paper concentrates on the design of optimized model which can reduce these. In the proposed architecture a single chip is used for generating the pulse compression se-quence like BPSk, QPSk, 6-PSK and other Polyphase codes. The VLSI architecture is implemented on the Field Programm-able Gate Array (FPGA) as it provides the flexibility of reconfigurability and reprogrammability .It was found that the proposed architecture has generated the pulse compression sequences efficiently while improving some of the parameters like area, power consumption and delay when compared to previous methods.
The document summarizes key differences between vector scan and raster scan displays. Vector scan displays directly draw lines between points by moving the electron beam between endpoints, while raster scan displays sweep the beam across the entire screen in lines from top to bottom. Raster scan is more common as it does not flicker even with complex images and has lower cost and hardware requirements than vector scan. Both methods store images in a frame buffer but raster scan must convert graphics to pixels while vector scan does not.
project ppt on anti counterfeiting technique for credit card transaction systemRekha dudiya
CREDI-CRYPT is a technique that hides sensitive credit card information in a background image using both cryptography and steganography. It encodes credit card details like the card number and CVV using techniques like arithmetic coding and Hamming codes. The encoded text is then embedded into the pixel coefficients of the cover image using an integer wavelet transform and optimal pixel adjustment. This ensures confidentiality and integrity of credit card transactions while transmitting payment details over unsecured networks. The system was designed to be used as a standalone application by small to medium banks and organizations for securely storing credit card data without network transfer.
This document presents an analysis of Binary Phase Shift Keying (BPSK) modulation technique. It discusses the theoretical background of digital modulation and BPSK specifically. It demonstrates the implementation of a BPSK system in MATLAB, including modulation, addition of noise in the channel, and demodulation. The performance of the BPSK system is analyzed by calculating the bit error rate both theoretically and using Monte Carlo simulations in MATLAB. The results show the theoretical and simulation BER values are similar at low SNR but differ slightly at higher SNR. Overall, the document introduces BPSK modulation and demonstrates its implementation and performance analysis in MATLAB.
This PPT gives detailed information about Computer Graphics, Raster Scan System, Random Scan System, CRT Display, Color CRT Monitors, Input and Output Devices
Automatic Real Time Auditorium Power Supply Control using Image Processingidescitation
One of the major problems in the most populated and developing countries like
India, is Energy or Power crisis. Hence, there is a pressing need to conserve power. There
are many simple ways to save electricity, like using the electric and electronic gadgets
whenever and wherever needed and switching them off, while not in use. But in places such
as large auditoriums and meeting halls, there will be a fan or an Air-conditioner keeps
running in unmanned area too, even before the people arrive. This contributes to a
considerable amount of electricity wastage. There are many ways to prevent this wastage,
like, installing IR sensors to detect people etc. These methods are quite costlier and complex
for larger areas. Hence, here we propose a new method of controlling the power supply of
auditoriums using, Image Processing. Here first we take a reference image of an empty
auditorium and any change in that reference image is detected and then according to that
change respective equipments alone are turned on. Thus power wastage is controlled. This is
dual usage system in which a camera is used for detecting people as well as surveillance
purposes. This is very simple, efficient and cheaper technique to save energy. Another big
advantage is, we can extend this up to applications like home automation etc.
Audio Watermarking using Empirical Mode DecompositionIRJET Journal
This document proposes an audio watermarking technique using empirical mode decomposition (EMD). Key points:
1. The audio signal is divided into frames and each frame is decomposed using EMD into intrinsic mode functions (IMFs).
2. A binary watermark obtained from a compressed color image is embedded into the extrema of the last IMF.
3. Experiments show the watermark is robust against attacks like Gaussian, filtering, cropping, resampling and pink noise. However, filtering attacks cause the highest bit error rate, while Gaussian attacks have a lower bit error rate.
4. The technique provides imperceptible watermarking and better performance than other recent schemes.
The document discusses techniques for constant bit rate video streaming over packet switching networks. It proposes adapting variable bit rate video to a constant bit rate by controlling the video encoder's output rate based on buffer level feedback. This allows transporting video over networks using constant bit rate channels while avoiding network congestion issues. The key techniques involve bit allocation to each coding unit based on buffer status, and adjusting encoder quantization parameters to encode units with the allocated bits. Simulation results show the approach maintains constant compression ratio and peak signal-to-noise ratio while varying group of picture size and quality settings.
A digital camera captures images using an image sensor, which converts light into electric signals. The image sensor is typically a CCD or CMOS chip containing arrays of light-sensitive photosites. When the shutter button is pressed, light from the photographed scene hits the image sensor, causing each photosite to generate an electric charge proportional to the light intensity. These charges are converted into digital values and stored as pixels in a memory card. The resolution, or clarity, of the captured images depends on the number of pixels in the image sensor. Common image sensors used in digital cameras include CCD and CMOS sensors.
This document discusses high-speed, high-resolution inspection of flat panel displays. It introduces a distributed image sensor computing system (DISCS) that uses GPUs for parallel processing to enable fast, in-line inspection. The DISCS uses dark-field illumination and line scan cameras for 2D defect detection. Algorithms like binarization and edge detection are implemented on the GPUs. Experimental results on touch panels and glass show inspection of megapixel images in under 3 seconds. Stereoscopic line scanning and moire topography techniques are discussed for 3D surface profiling with nanometer resolution and micrometer depth detection. Phase shifting interferometry is used to extract height maps. The system is designed for industrial inspection and could integrate
This document discusses computer graphics and various computer graphics concepts. It provides definitions and explanations of terms like scan conversion, rasterization, computer graphics, input devices, color displays, video controllers, frame buffers, transformations, lines, circles, fonts, clipping, projections, animation, and random/raster scan displays. It also includes examples of computer graphics concepts and algorithms like Bresenham's line drawing algorithm.
Similar to Aiar. unit v. machine vision 1462642546237 (20)
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
1. Rajarambapu Institute of Technology, Rajaramnagar
Department of Mechanical Engineering
Machine Vision
Unit V
2. Machine Vision
These systems are used to perform tasks which
include:
• selecting parts that are randomly oriented from a
bin or conveyor,
• identifying parts, and
• performing limited inspection.
3. Machine Vision
Machine vision is concerned with the sensing of vision data and its
interpretation by a computer. The typical vision system consists of:
• Camera and digitizing hardware,
• Digital computer, and the hardware and software necessary to
interface them.
This interface hardware and software is often referred to as a
preprocessor. The operation of the vision system consists of three
functions:
• Sensing and digitizing image data
• Image processing and analysis
• Application
5. The Sensing and Digitizing Function
Image sensing requires some type of image
formation device such as a camera and a digitizer
which stores a video frame in the computer
memory.
We divide the sensing and digitizing functions into
several steps:
• Capturing the image of the scene
• Digitizing
6. The Sensing and Digitizing Function
Fig. 5.2 Cross section of a Vidicon tube
7. The Sensing and Digitizing Function
Basic principle of charge-coupled device: (a) Accumulation of an
electron charge in a pixel element: (b) Movement of accumulated
charge through the silicon by changing the voltages on the
electrodes A, B, and C.
8. The Sensing and Digitizing Function
Fig. 5.4 One type of charge-coupled device imager. Register A accumulates the pixel
charges produced by photoconductivity generated by the light image. The B register
stores the lines of pixel charges and transfers each line in turn into register C. Register
C reads out the charges laterally as shown into the amplifier.
10. Lighting Techniques
A. Front light source
1. Front illumination: area flooded such that the surface is the
defining feature of the image.
2. Specular illumination (dark field): used for surface defect
recognition (background dark).
3. Specular illumination (light field): used for surface defect
recognition; camera in line with the reflected rays (background light).
4. Front imager: structured light application; imaged light is
superimposed on the object surface, and the light beam is displaced
as a function of thickness.
11. Lighting Techniques
B. Back light source
1. Rear illumination (lighted field): uses a surface diffuser to
silhouette features; used in parts inspection and basic measurements.
2. Rear illumination (condenser): produces high-contrast images;
useful for high-magnification applications.
3. Rear illumination (collimator): produces a parallel light ray
source; used when the features of the object do not lie in the same
plane.
4. Rear offset illumination: useful to produce feature highlights
when the feature is in a transparent medium.
12. Lighting Techniques
C. Other miscellaneous devices
1. Beam splitter: transmits light along the same optical axis as the
sensor; its advantage is that it can illuminate difficult-to-view
objects.
2. Split mirror: similar to the beam splitter but more efficient,
with lower intensity requirements.
3. Nonselective redirectors: the light source is redirected to
provide proper illumination.
4. Retroreflector: a device that redirects incident rays back to the
sensor; the incident angle can be varied; provides high contrast for
an object between the source and reflector.
5. Double density: a technique used to increase illumination
intensity at the sensor; used with transparent
15. Analog-to-Digital Signal Conversion
Example 5.1 Consider a vision system using a vidicon tube. An analog video signal is
generated for each line of the 512 lines comprising the faceplate. The sampling capability of
the A/D converter is 100 nanoseconds (100 x 10^-9 s). This is the cycle time required to
complete the A/D conversion process for one pixel. Using the American standard of 33.33
milliseconds (33.33 x 10^-3 s) to scan the entire faceplate consisting of 512 lines, determine the sampling
rate and the number of pixels that can be processed per line.
The cycle time per pixel is limited by the A/D conversion process to 100 x 10^-9 s
= 0.1 x 10^-6 s/pixel
The scanning time for the entire faceplate is 33.33 x 10^-3 s
Accordingly, the scanning time for each line is (33.33 x 10^-3 s)/512 lines
= 65.1 x 10^-6 s/line
The number of pixels that can be processed per line is therefore
(65.1 x 10^-6 s/line)/(0.1 x 10^-6 s/pixel) = 651 pixels per line
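The arithmetic of Example 5.1 can be checked with a short script; the values are taken directly from the example.

```python
# Check the Example 5.1 arithmetic: pixels per line from the A/D
# cycle time and the total frame scan time.
cycle_time = 100e-9    # A/D conversion time per pixel, in seconds
frame_time = 33.33e-3  # time to scan the entire faceplate, in seconds
lines = 512            # number of lines on the faceplate

line_time = frame_time / lines           # time available per line
pixels_per_line = line_time / cycle_time

print(f"time per line = {line_time * 1e6:.1f} microseconds")
print(f"pixels per line = {pixels_per_line:.0f}")
```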
16. Quantization
Each sampled discrete-time voltage level is assigned to one of a finite number of defined
amplitude levels. These amplitude levels correspond to the gray scale used in the
system. The predefined amplitude levels are characteristic of a particular A/D
converter and consist of a set of discrete values of voltage levels.
The number of quantization levels is defined by
Number of quantization levels = 2^n
where n is the number of bits of the A/D converter. A larger number of bits enables
a signal to be represented more precisely. For example, an 8-bit converter would
allow us to quantize at 2^8 = 256 different values, whereas 4 bits would allow only
2^4 = 16 different quantization levels.
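The relation between bit depth and the number of quantization levels can be tabulated with a few lines of code:

```python
# Number of quantization levels for an n-bit A/D converter: 2**n.
def quantization_levels(n_bits):
    return 2 ** n_bits

for n in (4, 6, 8):
    print(f"{n}-bit converter: {quantization_levels(n)} levels")
```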
17. Encoding
The amplitude levels that are quantized must be changed into digital code. This
process, termed encoding, involves representing an amplitude level by a binary
digit sequence. The ability of the encoding process to distinguish between various
amplitude levels is a function of the spacing of each quantization level. Given the
full-scale range of an analog video signal, the spacing of each level is
defined by
Quantization level spacing = (full-scale range)/(number of quantization levels) = (full-scale range)/2^n
The quantization error resulting from the quantization process is defined as
Quantization error = 1/2 x (quantization level spacing)
18. Encoding
Example 5.2 A continuous video voltage signal is to be converted into a discrete
signal. The range of the signal after amplification is 0 to 5 V. The A/D converter
has an 8-bit capacity. Determine the number of quantization levels, the
quantization level spacing, the resolution, and the quantization error.
For an 8-bit capacity, the number of quantization levels is 2^8 = 256.
The A/D converter resolution = 1/256 = 0.0039, or 0.39 per cent.
For the 5 V range, quantization level spacing = 5 V/256 = 0.0195 V
Quantization error = 1/2 x (0.0195 V) = 0.00975 V
To represent the voltage signal in binary form involves the process of encoding.
This is accomplished by assigning a sequence of binary digits to represent
increasing quantization levels.
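The quantities of Example 5.2 follow directly from the level-spacing and error formulas; a quick check:

```python
# Example 5.2 quantities for an 8-bit converter over a 0-5 V range.
full_scale = 5.0   # volts
n_bits = 8

levels = 2 ** n_bits              # number of quantization levels
spacing = full_scale / levels     # quantization level spacing, V
resolution = 1 / levels           # converter resolution
error = spacing / 2               # maximum quantization error, V

print(f"levels = {levels}")
print(f"spacing = {spacing:.4f} V, resolution = {resolution:.4f}, "
      f"error = {error:.5f} V")
```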
19. Encoding
Example 5.3 For the A/D converter of Example 5.2, indicate how the voltage
signal might be indicated in binary form. Of the eight bits available, we can use
these bits to represent increasing quantization levels as follows:
Voltage range, V    Binary number    Gray scale
0-0.0195            0000 0000        0 (black)
0.0195-0.0390       0000 0001        1 (dark gray)
0.0390-0.0585       0000 0010        2
...                 ...              ...
4.9610-4.9805       1111 1110        254 (light gray)
4.9805-5.0          1111 1111        255 (white)
If we had a voltage signal which, for example, varied over a ±5 V range, we
could then assign the first bit to indicate polarity and use the succeeding 7 bits
to identify specific voltage ranges. The number of quantization levels would be
2^7 = 128 and the quantization level spacing would be 5 V divided by 128, or 0.039 V.
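The encoding in the table above amounts to dividing the voltage by the level spacing and writing the resulting level index as a binary digit sequence. A minimal sketch (the function name is illustrative):

```python
# Encode a voltage into an 8-bit binary code as in Example 5.3:
# the level index is the voltage divided by the level spacing,
# clamped so that full scale maps to the top level.
def encode(voltage, full_scale=5.0, n_bits=8):
    levels = 2 ** n_bits
    spacing = full_scale / levels
    index = min(int(voltage / spacing), levels - 1)
    return format(index, f"0{n_bits}b")

print(encode(0.01))  # level 0 (black):     00000000
print(encode(0.03))  # level 1 (dark gray): 00000001
print(encode(5.00))  # level 255 (white):   11111111
```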
20. Image Storage
• The image is stored in computer memory, typically called a frame buffer. This
buffer may be part of the frame grabber or in the computer itself.
• The frame grabber is one example of a video data acquisition device that will
store a digitized picture and acquire it in 1/30 s.
• Digital frames are typically quantized to 8 bits per pixel. However, a 6-bit
buffer is adequate since the average camera system cannot produce 8 bits
of noise-free data. Thus the lower-order bits are dropped as a means of
noise cleaning. In addition, the human eye can only resolve about 2^6 = 64
gray levels.
• A combination of row and column counters is used in the frame grabber,
synchronized with the scanning of the electron beam in the
camera. Thus, each position on the screen can be uniquely addressed.
• To read the information stored in the frame buffer, the data is 'grabbed' via a
signal sent from the computer to the address corresponding to a row-column
combination.
21. Image Processing And Analysis
1. Image data reduction
i. Digital conversion
ii. Windowing
2. Segmentation
i. Thresholding
ii. Region growing
iii. Edge detection
3. Feature extraction
4. Object recognition
22. Image Data Reduction
In image data reduction, the objective is to reduce the volume of data. As a preliminary
step in the data analysis, the following two schemes have found common usage for data
reduction:
• Digital conversion
• Windowing
The function of both schemes is to eliminate the bottleneck that can occur from the large
volume of data in image processing.
Digital conversion reduces the number of gray levels used by the machine vision system.
For example, an 8-bit register used for each pixel would have 2^8 = 256 gray levels.
Depending on the requirements of the application, digital conversion can be used to
reduce the number of gray levels by using fewer bits to represent the pixel light intensity.
Four bits would reduce the number of gray levels to 16. This kind of conversion would
significantly reduce the magnitude of the image-processing problem.
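Reducing the bit depth as described can be done by dropping the lower-order bits of each pixel; a sketch with hypothetical pixel values:

```python
# Digital conversion sketch: reduce 8-bit gray values (0-255)
# to 4-bit gray values (0-15) by dropping the low-order bits.
def reduce_gray(pixel, keep_bits=4):
    return pixel >> (8 - keep_bits)

row = [0, 17, 128, 200, 255]          # hypothetical pixel values
print([reduce_gray(p) for p in row])  # only 16 gray levels remain
```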
23. Image Data Reduction
Example 5.4 An image is digitized at 128 points per line and 128 lines. Determine:
a. the total number of bits required to represent the gray level values if an 8-bit A/D
converter is used to indicate various shades of gray, and
b. the reduction in data volume if only black and white values are digitized.
a. For gray scale imaging with 2^8 = 256 levels of gray, each pixel requires 8 bits:
Number of bits = 128 x 128 x 8 = 131,072 bits
b. For black and white (binary bit conversion), each pixel requires 1 bit:
Number of bits = 128 x 128 x 1 = 16,384 bits
Reduction in data volume = 131,072 - 16,384
= 114,688 bits
24. Image Data Reduction
Windowing
Windowing involves using only a portion of the total image stored in the frame
buffer for image processing and analysis. This portion is called the window. For
example, for inspection of printed circuit boards, one may wish to inspect and
analyze only one component on the board. A rectangular window is selected to
surround the component of interest and only pixels within the window are
analyzed. The rationale for windowing is that proper recognition of an object
requires only certain portions of the total scene.
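Windowing corresponds to selecting a sub-array of the frame buffer; the frame contents and window bounds below are made up for illustration:

```python
# Windowing sketch: process only a rectangular portion of the
# stored frame. The frame is modeled as a list of pixel rows.
frame = [
    [10,  10,  10,  10, 10],
    [10, 200, 210, 205, 10],
    [10, 198, 220, 202, 10],
    [10,  10,  10,  10, 10],
]
# Window covering rows 1-2 and columns 1-3 (the component of interest).
window = [row[1:4] for row in frame[1:3]]
print(window)  # only these pixels are analyzed further
```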
25. Segmentation
In segmentation, the objective is to group areas of an image having similar characteristics
or features into distinct entities representing parts of the image. For example, boundaries
(edges) or regions (areas) represent two natural segments of an image.
Three important techniques that we will discuss are:
1. Thresholding
2. Region growing
3. Edge detection
In its simplest form, thresholding is a binary conversion technique in which each pixel is
converted into a binary value, either black or white. This is accomplished by utilizing a
frequency histogram of the image and establishing what intensity (gray level) is to be the
border between black and white. This is illustrated for an image of an object as shown in
Fig. 5.6. Fig. 5.6(a) shows a regular image with each pixel having a specific gray tone out
of 256 possible gray levels. The histogram of Fig. 5.6(b) plots the frequency (number of
pixels) versus the gray level for the image. For histograms that are bimodal in shape,
each peak of the histogram represents either the object itself or the background upon
which the object rests. Since we are trying to differentiate between the object and
background, the procedure is to establish a threshold (typically between the two peaks)
and assign, for example, a binary bit 1 for the object and 0 for the background. The
outcome of this thresholding technique is illustrated in the binary-digitized image of Fig.
5.6(c).
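The histogram-and-threshold step described above can be sketched as follows, assuming a bright object on a dark background; the function names are illustrative:

```python
def histogram(image, levels=256):
    """Frequency (number of pixels) at each gray level, as in Fig. 5.6(b)."""
    h = [0] * levels
    for row in image:
        for p in row:
            h[p] += 1
    return h

def binarize(image, threshold):
    """Assign binary 1 to pixels at or above the threshold (the object,
    assumed brighter than the background) and 0 below it."""
    return [[1 if p >= threshold else 0 for p in row] for row in image]

image = [[10, 12, 200],
         [11, 201, 198]]
h = histogram(image)
print(binarize(image, 100))  # prints: [[0, 0, 1], [0, 1, 1]]
```

In practice the threshold would be read off the valley between the two peaks of the histogram rather than fixed by hand.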
26. Thresholding
Fig. 5.6 Obtaining a binary image by thresholding: (a) Image of object with all gray levels present, (b)
Histogram of image, (c) Binary image of object after thresholding
27. Region Growing
Region growing is a collection of segmentation techniques in which pixels are
grouped in regions called grid elements based on attribute similarities. Defined
regions can then be examined as to whether they are independent or can be
merged with other regions by means of an analysis of the difference in their
average properties and spatial connectiveness.
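A minimal sketch of one region-growing variant: pixels 4-connected to the seed are grouped whenever their gray level stays within a tolerance of the seed's. The tolerance rule is an illustrative simplification of the average-property test described above:

```python
from collections import deque

def grow_region(image, seed, tol):
    """Grow a region from the seed pixel: group 4-connected pixels whose
    gray level differs from the seed's by at most tol."""
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

image = [[10, 12, 90],
         [11, 95, 92]]
print(sorted(grow_region(image, (0, 0), 5)))  # prints: [(0, 0), (0, 1), (1, 0)]
```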
28. Edge Detection
Edge detection considers the intensity change that occurs in the pixels at the
boundary or edges of a part. Given that a region of similar attributes has been
found but the boundary shape is unknown, the boundary can be determined by a
simple edge-following procedure. This can be illustrated by the schematic of a
binary image as shown below. For the binary image, the procedure is to scan the
image until a pixel within the region is encountered. For a pixel within the
region, turn left and step; otherwise, turn right and step. The procedure is
stopped when the boundary is traversed and the path has returned to the starting
pixel. The contour-following procedure described can be extended to gray level
images.
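The left/right turning rule can be coded directly for a binary image. A sketch, assuming the starting pixel lies on the boundary of the region:

```python
def follow_boundary(img, start):
    """Trace a region boundary with the rule from the text: on an object
    pixel turn left and step, otherwise turn right and step; stop when
    the path returns to the starting pixel."""
    rows, cols = len(img), len(img[0])
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left
    r, c = start
    d = 1                                      # initial heading: right
    path = []
    while True:
        if 0 <= r < rows and 0 <= c < cols and img[r][c] == 1:
            path.append((r, c))
            d = (d - 1) % 4                    # inside region: turn left
        else:
            d = (d + 1) % 4                    # outside region: turn right
        r, c = r + dirs[d][0], c + dirs[d][1]  # step
        if (r, c) == start:
            break
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(follow_boundary(grid, (1, 1)))  # prints: [(1, 1), (1, 2), (2, 2), (2, 1)]
```

A known limitation of this simple tracer (relevant to Problem 5.6 below) is that "return to start" can stop the trace early on regions with narrow necks or concavities.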
29. Feature Extraction
Gray level (maximum, average, or minimum)
Area
Perimeter length
Diameter
Minimum enclosing rectangle
Center of gravity—For all pixels (n) in a region where each pixel is specified by its (x, y)
coordinates, the x and y coordinates of the center of gravity are defined as
x̄ = (Σ x)/n and ȳ = (Σ y)/n, the sums taken over all n pixels of the region
Eccentricity—A measure of 'elongation'. Several measures exist, of which the simplest is
Eccentricity = A/B
where A is the maximum chord length and the maximum chord length B is chosen perpendicular to A.
Aspect ratio—The length-to-width ratio of a boundary rectangle which encloses the
object
30. Image Processing And Analysis
One objective is to find the rectangle which gives the minimum aspect ratio.
Thinness—This is a measure of how thin an object is. Two definitions are in use:
(a) Thinness = (perimeter)^2 / area
This is also referred to as compactness.
(b) Thinness = diameter / area
The diameter of an object, regardless of its shape, is the maximum distance obtainable
for two points on the boundary of the object.
Holes—Number of holes in the object.
Moments—Given a region, R, and coordinates of the points (x, y) in or on the
boundary of the region, the pqth-order moment of the image of the region is given as
M_pq = Σ x^p y^q, where the sum is taken over all points (x, y) of R
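Several of the listed features follow directly from the pixel coordinates of a region. A minimal sketch, assuming the region is given as a list of (x, y) pixels:

```python
def region_features(pixels):
    """Area, center of gravity, and pq-order moment M_pq = sum of x^p * y^q
    for a region given as a list of (x, y) pixel coordinates."""
    n = len(pixels)
    area = n                                   # one unit of area per pixel
    cg = (sum(x for x, _ in pixels) / n,       # x-bar
          sum(y for _, y in pixels) / n)       # y-bar
    moment = lambda p, q: sum((x ** p) * (y ** q) for x, y in pixels)
    return area, cg, moment

pixels = [(0, 0), (1, 0), (0, 1), (1, 1)]      # a 2x2 square region
area, cg, moment = region_features(pixels)
print(area, cg, moment(0, 0), moment(1, 0))    # prints: 4 (0.5, 0.5) 4 2
```

Note that M_00 is simply the area and (M_10/M_00, M_01/M_00) recovers the center of gravity.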
31. Image Processing And Analysis
Example 5.5 Consider the schematic of the image in Fig. 5.9. Determine
the area, the minimum aspect ratio, the diameter, the centroid, and the
thinness measures of the image.
34. Robotic Applications
The three levels of difficulty used to categorize machine vision
applications in an industrial setting are:
1. The object can be controlled in both position and appearance.
2. Either position or appearance of the object can be controlled but not
both.
3. Neither position nor appearance of the object can be controlled.
Robotic applications of machine vision fall into the three broad
categories listed below:
1. Inspection
2. Identification
3. Visual servoing and navigation
35. Problems
Consider a vision system which provides one frame of 256
lines every 1/2 s. The system is a raster scan system.
Assume that the time for the electron beam to move from
one line to the next takes 15 per cent of the time to scan a
single line. Determine the sampling rate for the system if it
is specified that there will be 320 pixels on each line.
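One way to set up the arithmetic for this problem, offered as a sketch rather than the official solution; it treats the line-to-line retrace as 15 per cent extra time added to each line's active scan time:

```python
frame_time = 0.5        # seconds per frame
lines = 256
retrace = 0.15          # line-to-line move takes 15% of a line's scan time
pixels_per_line = 320

scan_time = frame_time / (lines * (1 + retrace))  # active scan time per line
sampling_rate = pixels_per_line / scan_time       # pixels sampled per second
print(round(sampling_rate))  # prints: 188416
```

That is, roughly 188.4 kHz under this assumption.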
36. Problems
A video voltage signal is to be digitized by an A/D
converter. The maximum voltage range is +15 V. The
A/D converter has a 6-bit capacity. Determine the
number of quantizing levels, the quantization level
spacing, and the quantization error.
For the A/D converter of Prob. 5.2, indicate how the
voltage signal might be expressed in binary form.
Compare the answer with one obtained for an A/D
converter of only 4-bit capacity.
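The quantities asked for in Prob. 5.2 follow directly from the bit capacity. A sketch, assuming the 15 V range is divided evenly among the levels and the quantization error is taken as half the level spacing:

```python
bits = 6
full_scale = 15.0              # volts
levels = 2 ** bits             # number of quantizing levels
spacing = full_scale / levels  # quantization level spacing (volts)
error = spacing / 2            # quantization error: half the spacing
print(levels, spacing, error)  # prints: 64 0.234375 0.1171875
```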
37. Problems
Consider the image shown in fig. Determine the area,
the minimum aspect ratio, the diameter, the centroid and
the thinness measures of the image.
38. Problems
For the image in the accompanying sketch in which
white pixels are designated as 1s on a black background
of 0s, determine (a) the area, (b) the minimum aspect
ratio, (c) the eccentricity, and (d) the centroid of the
image. Choose the coordinates of the frame in the lower
left-hand corner as indicated.
39. Problems
Consider the image representation of an object as given in
the figure. White pixels are designated as 1s on a black
background of 0s. Determine the boundary of the object by
using the edge following procedure as discussed in the text.
Start in the upper left-hand corner. Discuss any limitations of
the technique.