This document summarizes camera calibration methods. It begins by introducing the pinhole camera model and describing its four-step process: transforming world coordinates to camera coordinates, projecting to the image plane, modeling lens distortion, and converting to image coordinates. It then surveys several calibration methods, including the method of Hall, which uses a linear transformation matrix, and the method of Faugeras-Toscani, which obtains camera parameters through an iterative process accounting for radial distortion. The document focuses on explaining the method of Hall in detail, showing how its modeling leads to a system of linear equations that can be solved with a pseudoinverse to obtain the calibration parameters.
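As a minimal sketch of the linear formulation described above (not the document's exact derivation), Hall's method can be implemented by fixing the last entry of the 3x4 transformation matrix to 1, stacking two linear equations per 3D-2D correspondence, and solving the 11 remaining unknowns with the Moore-Penrose pseudoinverse. The matrix and point set below are synthetic, chosen only to verify the recovery:

```python
import numpy as np

def hall_calibrate(world_pts, image_pts):
    """Estimate the 3x4 transformation matrix of Hall's method.

    Fixes T[2,3] = 1 and solves the remaining 11 unknowns in a
    least-squares sense with the Moore-Penrose pseudoinverse.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # u*(t31*X + t32*Y + t33*Z + 1) = t11*X + t12*Y + t13*Z + t14
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    p = np.linalg.pinv(np.array(A)) @ np.array(b)
    return np.append(p, 1.0).reshape(3, 4)  # restore T[2,3] = 1

# Synthetic check: project points with a known matrix, then recover it.
T_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.1, 0.05, 1.0, 1.0]])
rng = np.random.default_rng(0)
world = rng.uniform(0, 5, size=(12, 3))
homog = np.hstack([world, np.ones((12, 1))])
proj = homog @ T_true.T
img = proj[:, :2] / proj[:, 2:3]
T_est = hall_calibrate(world, img)
```

With noise-free correspondences the pseudoinverse recovers the matrix exactly (up to floating-point error); with noisy image points the same call returns the least-squares estimate.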
The document discusses camera calibration techniques. It aims to determine intrinsic camera parameters like focal length and optical center, and extrinsic parameters like the camera's position and orientation in 3D space. Zhang's algorithm is described, which allows estimating these parameters using a planar calibration target. It formulates the camera projection model and shows how to estimate the homography H relating the target's 3D points to 2D image points. H is defined up to a scale factor, so the absolute scale of the scene cannot be determined from this calibration alone. Constraints are also described to impose orthonormality of the rotation vectors.
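The homography estimation step of Zhang's algorithm can be sketched with a standard DLT: each plane-to-image correspondence contributes two rows of a homogeneous system, and the null vector of that system (via SVD) gives H up to scale. The matrix `H_true` below is an invented example used only to check the recovery; the scale ambiguity noted above is resolved here by normalizing so that `H[2,2] = 1`:

```python
import numpy as np

def estimate_homography(plane_pts, image_pts):
    """DLT estimate of the homography H (up to scale) mapping plane -> image."""
    A = []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)  # right singular vector of smallest singular value
    return H / H[2, 2]        # fix the free scale for comparison

H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, 3.0],
                   [0.001, 0.002, 1.0]])
plane = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3), (3, 1)]
img = []
for X, Y in plane:
    p = H_true @ np.array([X, Y, 1.0])
    img.append((p[0] / p[2], p[1] / p[2]))
H_est = estimate_homography(plane, img)
```

Four correspondences already determine H; the extra points make the sketch robust in the least-squares sense when measurements are noisy.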
Camera calibration involves determining the internal camera parameters like focal length, image center, distortion, and scaling factors that affect the imaging process. These parameters are important for applications like 3D reconstruction and robotics that require understanding the relationship between 3D world points and their 2D projections in an image. The document describes estimating internal parameters by taking images of a calibration target with known geometry and solving the equations that relate the 3D target points to their 2D image locations. Homogeneous coordinates and projection matrices are used to represent the calibration transformations mathematically.
Image Interpolation Techniques with Optical and Digital Zoom Concepts
Digital image concepts and interpolation techniques for optical and digital zoom are discussed. There are three main types of interpolation used for resizing images: nearest neighbor, bilinear, and bicubic. Nearest neighbor is the simplest but produces the lowest quality, while bicubic is the most complex but highest quality. Optical zoom uses lens magnification before sensing, whereas digital zoom interpolates after sensing, resulting in lower quality than optical zoom. Interpolation methods assign pixel values to new locations during resizing based on weighting patterns around the original pixel values.
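The weighting idea described above can be made concrete with bilinear interpolation, the middle option of the three: each output pixel is a weighted average of the four nearest source pixels, with weights given by the fractional distance to each. This is an illustrative sketch of digital zoom, not a production resizer:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (a digital-zoom sketch)."""
    in_h, in_w = img.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Map the output pixel back into source coordinates.
            y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four surrounding source pixels.
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out

small = np.array([[0.0, 10.0],
                  [20.0, 30.0]])
big = bilinear_resize(small, 3, 3)  # the new centre pixel averages all four
```

Nearest neighbor would instead copy the closest pixel (blocky result); bicubic fits a cubic through a 4x4 neighborhood (smoother, more computation), which is the quality ordering the summary describes.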
The document discusses image denoising techniques based on partial differential equations (PDEs). It begins by defining image noise and describing conventional denoising filters like averaging and median filters. It then focuses on diffusion-based denoising methods, particularly the influential 1987 work of Perona and Malik which introduced nonlinear anisotropic diffusion. Their approach uses an edge-stopping function to reduce diffusion near edges. The document outlines linear and nonlinear diffusion models, conditions for the diffusion coefficient function, and extensions of the Perona-Malik model. It summarizes a 2014 paper proposing a robust anisotropic diffusion scheme using novel variants of the edge-stopping function and diffusivity parameter computation.
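A minimal sketch of the Perona-Malik scheme described above, using the exponential edge-stopping function g(s) = exp(-(s/kappa)^2) from the 1987 paper. The discretization below uses periodic borders via `np.roll` for brevity (an assumption for compactness, not part of the original formulation), and a step size lam <= 0.25 for stability:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion with exponential edge stopping."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic border).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping weights: small gradient -> strong diffusion,
        # large gradient (an edge) -> almost no diffusion.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy two-level image: smoothing inside each region, edge preserved.
noisy = np.array([[0.0, 0.1, 0.9, 1.0],
                  [0.1, 0.0, 1.0, 0.9],
                  [0.0, 0.1, 0.9, 1.0],
                  [0.1, 0.0, 1.0, 0.9]])
smoothed = perona_malik(noisy, n_iter=30)
```

The parameter kappa plays the role of the diffusivity threshold: gradients well below it are smoothed away, gradients well above it are treated as edges and left intact, which is exactly the edge-stopping behavior the summary attributes to the method.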
The document summarizes camera calibration techniques. It discusses:
1) Projecting 3D world points to 2D image points using a projection matrix with intrinsic and extrinsic parameters.
2) Computing camera parameters by estimating the projection matrix from known 3D points and corresponding 2D image points using linear and non-linear optimization methods.
3) Modeling lens distortion and different distortion types that must be accounted for during calibration.
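The lens distortion in point 3 is most commonly modeled as a radial polynomial in the distance from the optical axis. A hedged sketch of that widely used form (one of several distortion types; coefficients k1, k2 are illustrative, not from the document):

```python
import numpy as np

def apply_radial_distortion(x, y, k1, k2):
    """Apply the polynomial radial model to normalized image coordinates.

    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y.
    """
    r2 = x**2 + y**2
    factor = 1 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# A point far from the optical axis is displaced more than one near it.
xd_near, yd_near = apply_radial_distortion(0.01, 0.0, k1=-0.2, k2=0.05)
xd_far, yd_far = apply_radial_distortion(0.5, 0.0, k1=-0.2, k2=0.05)
```

Negative k1 models barrel distortion (points pulled toward the center); positive k1 models pincushion distortion. Calibration estimates these coefficients jointly with the projection matrix, typically in the non-linear refinement step mentioned in point 2.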
Requirements of a sensor; principles and applications of the following types of sensors: position sensors (piezoelectric sensors, LVDTs, resolvers, optical encoders, pneumatic position sensors), range sensors (triangulation principles, structured-lighting approach, time of flight, range finders, laser range meters), touch sensors (binary sensors, analog sensors), wrist sensors, compliance sensors, slip sensors. Camera, frame grabber, sensing and digitizing image data: signal conversion, image storage, lighting techniques. Image processing and analysis: data reduction, segmentation, feature extraction, object recognition, other algorithms. Applications: inspection, identification, visual servoing and navigation.
Edge detection algorithms identify points in a digital image where the image brightness changes sharply or has discontinuities. Common edge detection methods include gradient operators like Prewitt and Sobel, the Laplacian of Gaussian (LoG) used in Marr-Hildreth edge detection, and the Canny edge detector. The Canny edge detector applies smoothing, finds the image gradient, performs non-maximum suppression and double thresholding to detect edges with good localization and a single response to each edge.
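The gradient stage shared by Sobel and Canny can be sketched directly: convolve the image with the Sobel kernels and combine the two responses into a gradient magnitude. This is only the gradient step, not the full Canny pipeline (smoothing, non-maximum suppression, and double thresholding would follow):

```python
import numpy as np

def sobel_gradient(img):
    """Gradient magnitude with the Sobel operator (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal
    ky = kx.T                                                   # vertical
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # combined gradient magnitude

# A vertical step edge: the response peaks on the columns around the step.
step = np.zeros((5, 6))
step[:, 3:] = 1.0
mag = sobel_gradient(step)
```

Non-maximum suppression would then thin the two-column-wide response to a single pixel per edge, which is what gives Canny its "single response to each edge" property mentioned above.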
This document discusses Fourier descriptors and moments which are used in object recognition and image processing. Fourier descriptors represent the boundary shape of an object using the coefficients of its Fourier transform. They are useful because they are invariant to scaling, translation, and rotation. Central moments are another type of descriptor that are translation and rotation invariant. Velocity moments describe both shape and motion over time. Moment invariants are derived from moments to be invariant to specific transformations and are commonly used in image analysis applications such as object detection.
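The translation invariance of central moments follows from measuring each pixel's offset from the region's centroid rather than from the image origin. A small sketch verifying that property on a synthetic blob:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D intensity array (translation invariant)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                      # zeroth moment (total mass)
    xbar = (xs * img).sum() / m00        # centroid x
    ybar = (ys * img).sum() / m00        # centroid y
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

# The same 2x2 blob translated inside the frame has identical mu_20.
blob = np.zeros((8, 8))
blob[1:3, 1:3] = 1.0
shifted = np.zeros((8, 8))
shifted[4:6, 5:7] = 1.0
mu20_a = central_moment(blob, 2, 0)
mu20_b = central_moment(shifted, 2, 0)
```

Rotation and scale invariance require further steps (normalized moments and Hu's moment invariants build on these central moments), which is the construction the summary refers to.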
This document discusses machine vision and various components of machine vision systems. It describes different types of sensors used in machine vision like cameras, frame grabbers, and describes the process of sensing and digitizing image data through analog to digital conversion, image storage, and lighting techniques. It also discusses image processing and analysis techniques like segmentation, feature extraction and object recognition. Finally, it provides examples of applications of machine vision systems in inspection, identification, and navigation.
This document discusses image denoising techniques. It begins by defining image denoising as removing unwanted noise from an image to restore the original signal. It then discusses several types of noise like additive Gaussian noise, impulse noise, uniform noise, and periodic noise. For denoising, it covers spatial domain techniques like linear filters (mean, weighted mean), non-linear filters (median filter), and frequency domain techniques that apply a low-pass filter to the Fourier transform of the noisy image. The document provides examples of denoising noisy images using mean and median filters to remove different types of noise.
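The median filter's strength against impulse noise, mentioned above, comes from the fact that a single outlier in a neighborhood never becomes the median. A minimal 3x3 sketch (valid region only, no border padding):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (valid region), effective on impulse noise."""
    h, w = img.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.median(img[i:i + 3, j:j + 3])
    return out

# A single salt pixel in a flat region is removed entirely:
flat = np.full((5, 5), 10.0)
flat[2, 2] = 255.0            # impulse ("salt") noise
clean = median_filter3(flat)  # every window's median is still 10
```

A mean filter on the same image would smear the outlier into its neighbors instead of removing it, which is why the document pairs the two filters with different noise types.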
This document discusses various point processing and gray level transformation techniques used in image enhancement. It describes point processing as operating directly on pixel intensity values individually to alter them using transformation functions. The document outlines several basic gray level transformations including linear, logarithmic and power law. It also discusses piecewise linear transformations such as contrast stretching, intensity level slicing, and bit plane slicing. These transformations are used to enhance images by modifying their brightness, contrast and emphasis on certain gray levels.
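The power-law transformation mentioned above is a one-line point operation, s = c * r^gamma, applied to each pixel independently. A sketch on normalized [0, 1] intensities:

```python
import numpy as np

def power_law(img, gamma, c=1.0):
    """Power-law (gamma) point transform s = c * r**gamma on [0,1] intensities."""
    return c * img ** gamma

dark = np.array([0.0, 0.25, 0.5, 1.0])
brightened = power_law(dark, gamma=0.5)  # gamma < 1 lifts dark values
```

Gamma below 1 expands dark intensity ranges (brightening shadows); gamma above 1 does the opposite. The log transform behaves similarly for very large dynamic ranges, and contrast stretching achieves a related effect piecewise-linearly.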
Image processing involves the alteration and analysis of pictorial information. There are two main methods: optical processing using lenses and electronic processing. Electronic processing can be analog, controlling brightness and contrast, or digital, where images are composed of pixels that can be processed by a computer. Image processing has applications in fields like robotics, medicine, graphics, and satellite imaging. It allows for tasks like image restoration, compression, and segmentation.
The document discusses camera calibration. It describes determining the internal optical parameters (IOP) of cameras like focal length and principal point coordinates. It discusses different distortion models like radial, decentric, and atmospheric distortions. It outlines laboratory, field, and stellar calibration methods. It explains how equivalent focal length, radial distortion, and calibrated focal length are calculated. It also mentions self-calibration, using calibration objects, and bundle adjustment methods for calibration.
Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.
The Inverse Kinematics Problem - Aiman Al-Allaq
The document discusses inverse kinematics and how to solve the inverse kinematics problem for a robotic manipulator. It uses a 5-axis Rhino XR-3 robot as an example. It explains that inverse kinematics determines the joint variables given a desired position and orientation of the tool, which is important for tasks planned using external sensors. It then outlines the step-by-step process used to solve the inverse kinematics problem, which involves using the tool configuration vector obtained from the arm matrix and performing trigonometric operations to isolate each joint variable.
Machine vision uses computer vision techniques to automate inspection and measurement tasks in manufacturing processes. It incorporates computer science, optics, and mechanical engineering. Machine vision systems typically use digital cameras and specialized lenses to capture images that are then processed to check for attributes like dimensions, serial numbers, and defects. Common applications include inspecting semiconductor chips, automobiles, food, and pharmaceuticals. Key components of machine vision systems include cameras, lighting, lenses, and image processing software to analyze the captured images.
The document discusses object recognition in computer vision. It begins with an overview of object recognition, describing it as the task of finding and identifying objects in images. It then discusses several specific applications, including fingerprint recognition and license plate recognition. Fingerprint recognition extracts features called minutiae (ridge endings and bifurcations) from fingerprint images. Automatic license plate recognition (ALPR) systems segment character images, normalize them, and recognize the characters.
This document provides an overview of digital image fundamentals and operations. It defines what a digital image is, how it is represented as a matrix, and common image types like RGB, grayscale, and binary. Pixels, resolution, neighborhoods, and basic relationships between pixels are discussed. The document also covers different types of image operations including point, local, and global operations as well as examples like arithmetic, logical, and geometric transformations. Finally, it introduces concepts of linear and nonlinear operations and announces the topic of the next lecture on image enhancement in the spatial domain.
This document provides an introduction to object recognition, including existing challenges, solutions, examples, and applications. It discusses common object recognition steps and algorithms used. Face detection and recognition examples are provided. Applications discussed include autonomous vehicles, quality control, security monitoring, and more. While object recognition has many applications, it remains a difficult task due to challenges like illumination changes, occlusions, and large datasets.
The document discusses edge detection methods including gradient based approaches like Sobel and zero crossing based techniques like Laplacian of Gaussian. It proposes a new algorithm that applies fuzzy logic to the results of gradient and zero crossing edge detection on an image to more accurately identify edges. The algorithm calculates gradient and zero crossings, applies fuzzy rules to classify pixels, and thresholds to determine final edge pixels.
Digital Image Processing, 3rd edition - Rafael C. Gonzalez, Richard E. Woods
This document provides information about the third edition of the book "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. It includes publication details such as the publisher, editors, and copyright information. The book is dedicated to Samantha and to Janice, David, and Jonathan.
Image segmentation is based on three principal concepts:
1) Detection of discontinuities
2) Thresholding
3) Region processing
Morphological watershed segmentation embodies many of the concepts of all three approaches.
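The thresholding concept in 2) can be illustrated with a basic iterative threshold-selection scheme (a Ridler-Calvard-style sketch, chosen here for brevity; the document itself does not specify this particular algorithm): start from the global mean, then repeatedly set the threshold to the midpoint of the two class means until it stabilizes.

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Basic iterative global threshold selection (Ridler-Calvard style)."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())  # midpoint of the class means
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Bimodal toy image: dark background near 20, bright object near 200.
img = np.array([[20, 25, 22, 200],
                [21, 24, 198, 205],
                [19, 23, 202, 199]], float)
t = iterative_threshold(img)
mask = img > t  # binary segmentation of the bright object
```

On a clearly bimodal histogram like this one the threshold converges between the two modes in a couple of iterations; Otsu's method reaches a similar split by maximizing between-class variance instead.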
This document provides an overview of digital image processing and human vision. It discusses the key stages of digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also covers the anatomy of the human eye, photoreceptors, color perception, image formation in the eye, brightness adaptation, and the Weber ratio relating the just noticeable difference in light intensity to background intensity. The document uses images and diagrams from the textbook "Digital Image Processing" to illustrate concepts in digital images and the human visual system.
This document outlines presentations on computer vision, robotics, and an image analysis paper. It discusses what computer vision and robotics are, provides examples of applications and challenges. It also summarizes a paper on using image analysis to classify Ethiopian coffee varieties by region. Key topics include face recognition, types of robots and their purposes, and examples like Shakey and wall-climbing robots. The future directions discussed include developing universal robots and improving visual recognition and manipulation abilities.
Computer vision has received great attention over the last two decades.
This research field is important not only in security-related software but also in the advanced interface between people and computers, advanced control methods, and many other areas.
The document summarizes different techniques for pattern projection used in 3D shape acquisition, including passive stereo, active stereo using coded structured light, and classifications of pattern projection methods. It discusses techniques such as time multiplexing using binary patterns, spatial codification using De Bruijn sequences, and direct codification using color. Diagrams illustrate concepts like correspondence problems in passive stereo and how encoded patterns can reduce these issues in active stereo.
Lecture 4: Reconstruction from Two Views - Joaquim Salvi
The document summarizes lecture 4 on reconstruction from two views. It discusses techniques for shape reconstruction including shape from X methods using multiple camera views or additional information. It then covers the triangulation principle for reconstructing 3D points from 2D point correspondences in multiple views. Finally, it introduces epipolar geometry which models the geometric relationship between two views and can be used to reconstruct the fundamental matrix and epipolar lines.
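The triangulation principle mentioned above can be sketched with the standard linear (DLT) method: each view's projection constraint contributes two rows of a homogeneous system in the 3D point, solved via SVD. The two camera matrices below are invented for the check (identity pose and a one-unit baseline), not taken from the lecture:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # homogeneous solution
    return X[:3] / X[3]   # dehomogenize

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

X_true = np.array([0.3, -0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noisy correspondences the same SVD solution minimizes an algebraic residual; the epipolar geometry mentioned above constrains which point pairs are valid correspondences in the first place.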
This document outlines the contents of Lecture 1 on rigid body transformations. The key topics covered include Cartesian coordinates, points and vectors, inner products and cross products, translations using translation matrices, rotations using rotation matrices, and homogeneous coordinates for representing transformations. The lecture will define and provide examples of how to represent and compute rigid body transformations such as translations and rotations between coordinate systems.
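The homogeneous-coordinate representation described above packs a rotation and a translation into a single 4x4 matrix, so rigid transformations compose by matrix multiplication. A small sketch for a rotation about the z axis followed by a translation:

```python
import numpy as np

def make_transform(theta_deg, tx, ty, tz):
    """4x4 homogeneous transform: rotate about z by theta, then translate."""
    th = np.radians(theta_deg)
    T = np.eye(4)
    T[:3, :3] = np.array([[np.cos(th), -np.sin(th), 0],
                          [np.sin(th),  np.cos(th), 0],
                          [0,           0,          1]])
    T[:3, 3] = [tx, ty, tz]
    return T

# Rotate the point (1, 0, 0) by 90 degrees about z, then shift 2 along x.
T = make_transform(90, 2, 0, 0)
p = T @ np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point, w = 1
```

Chaining coordinate systems is then just `T_ac = T_ab @ T_bc`, which is the main computational payoff of the homogeneous representation covered in the lecture.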
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
Film vocab for eal 3 students: Australia the movie
Lecture 2: Camera Calibration
Joaquim Salvi
Universitat de Girona
Visual Perception
Contents
2. Camera Calibration
2.1 Calibration introduction
2.2 The pinhole model
2.3 The method of Hall
2.4 The method of Faugeras-Toscani – Modelling
2.5 The method of Faugeras-Toscani – Calibration
2.6 The method of Faugeras-Toscani with distortion
2.7 Experimental comparison of methods
2.1 Calibration Introduction
• Some applications of this capability include:
– Dense reconstruction
– Visual inspection
– Object localization
– Camera localization
2.1 Calibration Introduction – Perspective Imaging
[Figure: "The School of Athens," Raphael, 1518. Image courtesy of C. Taylor.]
2.1 Calibration Introduction
[Figure: a world point P_w, expressed in the world frame {W} (origin O_W, axes X_W, Y_W, Z_W), is projected through the focal point onto the image plane, giving the image point P_u in the image frame {I} (origin O_I, axes X_I, Y_I). Image coordinates are measured in pixels; the corresponding metric relation involves the focal distance l.]
2.1 Calibration Introduction
Modelling gives G(X); calibration finds X.
Modelling:
• Determine the equation G(X) that approximates the camera behaviour.
• Define the set of unknowns X in the equation (the camera parameters).
• The camera model is an approximation of the physics & optics of the camera.
Calibration:
• Get the numeric value of every camera parameter.
2.2 Pinhole Model
[Figure: the four coordinate systems of the pinhole model — world {W} (O_W; X_W, Y_W, Z_W), camera {C} (O_C; X_C, Y_C, Z_C) with focal distance f and principal point (u0, v0), retinal {R} (O_R; X_R, Y_R) and image {I} (O_I; X_I, Y_I). A world point P_w projects to the ideal point P_u and, with lens distortion, to the observed point P_d on the image plane.]
2.2 Pinhole Model (Step 1: World to Camera)
[Figure: Step 1 transforms the world point P_w from the world frame {W} to the camera frame {C} through the transformation matrix ^C_W K.]
2.2 Pinhole Model (Step 2: Projection)
[Figure: Step 2 projects the camera-frame point P_w = (X_w, Y_w, Z_w) onto the image plane at distance f, giving the ideal projection P_u = (X_u, Y_u) in the retinal frame {R}.]
2.2 Pinhole Model (Step 3: Lens Distortion)
[Figure: Step 3 displaces the ideal projection P_u to the distorted point P_d on the image plane.]
2.2 Pinhole Model (Step 3: Lens Distortion) – Radial Distortion
[Figure: the ideal projection P_u is displaced along the radial direction by dr (the radial distortion) to the observed position P_d; panels a and b show the radial distortion effect (a: negative, b: positive).]
2.2 Pinhole Model (Step 3: Lens Distortion) – Radial and Tangential Distortion
[Figure: the observed position P_d differs from the ideal projection P_u by a radial component dr (largest along the axis of maximum radial distortion) and a tangential component dt (along the axis of minimum tangential distortion); example images with and without distortion.]
2.2 Pinhole Model (Step 4: Camera to Image)
[Figure: Step 4 converts the distorted point P_d from the retinal frame {R} to pixel coordinates in the image frame {I}, using the principal point (u0, v0).]
2.2 Pinhole Model
[Figure: the complete pinhole model chains the four steps — Step 1, world to camera (^C_W K); Step 2, projection P_w → P_u; Step 3, lens distortion P_u → P_d; Step 4, camera to image coordinates.]
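The four steps above can be sketched in code. This is a minimal illustration, not the lecture's own implementation: all parameter values are hypothetical, and the first-order undistortion used in Step 3 is an approximation (the distortion model has no closed-form inverse).

```python
# Sketch of the four-step pinhole model. Variable names follow the slides
# (R, T, f, k1, ku, kv, u0, v0); the numeric values below are made up.
import numpy as np

def project(Pw, R, T, f, k1, ku, kv, u0, v0):
    """Project a 3D world point to pixel coordinates through Steps 1-4."""
    # Step 1: world -> camera coordinates
    Pc = R @ Pw + T
    # Step 2: perspective projection onto the retinal plane
    Xu, Yu = f * Pc[0] / Pc[2], f * Pc[1] / Pc[2]
    # Step 3: first-order radial distortion (Pu -> Pd). The model
    # Xu = Xd*(1 + k1*r^2) has no closed-form inverse; dividing by
    # (1 + k1*r^2) with r^2 from the ideal point is an approximation
    # that is adequate for small k1.
    r2 = Xu**2 + Yu**2
    Xd, Yd = Xu / (1 + k1 * r2), Yu / (1 + k1 * r2)
    # Step 4: camera -> image (pixel) coordinates
    return ku * Xd + u0, kv * Yd + v0

R = np.eye(3)                      # camera aligned with the world frame
T = np.array([0.0, 0.0, 5.0])      # world origin 5 units in front
u, v = project(np.array([1.0, 0.5, 0.0]), R, T,
               f=0.008, k1=0.0, ku=80000.0, kv=80000.0, u0=320.0, v0=240.0)
print(u, v)   # with k1 = 0 this is the ideal (undistorted) projection
```

With k1 = 0 the point (1, 0.5, 0) lands at pixel (448, 304) for these sample parameters.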
2.2 Calibration Methods (I)
• Method of Hall
– Linear method
– Transformation matrix
• Method of Faugeras-Toscani
– Linear method
– Obtains the camera parameters
• Method of Faugeras-Toscani with distortion
– Iterative method
– Radial distortion
• Method of Tsai
– Iterative method
– Radial distortion
– Focal distance estimation
• Method of Weng
– Iterative method
– Radial and tangential distortion
• … and many more
2.3 The Method of Hall
A linear method that estimates a single transformation matrix.
2.3 The Method of Hall – Modelling
[Figure: Hall's model relates a world point P_w in {W} directly to its image point P_u in {I}; no intermediate camera frame or explicit camera parameters are used.]
2.3 The Method of Hall – Modelling
Assume light is captured on the image plane by a linear projection:

[ s·^IX_u ]   [ A11 A12 A13 A14 ]   [ ^WX_w ]
[ s·^IY_u ] = [ A21 A22 A23 A24 ] · [ ^WY_w ]
[ s       ]   [ A31 A32 A33 A34 ]   [ ^WZ_w ]
                                    [ 1     ]

The matrix is defined up to a scale factor → multiple solutions.
One component is fixed to unity (A34 = 1) → unique solution:

[ s·^IX_u ]   [ A11 A12 A13 A14 ]   [ ^WX_w ]
[ s·^IY_u ] = [ A21 A22 A23 A24 ] · [ ^WY_w ]
[ s       ]   [ A31 A32 A33  1  ]   [ ^WZ_w ]
                                    [ 1     ]
2.3 The Method of Hall – Calibration
Dividing the first two rows by the third removes the scale factor s:

^IX_u = (A11·^WX_w + A12·^WY_w + A13·^WZ_w + A14) / (A31·^WX_w + A32·^WY_w + A33·^WZ_w + 1)
^IY_u = (A21·^WX_w + A22·^WY_w + A23·^WZ_w + A24) / (A31·^WX_w + A32·^WY_w + A33·^WZ_w + 1)

Rearranging gives two equations that are linear in the eleven unknowns:

A11·^WX_w − A31·^WX_w·^IX_u + A12·^WY_w − A32·^WY_w·^IX_u + A13·^WZ_w − A33·^WZ_w·^IX_u + A14 = ^IX_u
A21·^WX_w − A31·^WX_w·^IY_u + A22·^WY_w − A32·^WY_w·^IY_u + A23·^WZ_w − A33·^WZ_w·^IY_u + A24 = ^IY_u
2.3 The Method of Hall – Calibration
Stacking the equations for all n points gives Q·A = B, where each point i contributes two rows:

Q(2i−1) = ( ^WX_wi  ^WY_wi  ^WZ_wi  1  0  0  0  0  −^IX_ui·^WX_wi  −^IX_ui·^WY_wi  −^IX_ui·^WZ_wi )
Q(2i)   = ( 0  0  0  0  ^WX_wi  ^WY_wi  ^WZ_wi  1  −^IY_ui·^WX_wi  −^IY_ui·^WY_wi  −^IY_ui·^WZ_wi )

B(2i−1) = ^IX_ui,  B(2i) = ^IY_ui

A = ( A11 A12 A13 A14 A21 A22 A23 A24 A31 A32 A33 )^t

The pseudoinverse leads to a unique solution:

A = (Q^t·Q)^−1 · Q^t · B

There are 11 unknowns and each 2D point gives two equations, so at least 6 points are needed. More points lead to a more accurate solution.
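Hall's calibration reduces to a linear least-squares problem, which can be sketched directly. In this illustration the correspondences are synthesised from a known matrix so the recovery can be checked; using numpy's least-squares solver is a numerically safer equivalent of forming (Q^t·Q)^−1·Q^t·B explicitly.

```python
# Sketch of Hall's calibration: build Q and B from 3D-2D correspondences
# and solve Q A = B in the least-squares sense. A_true is a made-up matrix.
import numpy as np

def hall_calibrate(Pw, uv):
    """Pw: (n, 3) world points; uv: (n, 2) image points; n >= 6."""
    rows, B = [], []
    for (X, Y, Z), (xu, yu) in zip(Pw, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -xu*X, -xu*Y, -xu*Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -yu*X, -yu*Y, -yu*Z])
        B += [xu, yu]
    A, *_ = np.linalg.lstsq(np.array(rows), np.array(B), rcond=None)
    return np.append(A, 1.0).reshape(3, 4)   # A34 fixed to 1

# Synthetic check: project points with a known matrix and recover it.
A_true = np.array([[800.0,   2.0, 300.0, 10.0],
                   [  1.0, 820.0, 250.0, 20.0],
                   [ 0.01,  0.02,  0.05,  1.0]])
rng = np.random.default_rng(0)
Pw = rng.uniform(-1, 1, (10, 3))
h = (A_true @ np.c_[Pw, np.ones(10)].T).T
uv = h[:, :2] / h[:, 2:3]
A_est = hall_calibrate(Pw, uv)
print(np.abs(A_est - A_true).max())   # recovery error of the known matrix
```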
2.3 The Method of Hall – Calibration
[Figure: image of the calibrating pattern. The known 3D points of the pattern, expressed in {W}, and their observed 2D projections in {I} fill the rows of Q and B, and A = (Q^t·Q)^−1·Q^t·B is computed. The reconstruction area is the region of the scene covered by the pattern.]
2.4 The Method of Faugeras-Toscani
A linear method that, unlike Hall's, recovers the individual camera parameters.
2.4 The Method of Faugeras-Toscani
[Figure: the pinhole model with its four steps (world to camera through ^C_W K, projection, lens distortion, camera to image), as in Section 2.2.]
2.4 The Method of Faugeras-Toscani
• Extrinsic parameters: model the position and orientation of the camera with respect to a world co-ordinate system.
• Intrinsic parameters: model the behaviour of the internal geometry and the optical characteristics of the camera.
[Figure: the camera co-ordinate system (O_c; X_c, Y_c, Z_c), the retinal plane with its retinal co-ordinate system (mm) and ideal projection P_u, the image plane with its image co-ordinate system (pixels) and principal point (u0, v0), and the world co-ordinate system {W}.]
2.4 Extrinsic Parameters
[Figure: the camera frame {C} and the world frame {W} related by the transformation ^C_W K.]

^CT = ( t_x  t_y  t_z )^t

^C_W R = Rot(X, α) · Rot(Y, β) · Rot(Z, γ) = [ r11 r12 r13 ; r21 r22 r23 ; r31 r32 r33 ]

( ^CX_w  ^CY_w  ^CZ_w )^t = ^C_W R · ( ^WX_w  ^WY_w  ^WZ_w )^t + ^CT

or, in homogeneous coordinates, ^CP_w = ^C_W K · ^WP_w with

^C_W K = [ ^C_W R (3×3)   ^CT (3×1) ; 0 (1×3)   1 ]
2.4 The Intrinsic Parameters: Ideal Projection
[Figure: the camera-frame point ^CP_w projects through O_c onto the plane at distance f, giving ^CP_u.]

^CX_u = f · ^CX_w / ^CZ_w
^CY_u = f · ^CY_w / ^CZ_w
2.4 The Intrinsic Parameters: Pixel Conversion
[Figure: conversion from metric retinal-plane coordinates to pixels on the image plane.]

^RX_d = k_u · ^CX_u
^RY_d = k_v · ^CY_u
2.4 The Intrinsic Parameters: Principal Point
[Figure: the computer image co-ordinate system (U, V) with the principal point at (u0, v0).]

^IX_d = ^RX_d + u0
^IY_d = ^RY_d + v0
2.4 The Method of Faugeras-Toscani
(X_w, Y_w, Z_w): 3D object point with respect to the world co-ordinate system
↓ Affine transformation. Modelled parameters: R, T
(X_c, Y_c, Z_c): 3D object point with respect to the camera co-ordinate system
↓ Perspective transformation. Modelled parameter: f
(X_u, Y_u): ideal projection on the retinal plane
↓ Pixel adjustment. Modelled parameters: k_u, k_v
(X_p, Y_p): real projection on the image plane
↓ Adaptation to the computer image buffer. Modelled parameters: u0, v0
(X_i, Y_i): real projection on the image plane, in pixels
2.4 The Method of Faugeras-Toscani – Modelling
Chaining projection, pixel conversion and principal point:

^IX_u = k_u·f·^CX_w/^CZ_w + u0
^IY_u = k_v·f·^CY_w/^CZ_w + v0

In matrix form, with α_u = f·k_u and α_v = f·k_v:

[ s·^IX_u ]   [ α_u  0   u0  0 ]   [ ^CX_w ]
[ s·^IY_u ] = [ 0   α_v  v0  0 ] · [ ^CY_w ]
[ s       ]   [ 0    0   1   0 ]   [ ^CZ_w ]
                                   [ 1     ]
2.4 The Method of Faugeras-Toscani – Modelling
Composing the intrinsic matrix with the extrinsic transformation:

[ s·^IX_u ]   [ α_u  0   u0  0 ]   [ r11 r12 r13 t_x ]   [ ^WX_w ]
[ s·^IY_u ] = [ 0   α_v  v0  0 ] · [ r21 r22 r23 t_y ] · [ ^WY_w ]
[ s       ]   [ 0    0   1   0 ]   [ r31 r32 r33 t_z ]   [ ^WZ_w ]
                                   [ 0   0   0   1   ]   [ 1     ]

so the overall matrix (intrinsics × extrinsics), with r_1, r_2, r_3 the rows of R, is

A = [ α_u·r_1 + u0·r_3   α_u·t_x + u0·t_z ]
    [ α_v·r_2 + v0·r_3   α_v·t_y + v0·t_z ]
    [ r_3                t_z              ]
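The factorisation above can be checked numerically: the 3×4 matrix obtained by multiplying the intrinsic matrix by the extrinsic matrix equals the row-wise expression in terms of α_u, α_v, u0, v0, the rows of R and t. All numeric values in this sketch are made up.

```python
# Verify A = (intrinsics) x (extrinsics) against the row-wise expression
# (alpha_u*r1 + u0*r3, ...). Parameter values are hypothetical.
import numpy as np

alpha_u, alpha_v, u0, v0 = 900.0, 880.0, 320.0, 240.0
c, s = np.cos(0.3), np.sin(0.3)                    # a rotation about Z
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t = np.array([0.1, -0.2, 2.5])

K_int = np.array([[alpha_u, 0, u0, 0],
                  [0, alpha_v, v0, 0],
                  [0, 0, 1, 0]])
K_ext = np.vstack([np.c_[R, t], [0, 0, 0, 1]])
A = K_int @ K_ext                                  # 3x4 product

r1, r2, r3 = R                                     # rows of R
tx, ty, tz = t
A_rows = np.vstack([np.r_[alpha_u*r1 + u0*r3, alpha_u*tx + u0*tz],
                    np.r_[alpha_v*r2 + v0*r3, alpha_v*ty + v0*tz],
                    np.r_[r3, tz]])
print(np.allclose(A, A_rows))
```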
2.5 The Method of Faugeras-Toscani – Calibration
The calibration starts from the model above, A = (intrinsics) × (extrinsics), whose rows are built from α_u, α_v, u0, v0, r_1, r_2, r_3 and t.
2.5 The Method of Faugeras-Toscani – Calibration
Writing A with rows A_i = (A_i1, A_i2, A_i3) and fourth column (A14, A24, A34)^t, the projection of P_w gives

A_1·P_w + A14 − ^IX_u·(A_3·P_w + A34) = 0
A_2·P_w + A24 − ^IY_u·(A_3·P_w + A34) = 0

Dividing by A34:

^IX_u = (A_1/A34)·P_w + A14/A34 − (A_3/A34)·P_w·^IX_u
^IY_u = (A_2/A34)·P_w + A24/A34 − (A_3/A34)·P_w·^IY_u

Defining T_1 = A_1/A34, T_2 = A_2/A34, T_3 = A_3/A34, C_1 = A14/A34 and C_2 = A24/A34:

^IX_u = T_1^t·P_w + C_1 − T_3^t·P_w·^IX_u
^IY_u = T_2^t·P_w + C_2 − T_3^t·P_w·^IY_u

Comparing with A = (intrinsics × extrinsics) and noting A34 = t_z:

T_1 = (α_u·r_1 + u0·r_3) / t_z    C_1 = (α_u·t_x + u0·t_z) / t_z
T_2 = (α_v·r_2 + v0·r_3) / t_z    C_2 = (α_v·t_y + v0·t_z) / t_z
T_3 = r_3 / t_z
2.5 The Method of Faugeras-Toscani – Calibration
The equations for all points are stacked into a linear system Q·X = B in the 11 unknowns X = ( T_1^t  T_2^t  T_3^t  C_1  C_2 )^t, where each point i contributes two rows:

Q(2i−1) = ( P_wi^t   0 (1×3)   −^IX_ui·P_wi^t   1   0 )
Q(2i)   = ( 0 (1×3)   P_wi^t   −^IY_ui·P_wi^t   0   1 )

B(2i−1) = ^IX_ui,  B(2i) = ^IY_ui

Solved with the pseudoinverse:

X = (Q^t·Q)^−1 · Q^t · B

11 unknowns → a minimum of 6 points is needed.
2.5 The Method of Faugeras-Toscani – t_z
Since R = ( r_1 ; r_2 ; r_3 ) is orthonormal, ‖r_3‖ = 1, and from T_3 = r_3/t_z:

t_z = 1 / ‖T_3‖
2.5 The Method of Faugeras-Toscani – Intrinsics
The orthonormality of the rotation rows (r_i^t·r_j = 1 if i = j and 0 if i ≠ j, which follows from v_1^t·v_2 = ‖v_1‖·‖v_2‖·cos θ and ‖v_1 × v_2‖ = ‖v_1‖·‖v_2‖·sin θ) yields the intrinsic parameters:

u0 = t_z^2 · T_1^t·T_3
v0 = t_z^2 · T_2^t·T_3
α_u = ( t_z^2 · T_1^t·T_1 − u0^2 )^(1/2)
α_v = ( t_z^2 · T_2^t·T_2 − v0^2 )^(1/2)
2.5 The Method of Faugeras-Toscani – Extrinsics
From the same relations, the extrinsic parameters follow:

r_1 = t_z·(T_1 − u0·T_3) / α_u
r_2 = t_z·(T_2 − v0·T_3) / α_v
r_3 = t_z·T_3

t_x = t_z·(C_1 − u0) / α_u
t_y = t_z·(C_2 − v0) / α_v
t_z = 1 / ‖T_3‖
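The recovery of intrinsic and extrinsic parameters from the solved vector (T_1, T_2, T_3, C_1, C_2) can be sketched as below. The input vectors are synthesised here from known (made-up) parameters so that the extraction formulas can be checked round-trip.

```python
# Faugeras-Toscani parameter extraction from X = (T1, T2, T3, C1, C2).
# The test data is built from known, hypothetical parameters.
import numpy as np

def ft_extract(T1, T2, T3, C1, C2):
    tz = 1.0 / np.linalg.norm(T3)          # ||r3|| = 1  =>  tz = 1/||T3||
    u0 = tz**2 * T1 @ T3                   # intrinsics
    v0 = tz**2 * T2 @ T3
    au = np.sqrt(tz**2 * T1 @ T1 - u0**2)
    av = np.sqrt(tz**2 * T2 @ T2 - v0**2)
    r1 = tz * (T1 - u0 * T3) / au          # extrinsics
    r2 = tz * (T2 - v0 * T3) / av
    r3 = tz * T3
    tx = tz * (C1 - u0) / au
    ty = tz * (C2 - v0) / av
    return au, av, u0, v0, np.vstack([r1, r2, r3]), np.array([tx, ty, tz])

# Round-trip check: build T1..C2 from known parameters, then recover them.
au, av, u0, v0 = 900.0, 880.0, 320.0, 240.0
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation about X
tx, ty, tz = 0.3, -0.1, 4.0
T1 = (au * R[0] + u0 * R[2]) / tz
T2 = (av * R[1] + v0 * R[2]) / tz
T3 = R[2] / tz
C1 = (au * tx + u0 * tz) / tz
C2 = (av * ty + v0 * tz) / tz
au2, av2, u02, v02, R2, t2 = ft_extract(T1, T2, T3, C1, C2)
print(np.allclose(R2, R), round(au2, 3), round(u02, 3))
```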
2.6 The Method of Faugeras-Toscani with distortion
An iterative method that adds radial distortion to the linear Faugeras-Toscani model.
2.6 The Method of Faugeras-Toscani with distortion
[Figure: the pinhole model extended with lens distortion — Step 1 (world to camera, ^C_W K), Step 2 (projection to P_u), Step 3 (distortion to P_d), and Steps 4 and 5 (conversion to image co-ordinates).]
2.6 Lens Distortion
[Figure: in the retinal plane (X_r, Y_r), the observed position P_d differs from the ideal projection P_u by a radial component dr (radial distortion) and a tangential component dt (tangential distortion).]
2.6 Lens Distortion
[Figure: radial distortion effect (panels a and b) and tangential distortion effect, with the axes of maximum and minimum tangential distortion.]
Radial distortion is the most important and usually the only one considered in calibration.
2.6 Lens Distortion
Model of Faugeras-Toscani with distortion (first order):

X_u = X_d + D_x,  Y_u = Y_d + D_y
D_x = X_d·k1·r^2,  D_y = Y_d·k1·r^2,  r^2 = X_d^2 + Y_d^2

With a second radial term:

D_x = ^CX_d·(k1·r^2 + k2·r^4)
D_y = ^CY_d·(k1·r^2 + k2·r^4)
r^2 = ^CX_d^2 + ^CY_d^2

k1 is the most important component and usually sufficient in most applications.
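The first-order radial model is simple enough to sketch directly: the distorted point (X_d, Y_d) is mapped back to the ideal projection by adding D_x = X_d·k1·r^2 and D_y = Y_d·k1·r^2. The sample point and k1 value below are made up.

```python
# First-order radial distortion correction: (Xd, Yd) -> (Xu, Yu),
# with Xu = Xd*(1 + k1*r^2) and r^2 = Xd^2 + Yd^2.
def undistort_first_order(Xd, Yd, k1):
    r2 = Xd**2 + Yd**2
    return Xd * (1 + k1 * r2), Yd * (1 + k1 * r2)

Xu, Yu = undistort_first_order(0.3, 0.4, k1=0.1)
print(Xu, Yu)   # r^2 = 0.25, so each coordinate grows by a factor 1.025
```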
2.6 The Method of Faugeras-Toscani with distortion
[Figure: camera frame (O_c; X_c, Y_c, Z_c), the retinal plane with the ideal projection P_u and distorted point P_d, and the image plane with principal point (u0, v0).]

X_u = f·X_c/Z_c,  Y_u = f·Y_c/Z_c
X_u = X_d + D_x,  Y_u = Y_d + D_y,  D_x = X_d·k1·r^2,  D_y = Y_d·k1·r^2,  r^2 = X_d^2 + Y_d^2
X_p = k_u·X_d,  Y_p = k_v·Y_d
X_i = X_p + u0,  Y_i = Y_p + v0
2.6 The Method of Faugeras-Toscani with distortion
(X_w, Y_w, Z_w): 3D object point with respect to the world co-ordinate system
↓ Affine transformation. Modelled parameters: R, T
(X_c, Y_c, Z_c): 3D object point with respect to the camera co-ordinate system
↓ Perspective transformation. Modelled parameter: f
(X_u, Y_u): ideal projection on the retinal plane
↓ Radial lens distortion. Modelled parameter: k1
(X_d, Y_d): real projection on the retinal plane
↓ Pixel adjustment. Modelled parameters: k_u, k_v
(X_p, Y_p): real projection on the image plane
↓ Adaptation to the computer image buffer. Modelled parameters: u0, v0
(X_i, Y_i): real projection on the image plane, in pixels
2.6 The Method of Faugeras-Toscani with distortion
Substituting the distortion into the projection equations:

f·^CX_w/^CZ_w = ^CX_d·(1 + k1·r^2)
f·^CY_w/^CZ_w = ^CY_d·(1 + k1·r^2)

with

^CX_d = (^IX_d − u0)/k_u,  ^CY_d = (^IY_d − v0)/k_v,  r^2 = ^CX_d^2 + ^CY_d^2

and ( ^CX_w  ^CY_w  ^CZ_w  1 )^t = ^C_W K · ( ^WX_w  ^WY_w  ^WZ_w  1 )^t.

The model is NON-LINEAR, so the parameters are obtained by iterative minimisation:
• Newton-Raphson
• Levenberg-Marquardt
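The iterative minimisation can be illustrated with a toy Gauss-Newton loop (Levenberg-Marquardt adds a damping term to the same update). This sketch fits only two hypothetical unknowns, f and k1, to synthetic observations; a real calibration would include all intrinsic and extrinsic parameters in the same residual.

```python
# Toy Gauss-Newton fit of (f, k1) to synthetic observations of a
# perspective projection with first-order radial distortion.
import numpy as np

rng = np.random.default_rng(1)
Pc = rng.uniform([-1, -1, 4], [1, 1, 8], (30, 3))   # points in camera frame

def model(params, Pc):
    f, k1 = params
    xu = f * Pc[:, 0] / Pc[:, 2]
    yu = f * Pc[:, 1] / Pc[:, 2]
    r2 = xu**2 + yu**2
    return np.c_[xu * (1 + k1 * r2), yu * (1 + k1 * r2)].ravel()

true = np.array([2.0, 0.05])
obs = model(true, Pc)                    # noise-free synthetic data

params = np.array([1.0, 0.0])            # rough initial guess
for _ in range(20):
    res = model(params, Pc) - obs
    # Numerical Jacobian by forward differences
    J = np.empty((res.size, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = 1e-6
        J[:, j] = (model(params + d, Pc) - res - obs) / 1e-6
    # Gauss-Newton step: solve J * delta = res in the least-squares sense
    params = params - np.linalg.lstsq(J, res, rcond=None)[0]

print(np.round(params, 4))
```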
63. 63
Lecture 2: Camera Calibration
2.7 Experimental Comparison - Methods

The five methods (Hall, Faugeras, Faugeras distorted, Tsai, Weng) model the same chain of transformations, ${}^W P_w \rightarrow {}^C P_w \rightarrow {}^C P_u \rightarrow {}^C P_d \rightarrow {}^I P_d$, in four steps:

Step 1, world to camera (all methods except Hall): transformation with α, β, γ, tx, ty and tz,

$${}^C P_w = {}^C R_W \, {}^W P_w + {}^C T_W$$

Step 2, projection with f:

$$X^C_u = f \frac{X^C_w}{Z^C_w}, \qquad Y^C_u = f \frac{Y^C_w}{Z^C_w}$$

Step 3, lens distortion:
• Faugeras: undistorted, ${}^C P_u = {}^C P_d$.
• Faugeras with distortion and Tsai: radial distortion with k1,
  $$X^C_u = X^C_d + k_1 X^C_d \left( (X^C_d)^2 + (Y^C_d)^2 \right), \qquad Y^C_u = Y^C_d + k_1 Y^C_d \left( (X^C_d)^2 + (Y^C_d)^2 \right)$$
• Weng: multiple distortion (radial, decentring and thin prism) with k1, g1, g2, g3, g4.

Step 4, camera to image:
• Faugeras (both variants) and Weng: transformation with u0, v0, ku and kv,
  $${}^I X_d = -k_u X^C_d + u_0, \qquad {}^I Y_d = -k_v Y^C_d + v_0$$
• Tsai: transformation with u0, v0 and sx,
  $${}^I X_d = s_x d_x'^{-1} X^C_d + u_0, \qquad {}^I Y_d = d_y^{-1} Y^C_d + v_0$$

Hall instead models the whole chain with a single transformation matrix:

$${}^I P_d = A \, {}^W P_w$$
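Hall's single-matrix formulation becomes linear once the scale ambiguity of A is fixed, e.g. by setting its (3,4) entry to 1: each 3D-2D correspondence then contributes two linear equations in the remaining 11 entries, solved with a pseudoinverse. A sketch on synthetic, noise-free data (the camera parameters below are invented for the example):

```python
import numpy as np

# Synthetic camera: A = K [R | t], normalised so its (3,4) entry is 1
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
A_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [2.0]]])
A_true /= A_true[2, 3]

rng = np.random.default_rng(1)
Pw = np.hstack([rng.uniform(0.0, 1.0, (8, 3)), np.ones((8, 1))])  # homogeneous 3D points
proj = Pw @ A_true.T
uv = proj[:, :2] / proj[:, 2:3]                                   # projected image points

# Two linear equations per correspondence in the 11 unknown entries of A
rows, rhs = [], []
for (X, Y, Z, _), (u, v) in zip(Pw, uv):
    rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
    rhs.append(u)
    rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
    rhs.append(v)
a = np.linalg.pinv(np.array(rows)) @ np.array(rhs)                # pseudoinverse solve
A_est = np.append(a, 1.0).reshape(3, 4)
print(np.allclose(A_est, A_true))  # -> True
```

With noisy observations the pseudoinverse returns the least-squares estimate of A rather than an exact solution.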
2.7 Experimental Comparison - Accuracy Evaluation

[Figure: the 3D discrepancy d_3D is the distance between the point P_w and the optical ray through its observed image point.]
• 3D Measurement
– Distance with respect to the optical ray
– Normalized Stereo Calibration Error
• 2D Measurement
– Accuracy of distorted image coordinates
– Accuracy of undistorted image coordinates
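The first 3D measure, the distance of a point to the optical ray, can be sketched as follows. The ray is assumed to be given by the camera centre C and a direction vector d; the names and values are illustrative.

```python
import numpy as np

def ray_distance(P, C, d):
    """Distance from 3D point P to the ray through C with direction d."""
    d = d / np.linalg.norm(d)                  # unit ray direction
    v = P - C
    return np.linalg.norm(v - (v @ d) * d)     # remove the along-ray component

C = np.zeros(3)                                # camera centre at the origin
d = np.array([0.0, 0.0, 1.0])                  # optical ray along the Z axis
print(ray_distance(np.array([3.0, 4.0, 7.0]), C, d))  # -> 5.0
```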
$$\mathrm{NSCE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left[ \left( \hat{X}^C_{w_i} - X^C_{w_i} \right)^2 + \left( \hat{Y}^C_{w_i} - Y^C_{w_i} \right)^2 \right]^{1/2}}{\left[ \left( \hat{Z}^C_{w_i} \right)^2 \left( f_u^{-2} + f_v^{-2} \right) / 12 \right]^{1/2}}$$
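A direct transcription of the NSCE: the 2D error of each point in the camera frame is normalised by the quantisation-noise scale $\hat{Z}^2 (f_u^{-2} + f_v^{-2})/12$. Here fu and fv are the focal distances in pixels along each axis; the sample point below is illustrative.

```python
import numpy as np

def nsce(P_hat, P, fu, fv):
    """Normalized Stereo Calibration Error over camera-frame points.

    P_hat, P: (n, 3) arrays of estimated and reference points in camera
    coordinates; fu, fv: focal distances in pixels (assumed notation).
    """
    err = np.linalg.norm(P_hat[:, :2] - P[:, :2], axis=1)        # 2D error per point
    scale = np.sqrt(P_hat[:, 2]**2 * (fu**-2 + fv**-2) / 12.0)   # pixel-noise scale
    return float(np.mean(err / scale))

# A 1 mm lateral error at Z = 1 m with fu = fv = 1000 pixels
val = nsce(np.array([[0.001, 0.0, 1.0]]), np.array([[0.0, 0.0, 1.0]]), 1000, 1000)
print(val)  # -> sqrt(6) ~ 2.449
```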
[Figure: the world {W}, camera {C}, image {I} and retinal {R} coordinate systems, with origins O_W, O_C, O_I, O_R; the image plane lies at focal distance f from O_C and the principal point is (u0, v0).]

[Figure: an observed point against its linear (distortion-free) projection; d_u is the discrepancy between the undistorted points P_u and P̂_u, and d_d between the distorted points P_d and P̂_d.]
2.7 Experimental Comparison: Synthetic Images (I)
                             2D distorted image (pix.)          2D undistorted image (pix.)
                             Mean    Std.dev  Max     Min       Mean    Std.dev  Max     Min
1 Hall                       0.2676  0.1979   1.2701  0.0213    0.2676  0.1979   1.2701  0.0213
2 Faugeras                   0.2689  0.1997   1.2377  0.0075    0.2689  0.1997   1.2377  0.0075
3 Faugeras with distortion   0.0840  0.0458   0.2603  0.0081    0.0834  0.0454   0.2561  0.0080
4 Tsai                       0.0838  0.0457   0.2426  0.0035    0.0832  0.0453   0.2386  0.0035
5 Weng                       0.0845  0.0455   0.2608  0.0019    0.0843  0.0443   0.2584  0.0129
[Bar charts: mean and standard deviation (pix.) of the 2D distorted and 2D undistorted errors for methods 1-5.]
2.7 Experimental Comparison: Synthetic Images (II)
                              3D position (mm)                  NSCE
                              Mean    Std.dev  Max     Min
1 Hall                        0.1615  0.1028   0.5634  0.0113   n/a
2 Faugeras                    0.1811  0.1357   0.8707  0.0147   0.6555
3 Faugeras NR with distortion 0.0566  0.0307   0.1694  0.0055   0.2042
4 Tsai optimized              0.0565  0.0306   0.1578  0.0087   0.2037
5 Weng                        0.0570  0.0305   0.1696  0.0088   0.2064
NSCE: Normalized Stereo Calibration Error.
[Bar charts: NSCE and mean/standard deviation of the 3D position error (mm) for methods 1-5.]
2.7 Experimental Comparison: Synthetic Images (III)

Computing Time (Pentium III at 1 GHz)
                             160 points   1800 points
• Hall                       1 ms         70 ms
• Faugeras                   1 ms         70 ms
• Faugeras with distortion   10 ms        380 ms
• Tsai                       10 ms        530 ms
• Weng                       51 ms        4216 ms
2.7 Experimental Comparison: Real Images (I)
[Figure: image of the calibrating pattern with the world {W}, camera {C}, image {I} and retinal {R} coordinate systems and the reconstruction area used for evaluation.]
                           3D position (mm)                  NSCE
                           Mean    Std.dev  Max     Min
Hall                       0.5219  0.2595   1.1370  0.0143   n/a
Faugeras                   0.7782  0.4253   2.0210  0.0187   4.0649
Faugeras with distortion   0.4967  0.3367   1.5642  0.0094   2.5489
Tsai                       0.4815  0.3023   1.4014  0.0093   2.4836
Weng                       0.4740  0.2904   1.2669  0.0087   2.4556
2.7 Experimental Comparison: Real Images (II)
                           3D position (mm)                  NSCE
                           Mean    Std.dev  Max     Min
Hall                       1.5698  0.9842   8.9249  0.0247   n/a
Faugeras                   1.6187  0.9856   8.8812  0.0302   2.0175
Faugeras with distortion   0.9930  0.5660   3.2386  0.0154   0.9909
Tsai                       0.9927  0.5655   3.2311  0.0153   0.9908
Weng                       0.9896  0.5724   3.3526  0.0149   0.9869
[Figures: stereo camera mounted on a mobile robot; image of the calibration pattern.]
2.7 Experimental Comparison - Conclusions
• Five of the most widely used camera calibration methods were implemented
  – The notation was unified
  – The methods were compared in terms of both modelling and calibration
• Non-linear methods are more accurate than linear methods
• Modelling radial distortion alone is sufficient even when high accuracy is required
• The different accuracy measures give consistent results when the methods are compared relative to one another
Additional bibliography:
J. Salvi, X. Armangué and J. Batlle, "A Comparative Review of Camera Calibrating Methods with Accuracy Evaluation," Pattern Recognition, Vol. 35, Issue 7, pp. 1617-1635, July 2002.