This document discusses various optical and technical aspects of camera lenses, including:
1) It defines focal length as the distance from the lens to the point where parallel light passing through it converges, known as the focal point. Shorter focal lengths provide wide-angle views, while longer focal lengths provide magnified, close-up views.
2) F-number and f-stop are defined: the f-number expresses the maximum amount of light a lens can admit (its optical speed), while f-stop settings express the light admitted at smaller iris openings. Smaller f-numbers and f-stop values admit more light.
3) The relationship between aperture, focal length, and depth of field is explained: smaller apertures provide deeper depth of field, while larger apertures produce shallower depth of field.
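To make the f-number/f-stop relationship concrete, here is a minimal sketch, assuming illustrative focal-length and aperture values that are not taken from the document: the f-number is focal length divided by aperture diameter, and the light admitted falls off as 1/N², so each full stop halves the light reaching the sensor.

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """Optical speed: f-number = focal length / effective aperture diameter."""
    return focal_length_mm / aperture_diameter_mm


def relative_light(n_stop: float, n_max: float) -> float:
    """Light admitted at f-stop n_stop relative to the wide-open f-number n_max.

    Transmitted light is proportional to 1/N^2, so each full stop
    (N multiplied by about 1.4) halves the light reaching the sensor.
    """
    return (n_max / n_stop) ** 2


# A 25 mm lens with a 12.5 mm maximum aperture diameter is an f/2 lens.
n_max = f_number(25.0, 12.5)  # 2.0

for n in (2.0, 2.8, 4.0, 5.6, 8.0):
    print(f"f/{n:<3}  admits {relative_light(n, n_max):.2f}x the wide-open light")
# f/2 -> 1.00x, f/2.8 -> ~0.51x, f/4 -> 0.25x, f/5.6 -> ~0.13x, f/8 -> ~0.06x
```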
This document provides information about 4K lens specifications and performance. It discusses key optical parameters for 4K lenses such as sharpness, chromatic aberration, depth of field, and resolution. The document explains how 4K lenses are designed to minimize chromatic aberration and enhance modulation transfer function to improve image quality. It also describes the benefits of 4K lenses for wide color gamut and high dynamic range imaging applications. These benefits include reduced color fringing, flare, and black level for increased dynamic range. Examples are provided comparing image quality between 4K and HD lenses. The document concludes with information about Canon's cinema lens lineup and technologies.
This document provides definitions and explanations of various optical terminology related to light passing through a lens, including:
- Dispersion, refraction, diffraction, reflection, focal point, focal length, principal point, image circle, aperture ratio, numerical aperture, optical axis, and more. It discusses concepts such as entrance pupil, exit pupil, angular aperture, and how they relate to lens performance. The document also covers topics like vignetting, the cosine law, and flare. Overall, it serves as a comprehensive reference for understanding optical and photographic lens terminology.
This document discusses key elements that contribute to high quality image production, including spatial resolution, frame rate, dynamic range, color gamut, bit depth, and compression artifacts. It examines these elements in the context of 4K and 8K broadcast cameras and their advantages over HD. Factors like wider viewing angles, increased perceived motion, and benefits for nature documentaries are cited as motivations for 8K. Technical details covered include lens flange back distance, flare, shading, chromatic aberration, and testing procedures. Overall quality is represented as a function of these various image quality factors.
The document provides a history of the development of television technology from the late 1800s through the 1920s. Some key developments include:
- In 1873, the light sensitivity of selenium was discovered, a property that formed the basis for early television experiments.
- In 1884, the Nipkow disk laid down many basic concepts like scanning and synchronization.
- In 1923, Vladimir Zworykin filed a patent for the iconoscope, an electronic camera tube, and later developed the kinescope picture display tube.
- In 1924, John Logie Baird transmitted the first television image.
- In 1925, Vladimir Zworykin demonstrated 60-line television using a curved-line image structure typical of mechanical television at the time.
This document discusses emerging technologies and optimization techniques for media workflows and content management. It covers topics like virtual, augmented and mixed reality, 3D spatial audio, high dynamic range video, media over IP, object-based media, video compression techniques, and streaming. Specific technologies and standards discussed include 360-degree video, MPEG-H for 3D audio, HDR10, Dolby Vision, HLG, and SMPTE ST 2110 for media over IP. Applications and use cases are also presented for mixed reality, spatial computing, and next-generation audiovisual experiences.
This document provides information about quality control testing of audiovisual content. It discusses various quality control tests that can be performed, including tests for analogue frame synchronization errors, black bars, constant colour frames, flashing video, macroblocking, video deinterlacing artifacts, and digital tape dropouts. Examples are provided for how each test can be configured and what results might look like. The goal of the quality control tests is to help broadcasters optimize their automated quality control systems and cope with increasing amounts of digital content.
This document provides an overview of color video signals and color perception by the human visual system. It discusses:
1. The sensitivity of human cone cells to different wavelengths of light and how this determines color perception.
2. How color video signals like YUV, RGB, and composite video encode color and brightness information.
3. Standards for analog color television transmission including NTSC, PAL, and SECAM which differ in aspects like lines, frame rate, and color encoding.
This document discusses IP interfaces for video production and summarizes the benefits of IP-based systems compared to SDI. It provides examples of IP-enabled video switchers and control systems from Sony and Grass Valley. The rest of the document discusses standards organizations and specifications that enable IP interoperability such as SMPTE ST 2110, AES67, and AIMS. It also summarizes IP routing and processing platforms like Grass Valley's GV Node and control systems like Lawo's VSM.
The document discusses video compression history and standards, including codecs such as H.261, H.262/MPEG-2, H.263, H.264/AVC, H.265/HEVC, and the roles of organizations like MPEG, VCEG, and ITU-T in developing video coding standards to ensure interoperability. It also covers video encoding and decoding principles, as well as common container formats and their applications in areas like broadcasting, streaming, and storage.
The document provides an overview of key elements and trends in high-quality image production, including spatial resolution, temporal resolution, dynamic range, color gamut, quantization, and related technologies. It discusses technologies like HD, UHD, HDR and WCG and how they improve the total quality of experience. Images and charts are included to illustrate comparisons of technologies and results from industry surveys on trends and commercial projects.
This document provides information about various camera settings and technologies for capturing clear images, including:
1. Clear Scan helps eliminate banding caused when a camera's frame rate does not match a CRT display's refresh rate.
2. Slow Shutter extends the camera's exposure time to produce blur effects or allow more light in low-light scenes.
3. Super Sampling uses a 1080p camera to produce sharper 720p images by maintaining higher frequency response.
4. Detail correction adds a spike-shaped detail signal to make edges appear sharper without degrading resolution. Settings like detail level and H/V ratio control the amount and balance of detail correction (a rough sketch of this idea appears after this list).
5. Other topics covered
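The detail-correction idea in item 4 can be pictured as a simple one-dimensional unsharp-mask operation: subtract a blurred copy of the signal to obtain a spike-shaped detail signal around edges, scale it by a detail-level gain, and add it back. This is only a sketch of the general principle under an assumed parameter name (detail_level), not the actual processing of any particular camera.

```python
import numpy as np


def detail_correct(line: np.ndarray, detail_level: float = 0.5) -> np.ndarray:
    """Crude 1-D detail (aperture) correction: boost edges with a scaled detail signal."""
    kernel = np.ones(3) / 3.0                    # small box blur
    blurred = np.convolve(line, kernel, mode="same")
    detail = line - blurred                      # spike-shaped signal around edges
    return line + detail_level * detail          # slight overshoot makes edges look sharper


# A soft edge rising from 0.2 to 0.8:
edge = np.array([0.2, 0.2, 0.2, 0.4, 0.6, 0.8, 0.8, 0.8])
print(np.round(detail_correct(edge, detail_level=1.0), 2))
```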
Video Compression, Part 4 Section 1, Video Quality Assessment Dr. Mohieddin Moradi
This document provides an overview of video compression artifacts that can occur when video is compressed for streaming or storage. It discusses both spatial artifacts, such as blurring, blocking, ringing, and color bleeding, as well as temporal artifacts like flickering and mosquito noise. For each artifact, it describes the visual appearance and potential causes from factors like quantization during compression, motion compensation between frames, and chroma subsampling. The document aims to help understand how compression can degrade perceptual video quality and different types of artifacts that may be evaluated both objectively and subjectively.
Video Compression, Part 3-Section 2, Some Standard Video Codecs, Dr. Mohieddin Moradi
This document discusses MPEG-2 Transport Streams and Packetized Elementary Streams. It describes how MPEG-2 Transport Streams use fixed length 188 byte packets containing compressed video, audio or data from one or more programs identified by Packet IDs. These packets can contain Packetized Elementary Stream packets which contain compressed elementary streams with timestamps for synchronization. The document also discusses how Transport Streams allow for synchronous multiplexing of multiple programs from independent time bases into a single stream.
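To make the 188-byte packet structure concrete, here is a minimal sketch that parses the 4-byte transport stream packet header and pulls out the PID and continuity counter; the example packet bytes are made up for illustration.

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 transport stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:        # every packet starts with sync byte 0x47
        raise ValueError("not a valid transport stream packet")
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),  # set where a PES packet begins
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit Packet ID
        "scrambling":         (packet[3] & 0xC0) >> 6,
        "adaptation_field":   (packet[3] & 0x30) >> 4,
        "continuity_counter": packet[3] & 0x0F,
    }


# Made-up example: PUSI set, PID 0x0100, payload only, continuity counter 5.
pkt = bytes([0x47, 0x41, 0x00, 0x15]) + bytes(184)
print(parse_ts_header(pkt))
```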
Hue refers to the dominant wavelength of light, which determines the color as perceived by the observer. Saturation refers to the purity of the hue, or the amount of white light mixed with it. Luminance refers to the brightness or intensity of the color.
The document discusses radiometry and photometry, which deal with measuring light across the electromagnetic spectrum and in the visible spectrum respectively. It defines terms like luminous flux, luminous intensity, illuminance, and luminance.
It also covers topics like additive and subtractive color mixing, primary and secondary colors, color spaces, and video signal formats like RGB, YUV, and YCbCr, which are used to represent color images and video, as well as the sensitivity of the human cone cells.
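As a small worked example of how brightness is derived from RGB in these signal formats, the sketch below applies the standard luma weightings (BT.601 for SD, BT.709 for HD) to gamma-encoded R'G'B' values; it illustrates only the weighting step, not a full YCbCr conversion with offsets and quantization.

```python
def luma(r: float, g: float, b: float, standard: str = "bt709") -> float:
    """Weighted sum of gamma-encoded R'G'B' components (each 0..1) giving luma Y'."""
    kr, kg, kb = {
        "bt601": (0.299, 0.587, 0.114),     # SD television
        "bt709": (0.2126, 0.7152, 0.0722),  # HD television
    }[standard]
    return kr * r + kg * g + kb * b


# Pure green contributes far more to perceived brightness than pure blue,
# mirroring the differing sensitivities of the human cone cells:
print(luma(0.0, 1.0, 0.0))  # ~0.715
print(luma(0.0, 0.0, 1.0))  # ~0.072
```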
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES, Dr. Mohieddin Moradi
This document discusses elements of high-quality image production for television broadcasting such as spatial resolution, frame rate, dynamic range, color gamut, quantization, and total quality of experience. It outlines these elements and provides examples of their implementation in HD, UHD1, and UHD2 formats. Motivations for 8K and 4K broadcasting are discussed related to improved image quality, new applications, and bandwidth efficiency trends. Implementation examples of 4K and 8K broadcasting systems from Japan, Korea, Sweden, and the UK are also summarized.
The document discusses various networking protocols and standards related to professional media over IP, including:
- SMPTE ST 2110 standards that define carriage of uncompressed video, audio, and data over IP networks as separate elementary streams.
- AES67, which enables high-performance audio-over-IP streaming interoperability between different IP audio networking products.
- Other relevant standards and protocols like SMPTE ST 2022, AIMS recommendations, Video Services Forum TR-03/04, RTP, SDP, PTP, and IGMP.
- Considerations for designing IP infrastructures for media networks, including capacity, connectivity, timing, control, and redundancy.
This document outlines elements of high-quality image production, including spatial and temporal resolution, dynamic range, color gamut, bit depth, and coding. It discusses color gamut conversion, gamma correction, HDR and SDR mastering, tone mapping, and backwards compatibility. The document also covers HDR metadata standards and different distribution scenarios for HDR content.
1. The document discusses color temperature and how different light sources emit different color spectrums that video cameras must account for through color balancing. Color temperature is used as a reference to adjust the camera's color balance to match the light source.
2. After color temperature conversion, performed optically or electronically, white balance is used to precisely match the light source's color temperature by adjusting the camera's video amplifiers (a simple numerical sketch of this balancing idea follows after this list).
3. Other topics covered include polarizers, neutral density filters, and technical aspects of video such as gamma correction and clipping levels.
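One very simple way to illustrate the white-balance step described in this list is the gray-world method: assume the scene averages to neutral gray and compute per-channel gains that equalize the channel means. This is a sketch of the general idea, not the balancing algorithm of any camera discussed in the document.

```python
import numpy as np


def gray_world_gains(image: np.ndarray) -> tuple:
    """Per-channel (R, G, B) gains that pull the red and blue means to the green mean."""
    r_mean, g_mean, b_mean = image.reshape(-1, 3).mean(axis=0)
    return g_mean / r_mean, 1.0, g_mean / b_mean


# Made-up warm (tungsten-like) frame: too much red, too little blue.
rng = np.random.default_rng(0)
frame = rng.uniform(0.2, 0.8, size=(4, 4, 3)) * np.array([1.3, 1.0, 0.7])

r_gain, g_gain, b_gain = gray_world_gains(frame)         # red gain < 1, blue gain > 1
balanced = np.clip(frame * np.array([r_gain, g_gain, b_gain]), 0.0, 1.0)
print(balanced.reshape(-1, 3).mean(axis=0))              # channel means now roughly equal
```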
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2, Dr. Mohieddin Moradi
This document discusses high definition video standards including SMPTE 274M, 292M, 372M and dual link SDI formats. It provides details on:
- The HD-SDI standards that define 1080p and 720p video formats and carriage through 1.5Gb/s serial digital interface.
- The timing reference signal codes used in HD-SDI to identify lines and perform error checking.
- How a 12-bit color depth can be achieved within the dual link standard by mapping the additional bits across both links.
- The benefits of 3Gb/s SDI and dual link formats for working at higher resolutions and color spaces prior to finishing.
The document outlines topics related to video over IP infrastructure and standards. It discusses IP technology trends, networking basics, video and audio over IP standards, SMPTE ST 2110, NMOS, infrastructure considerations, timing issues, clean switching methods, compression, broadcast controller/orchestration, and case studies for migrating broadcast facilities to IP. The document provides an overview and outline for presenting on designing, integrating, and managing IP-based broadcast facilities and production workflows.
This document provides an overview of analog and digital triax systems used for video transmission. It discusses key aspects of triax cables such as their ability to transmit multiple signals simultaneously through bundled cables. Both analog and digital triax systems are described, with analog transmitting component signals on different carrier frequencies and digital transmitting signals in digital format. The document also covers triax cable specifications, common connectors types used for broadcasting applications from different standards, fiber optic cable types including single mode and multi-mode, and common fiber connectors. Transmission distances and electrical properties of triax cables are discussed.
The document discusses high dynamic range (HDR) imaging technologies including:
- Standards for HDR encoding like SMPTE ST 2084 (PQ) and ARIB/ITU-R BT.2100 (HLG)
- Opto-electronic transfer functions (OETFs) and electro-optical transfer functions (EOTFs) used in HDR systems
- The human visual system's sensitivity to luminance levels and how this relates to quantization in HDR images
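As a concrete illustration of the PQ curve (SMPTE ST 2084) referred to above, here is a minimal sketch of the PQ EOTF, which maps a normalized non-linear signal value to absolute display luminance in cd/m² using the published constants; it is a reference-style sketch rather than production code.

```python
# PQ (SMPTE ST 2084) EOTF constants.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875


def pq_eotf(e: float) -> float:
    """Display luminance (cd/m^2) for a normalized PQ signal value e in [0, 1]."""
    p = e ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)


print(round(pq_eotf(0.0), 3))  # 0.0 cd/m^2
print(round(pq_eotf(0.5), 1))  # ~92 cd/m^2 -- half the code range sits far below half of 10000
print(round(pq_eotf(1.0), 1))  # 10000.0 cd/m^2 peak
```

The strongly non-linear spacing is what lets 10- or 12-bit quantization track the eye's luminance sensitivity across such a wide range.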
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 1, Dr. Mohieddin Moradi
The document discusses standards for serial digital interface (SDI) video signals. It provides information on:
- Early SDI standards, including SMPTE 259M for SD-SDI at 270 Mb/s, and how they standardized a serial digital video connection (a short calculation of the nominal rates follows after this list).
- Video signal sampling structures and resolutions for SD, HD, and UHD formats.
- The development of higher data rate SDI standards up to 12G-SDI and 24G-SDI to support higher resolution video.
- Electrical parameters and cable distance limitations for different SDI data rates.
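The nominal SDI data rates flagged above follow directly from the sampling structure; the short calculation below uses the commonly quoted figures (13.5 MHz luma sampling with 4:2:2 chroma for SD, and a 2200 × 1125 total raster at 30 frames/s for 1080-line HD).

```python
# SD-SDI (SMPTE 259M): 4:2:2 sampling of the SD raster, 10 bits per sample.
y_rate = 13.5e6                    # luma samples per second
c_rate = 2 * 6.75e6                # Cb + Cr samples per second
sd_bits_per_s = (y_rate + c_rate) * 10
print(sd_bits_per_s / 1e6)         # 270.0 Mb/s

# HD-SDI (SMPTE 292M): 1080-line raster with 2200 x 1125 total samples per frame,
# 30 frames/s, luma and multiplexed chroma each sampled at the full word rate.
words_per_channel = 2200 * 1125 * 30        # 74.25 Mwords/s
hd_bits_per_s = 2 * words_per_channel * 10  # Y channel + CbCr channel, 10 bits each
print(hd_bits_per_s / 1e9)         # 1.485 Gb/s
```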
This document provides an overview of video standards and concepts related to standard definition television (SDTV) and high definition television (HDTV). It begins with definitions of key terms like interlacing, progressive scanning, and frame rates. It then covers standards for monochrome signals, including signal timings, synchronization pulses, and blanking intervals. Digital SDTV standards like line counts, field structures, and ancillary data space are also summarized. The document concludes with discussions of spatial resolution, optimal viewing distances, and different aspect ratios used in television.
This document provides an overview of high definition television (HDTV) standards and concepts such as color gamut, color bars test signals, colorimetry, chroma adjustment, and luminance adjustment. It discusses differences between standard definition (SDTV) and HDTV color bars, how wider color gamuts in HDTV allow for deeper colors, and how to use various elements of the color bars signal to properly adjust a display's color, brightness, contrast, and chroma. The document contains diagrams demonstrating color gamuts and examples of how objects appear within different gamuts.
This document provides an overview of color spaces and high dynamic range (HDR) technologies. It begins with definitions of color gamut and chromaticity coordinates. It then discusses several key color spaces including Rec.709, Rec.2020, DCI-P3, ACES, and S-Gamut3. It also covers HDR formats like PQ, HLG, and log encoding. The document aims to explain the essential aspects of different color spaces and HDR technologies used for digital cinema and television production.
This document provides an overview of high dynamic range (HDR) technology and workflows for HDR video production and mastering. It discusses HDR standards like SMPTE ST 2084 and ARIB STD-B67, camera log curves, luminance levels, and tools for setting up HDR monitoring including waveform monitors. Specific topics covered include HDR graticules, setting luminance levels for highlights and grey points, and using zebra patterns and zoom modes to evaluate highlight levels in HDR images.
This document discusses 3D television technology. It begins with a brief history of 3D content and then covers various depth cues and how binocular vision allows the brain to perceive 3D images. Key aspects of 3D technology discussed include parallax, stereopsis, and the need to direct different images to each eye to create the perception of depth. Challenges for developing 3D include reducing the need for glasses and creating natural depth cues without visual fatigue.
The document provides information about Rajanish Kumawat's practical training at Doordarshan Kendra in Jaipur. It discusses key details about Doordarshan, including that it is India's public service broadcaster, it was established in 1959, and currently operates 21 TV channels. It also provides specifics about DD Rajasthan, the state-owned channel broadcast from Doordarshan Kendra Rajasthan, including that it covers 79% of the state's population. The document then covers technical aspects of television cameras, lenses, apertures, and other camera functions.
Chapter 4
Lenses and Optics
CONTENTS
4.1 Overview
4.2 Lens Functions and Properties
4.2.1 Focal Length and Field of View
4.2.1.1 Field-of-View Calculations
4.2.1.1.1 Tables for Scene Sizes vs. FL for 1/4-, 1/3-, and 1/2-Inch Sensors
4.2.1.1.2 Tables for Angular FOV vs. FL for 1/4-, 1/3-, and 1/2-Inch Sensor Sizes
4.2.1.2 Lens and Sensor Formats
4.2.2 Magnification
4.2.2.1 Lens–Camera Sensor Magnification
4.2.2.2 Monitor Magnification
4.2.2.3 Combined Camera and Monitor Magnification
4.2.3 Calculating the Scene Size
4.2.3.1 Converting One Format to Another
4.2.4 Calculating Angular FOV
4.2.5 Lens Finder Kit
4.2.6 Optical Speed: f-number
4.2.7 Depth of Field
4.2.8 Manual and Automatic Iris
4.2.8.1 Manual Iris
4.2.8.2 Automatic-Iris Operation
4.2.9 Auto-Focus Lens
4.2.10 Stabilized Lens
4.3 Fixed Focal Length Lens
4.3.1 Wide-Angle Viewing
4.3.2 Narrow-Angle Telephoto Viewing
4.4 Vari-Focal Lens
4.5 Zoom Lens
4.5.1 Zooming
4.5.2 Lens Operation
4.5.3 Optical Speed
4.5.4 Configurations
4.5.5 Manual or Motorized
4.5.6 Adding a Pan/Tilt Mechanism
4.5.7 Preset Zoom and Focus
4.5.8 Electrical Connections
4.5.9 Initial Lens Focusing
4.5.10 Zoom Pinhole Lens
4.5.11 Zoom Lens–Camera Module
4.5.12 Zoom Lens Checklist
4.6 Pinhole Lens
4.6.1 Generic Pinhole Types
4.6.2 Sprinkler Head Pinhole
4.6.3 Mini-Pinhole
4.7 Special Lenses
4.7.1 Panoramic Lens—360°
4.7.2 Fiber-Optic and Bore Scope Optics
4.7.3 Bi-Focal, Tri-Focal Image Splitting Optics
4.7.4 Right-Angle Lens
4.7.5 Relay Lens
4.8 Comments, Checklist and Questions
4.9 Summary
4.1 OVERVIEW
The function of the camera lens is to collect the reflected light from a scene and focus it onto a camera sensor. Choosing the proper lens is very important, since its choice determines the amount of light received by the camera sensor, the FOV on the monitor, and the quality of the image displayed. Understanding the characteristics of the lenses available and following a step-by-step design procedure simplifies the task and ensures an optimum design.
A CCTV lens functions like the human eye. Both collect light reflected from a scene or emitted by a luminous light source and focus the object scene onto some receptor—the retina or the camera sensor. The human eye has a fixed-focal-length (FFL) lens and variable iris diaphragm, which compares to an FFL, automatic-iris video lens. The eye has an iris that opens and closes just like an automatic-iris camera lens and automatically adapts to changes in light level. The iris—whether in the eye or in the camera—optimizes the light level reaching the receptor, thereby providing the best possible image. The iris in the eye is a muscle-controlled membrane; the automatic iris in a video lens is a motorized device.
Of the many different kinds of lenses used in video security applications the most common is the FFL lens, which is available in wide-angle (90°), medium-angle (40°), and narrow-angle (5°) FOVs. To cover a wid ...
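As a preview of the field-of-view calculations outlined in Sections 4.2.1 and 4.2.3–4.2.4, here is a minimal sketch using the usual thin-lens geometry; the 1/4-, 1/3-, and 1/2-inch sensor widths are the standard nominal values (about 3.2 mm, 4.8 mm, and 6.4 mm), and the focal length and distance are illustrative rather than taken from the book's tables.

```python
import math

# Nominal horizontal sensor widths (mm) for common CCTV formats.
SENSOR_WIDTH_MM = {"1/4-inch": 3.2, "1/3-inch": 4.8, "1/2-inch": 6.4}


def scene_width_m(sensor_width_mm: float, focal_length_mm: float, distance_m: float) -> float:
    """Horizontal scene width (m) covered at a given distance (thin-lens approximation)."""
    return sensor_width_mm * distance_m / focal_length_mm


def angular_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angular field of view in degrees."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))


# Illustrative case: an 8 mm lens viewing a scene 10 m away.
for fmt, width in SENSOR_WIDTH_MM.items():
    print(f"{fmt}: {scene_width_m(width, 8.0, 10.0):4.1f} m wide, "
          f"{angular_fov_deg(width, 8.0):4.1f} deg horizontal FOV")
```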
Deviprasad Goenka Management college of Media Studies
http://www.dgmcms.org.in/
Subject: Photography
Lesson 1: Types of Lens, Megapixel, Image Quality.
Faculty Name: Partha Pratim Samanta
This document discusses various lens operations and controls including focal length, f-stops, lens speed, iris, macro lenses and focus, zoom lenses, wide angle lenses, depth of field, angle of view, zoom and zoom ratio, filters, and image stabilization. Focal length is the distance between the lens and the point where it brings light to focus. F-stops correspond to aperture settings, and lens speed refers to the maximum aperture. Macro lenses can produce life-size images, and zoom lenses allow smooth changes between long and close-up shots. Wide angle lenses exaggerate depth and size, while depth of field is the range of distances that appear in acceptable focus. Angle of view depends on focal length and sensor size. Zoom and zoom ratio determine the visible scene area. Filters include
The document discusses key aspects of cinematography including lenses, focal lengths, f-stops, depth of field, and neutral density filters. It explains that lenses allow light to enter the camera and impact exposure. F-stops regulate the aperture opening to control the amount of light entering. Different focal lengths like wide angle, normal, and telephoto lenses impact the perspective and how subjects appear in the frame. Neutral density filters are also used to decrease the amount of light entering the camera. Understanding these visual language tools is essential for visual storytelling.
Focusing sharply attracts the eye to the point of interest. Depth of field extends about one-third in front of and two-thirds behind the plane of critical focus. There are three ways to control depth of field: aperture, focal length, and distance to subject. A smaller aperture, shorter focal length, or greater distance to the subject will increase depth of field. Zone focusing and focusing on the hyperfocal distance allow controlling depth of field without changing settings. Perspective shows depth through scale differences between foreground and background. Close-up photography requires a macro lens, extension tubes or bellows to increase the magnification ratio. Shallow depth of field requires careful focusing in close-ups.
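Here is a hedged numerical sketch of these depth-of-field relationships, using the commonly quoted hyperfocal-distance formula and the usual near/far approximations; the 0.03 mm circle of confusion is an assumed full-frame value, and the focal length, aperture, and subject distance are illustrative.

```python
def hyperfocal_mm(f_mm: float, n: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres."""
    return f_mm * f_mm / (n * coc_mm) + f_mm


def dof_limits_mm(f_mm: float, n: float, subject_mm: float, coc_mm: float = 0.03):
    """Approximate near/far limits of acceptable sharpness (valid when subject >> f)."""
    h = hyperfocal_mm(f_mm, n, coc_mm)
    near = h * subject_mm / (h + subject_mm)
    far = float("inf") if subject_mm >= h else h * subject_mm / (h - subject_mm)
    return near, far


# 50 mm lens at f/8 focused on a subject 3 m away:
near, far = dof_limits_mm(50, 8, 3000)
print(round(near / 1000, 2), "m to", round(far / 1000, 2), "m")   # ~2.33 m to ~4.21 m
# Note the roughly one-third-in-front / two-thirds-behind split around the 3 m focus plane.
# Stopping down to f/16 markedly extends the zone of acceptable sharpness:
print(dof_limits_mm(50, 16, 3000))
```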
The document provides an overview of key photography concepts including exposure, aperture, shutter speed, ISO, depth of field, focal length, and lens types. Exposure is determined by the amount of light reaching the image sensor, and can be controlled through aperture size and shutter speed settings. Aperture refers to the diameter of the lens opening while shutter speed is the duration that the camera's shutter is open. These settings, along with ISO, must be balanced to achieve proper exposure. Depth of field relates to the distance over which objects appear acceptably sharp, and lenses can be either prime lenses with a fixed focal length or zoom lenses with a variable focal length.
High speed cameras can capture events at over 1,000 frames per second, allowing finer details to be seen when played back at normal speed. They focus light onto an image sensor, converting the image into an electronic format. The document discusses the high speed camera available in the author's lab and its uses, including combustion research, microfluidics, and sports broadcasts. It also covers aperture, depth of field, CCD and CMOS image sensors used in high speed cameras.
1. SLR cameras allow for interchangeable lenses due to their construction, which includes a mirror that reflects the image from the lens to the viewfinder.
2. Different focal lengths result in varying angles of view, depth of field, and perspective. Wider angles provide a broader view while longer focal lengths compress the scene.
3. Canon developed EF-S lenses specifically for cameras with smaller, APS-C sized sensors, which provide a 1.6x focal length conversion and utilize a smaller image circle and short back focus for a more compact design.
1. SLR cameras allow for interchangeable lenses due to their construction which includes a mirror that reflects the image from the lens to the viewfinder. When the shutter button is pressed, the mirror flips up and the shutter opens to expose the image sensor or film.
2. Different focal length lenses capture images with varying angles of view, depth of field, and perspective. Wider focal lengths capture a larger field of view while longer focal lengths compress the scene.
3. Canon produces EF-S lenses specifically for APS-C sensor SLRs. They have a smaller image circle and shorter back focus to match the smaller sensor size. This allows for lighter, smaller lenses including wide-angle, standard
This document provides an introduction to basic photography concepts. It defines photography as the process of creating still pictures using light. A photograph is taken by recording an image on a light-sensitive material or digital sensor, and various camera parts like the aperture and shutter speed control the amount of light. Common terms like exposure, aperture, shutter speed, ISO, and focal length are explained. The document also covers different types of camera lenses, the difference between prime and zoom lenses, and the proper way to hold a camera.
This PowerPoint presentation is for Grade 10 students. I have included all the topics in this presentation. Here you can know about Light, Types of lenses, Some terms related to lens, Prism, Ray diagrams, Numerical problems related to this chapter, Laws of reflection, refraction, diseases related to eyes. I have briefly described as notes, some examples and illustrations, proper diagrams and so on.
It's a basic guide to photography by my friend Vivek Desai. The slides within provide better know-how for beginners and amateurs and will help you get to know a DSLR camera. If you are a photography enthusiast, this guide is the right place to start.
It will also help you better understand How to Use a DSLR before you spend bucks and own one.
You can connect with Vivek Desai @ https://www.facebook.com/VivekDesai88
Forensic photography requires specialized cameras and equipment. The ideal camera is a single-lens reflex camera due to its versatility and ability to accept interchangeable lenses and accessories. Forensic photographers must understand camera components like lenses and shutters, as well as concepts such as depth of field, focus, exposure, and filters in order to properly capture evidentiary photographs. Proper camera care, such as keeping the equipment clean and dry, is also important for forensic applications.
1) The document describes the basic operation of a digital single lens reflex (DSLR) camera. It explains how light enters the camera body through the lens and is reflected by a mirror to the viewfinder for composing shots.
2) It discusses the key variables that determine photographic exposure - aperture, shutter speed, and ISO sensitivity. Different combinations of these variables can produce the same exposure but result in different visual effects.
3) Manual control of aperture, shutter speed, and ISO allows photographers to manipulate these variables to achieve desired pictorial outcomes in terms of depth of field, motion blur, noise, and tone.
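A small arithmetic sketch of the exposure relationships in this list: exposure value EV = log2(N²/t), and aperture/shutter pairs that keep EV constant admit essentially the same light while trading depth of field against motion blur. The specific pairs shown are illustrative.

```python
import math


def exposure_value(n: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t); a one-stop change in either setting shifts EV by 1."""
    return math.log2(n * n / shutter_s)


# These combinations admit essentially the same light (EV ~ 11.9),
# but differ in depth of field and motion blur:
for n, t in [(2.8, 1 / 500), (4.0, 1 / 250), (5.6, 1 / 125), (8.0, 1 / 60)]:
    print(f"f/{n}  {t:.4f} s  ->  EV {exposure_value(n, t):.1f}")
```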
This document discusses various properties of cameras and the image formation process. It covers topics like interior orientation, pinhole cameras, lenses, perspective projection, image digitization, focal length, depth of field, geometric distortions, chromatic aberration, and spatial resolution. Camera calibration is presented as a way to model lens distortions and derive intrinsic camera parameters. The relationship between aperture size and the resulting airy disk is demonstrated through examples.
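The pinhole/perspective-projection model just mentioned takes only a few lines: a 3-D point in the camera frame (X, Y, Z) projects to pixel coordinates through the intrinsic parameters (focal lengths fx, fy in pixels and principal point cx, cy). The intrinsic values below are illustrative, not calibration results from the document.

```python
def project_pinhole(point_xyz, fx: float, fy: float, cx: float, cy: float):
    """Perspective projection of a camera-frame 3-D point onto the image plane (pixels)."""
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return fx * x / z + cx, fy * y / z + cy


# Illustrative intrinsics for a 640 x 480 image: f = 800 px, principal point at the centre.
print(project_pinhole((0.1, -0.05, 2.0), 800, 800, 320, 240))   # (360.0, 220.0)
```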
Basic principles of photography. David Capel. 346B IST.
Latin “Camera Obscura” = “Dark Room”
Light passing through a small hole produces an inverted image on the opposite wall
Dr. Mohieddin Moradi provides an outline on high dynamic range (HDR) technology. The 3-page document covers various topics related to HDR including different HDR technologies, tone mapping, color representation, and HDR standards. It discusses concepts such as scene-referred vs display-referred conversions, and direct mapping vs tone mapping when converting between HDR and SDR formats. The document also examines potential side effects when mixing different conversion techniques in a production workflow.
The document discusses high dynamic range (HDR) video technology including:
- Different HDR formats such as SMPTE ST 2084 (PQ) and ARIB STD-B67/ITU-R BT.2100 (HLG)
- Code value ranges for 10-bit and 12-bit RGB and color difference signals in narrow and full ranges (a small mapping sketch follows after this list)
- Recommendations for using narrow versus full signal ranges for PQ and HLG
- Transcoding concepts when converting between PQ and HLG formats
- Considerations for including standard dynamic range (SDR) content in HDR programs
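To make the narrow-versus-full-range distinction concrete, here is a simplified sketch that maps a normalized luma/RGB value to 10-bit code values under both conventions (narrow range places reference black at code 64 and nominal peak at 940, full range uses 0–1023); it ignores the chroma offsets and the reserved codes of a complete implementation.

```python
def to_10bit(value: float, narrow: bool = True) -> int:
    """Quantize a normalized value (0.0 = black, 1.0 = nominal peak) to a 10-bit code."""
    if narrow:
        code = 64 + value * (940 - 64)   # narrow ("video") range: 64..940
    else:
        code = value * 1023              # full range: 0..1023
    return max(0, min(1023, round(code)))


for v in (0.0, 0.5, 1.0):
    print(v, "narrow:", to_10bit(v, True), "full:", to_10bit(v, False))
# 0.0 -> 64 / 0,  0.5 -> 502 / 512,  1.0 -> 940 / 1023
```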
HDR, wide color gamut, and higher frame rates are new technologies that can improve image quality for ultra high definition televisions. They provide benefits like more vivid colors, deeper blacks, better shadow detail, and a more immersive viewing experience. However, supporting these new features requires significantly more data bandwidth compared to legacy standards. Future video standards will need to efficiently support higher resolutions, wider color, high dynamic range, and high frame rates to deliver next-generation picture quality while still allowing content to be economically distributed.
This document outlines an educational course on audio and video over IP. The course covers IP networking fundamentals and standards including TCP/IP, OSI models, and SMPTE ST 2110. It also examines IP infrastructure, routing, timing issues, switching, compression techniques and case studies for broadcast facilities transitioning to IP. The document provides an in-depth outline of topics covered in each session, from IP basics to designing and integrating both hybrid and fully IP-based outside broadcast trucks. The goal is to educate on best practices for implementing audio and video over IP workflows and infrastructure.
The document provides an overview of key concepts in high definition television (HDTV) including:
- Standards and definitions for SDTV and HDTV
- Interlacing and de-interlacing techniques
- Video scaling, edge enhancement, and frame rate conversion
- Signal quality issues in HDTV production and broadcast
- Cables and connectors used for HDTV production
The document contains diagrams and explanations of topics like color bars, genlocking, sampling, interlacing, field order, and 3D video sampling structures. It compares progressive and interlaced scanning and discusses concepts such as the Nyquist frequency, aliasing, and field dominance.
This document provides an overview of sound and hearing, including:
1. It describes how the human ear works, from collecting sound waves through the outer ear and transmitting vibrations through the ossicles to the cochlea where hair cells detect different frequencies.
2. It discusses properties of sound like loudness, pitch, and timbre, and how they are perceived. Loudness depends on amplitude, pitch on frequency, and timbre on waveform complexity.
3. It explains characteristics of sound waves like wavelength, frequency, speed of sound, and the decibel scale used to measure sound intensity and pressure levels.
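A short numerical sketch of two of these relationships: sound pressure level in dB relative to the 20 µPa reference, and wavelength from frequency using a nominal speed of sound of about 343 m/s in air at 20 °C. The example pressures and frequencies are illustrative.

```python
import math

P_REF = 20e-6            # reference sound pressure, 20 micropascals
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C


def spl_db(pressure_pa: float) -> float:
    """Sound pressure level: 20 * log10(p / p_ref), in dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)


def wavelength_m(frequency_hz: float) -> float:
    """Wavelength = speed of sound / frequency."""
    return SPEED_OF_SOUND / frequency_hz


print(round(spl_db(1.0), 1))         # ~94 dB SPL for 1 Pa, a common calibration level
print(round(wavelength_m(20.0), 2))  # ~17.15 m for a 20 Hz tone
print(round(wavelength_m(20000.0) * 1000, 1))  # ~17 mm for a 20 kHz tone
```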
Video Compression, Part 4 Section 2, Video Quality Assessment Dr. Mohieddin Moradi
This document provides information on conducting subjective video quality assessments. It discusses different subjective assessment methods like double stimulus impairment scale (DSIS) and single stimulus continuous quality evaluation (SSCQE). It describes test parameters like number of observers, viewing conditions, grading scales and how to present the results. Guidelines are provided for tasks like screening observers, conducting test sessions, introducing impairments and collecting opinion scores to evaluate video coding standards and compression artifacts.
Video Compression, Part 3-Section 1, Some Standard Video Codecs, Dr. Mohieddin Moradi
- ISO/IEC JTC 1/SC 29 and ITU-T are the main organizations that develop video coding standards through working groups like MPEG and VCEG.
- Early standards include H.261 for video telephony and conferencing, and MPEG-1 for DVD quality video.
- Later standards like H.264/AVC, HEVC, and future VVC provide increasingly higher compression through use of block transforms, motion compensation, and entropy coding in a hybrid video codec framework.
- Key organizations periodically collaborate through joint teams like JVT and JCT-VC to develop standards like AVC and HEVC.
This document discusses video compression techniques. It begins by outlining the history of video compression and describing the basic components of a generic video encoder/decoder system. It then covers specific compression methods including differential pulse code modulation, transform coding using discrete cosine transform, quantization, and entropy coding. The document also discusses techniques for reducing both spatial and temporal redundancy in video, such as prediction coding. It provides examples of how quantization is used to control quality and compression ratio in both lossy and lossless compression systems.
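To illustrate the transform-plus-quantization stage described above, the sketch below builds an 8×8 DCT-II basis, transforms a block, and applies uniform quantization with a single step size. Real codecs use perceptually weighted quantization matrices and entropy-code the result, so this is only a minimal sketch of the principle.

```python
import numpy as np


def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (row k, column x)."""
    c = np.zeros((n, n))
    for k in range(n):
        alpha = np.sqrt(1 / n) if k == 0 else np.sqrt(2 / n)
        for x in range(n):
            c[k, x] = alpha * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    return c


C = dct_matrix(8)

# Made-up 8x8 block with a smooth horizontal gradient, typical of natural image content.
block = np.tile(np.arange(0, 160, 20, dtype=float), (8, 1))

coeffs = C @ block @ C.T             # 2-D DCT: energy concentrates in the top-left corner
step = 16.0
quantized = np.round(coeffs / step)  # coarse uniform quantization -> mostly zeros to entropy-code
restored = C.T @ (quantized * step) @ C

print(int(np.count_nonzero(quantized)), "non-zero coefficients out of 64")
print(round(float(np.abs(restored - block).max()), 2))  # error introduced by quantization
```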
3. − Vertical and Horizontal Fields of View
− F-Stop, F-Number, T-Number, Minimum Illumination and Sensitivity
− Color Temperature Adjustment and Color Conversion in Camera
− Camera Beam Splitter Structure and Related Issues
− Depth of Field, Depth of Focus & Permissible Circle of Confusion
− Broadcast Zoom Lens Technology
− 4K Lens Critical Performance Parameters
− Optical Accessories and Optical Filters
Outline
3
5. 5
Types of Visible Perception Possible
− As we move further from the fovea, vision becomes more limited
− Colour vision only possible in central visual field
(Left eye)
6. 6
Vertical and Horizontal Fields of View
[Figure: monocular vs. binocular fields of view — the visual limit of each eye (about 94° for the right eye shown), the normal viewing fields, the horizontal sight line, and the central region where word/pattern recognition is possible.]
7. 7
Horizontal field of view
• The central field of vision for most people covers an angle of
between 50° and 60° (objects are recognized).
• Within this angle, both eyes observe an object simultaneously.
• This creates a central field of greater magnitude than that
possible by each eye separately.
• This central field of vision is termed the 'binocular field' and within
this field
images are sharp
depth perception occurs
colour discrimination is possible
Vertical and Horizontal Fields of View
[Figure: central field of vision bounded by the visual limits of the left and right eyes.]
Vertical Field of View
• The typical line of sight is considered horizontal, or 0°.
• A person’s natural or normal line of sight is normally a cone of view about 10° below the horizontal
and, when sitting, approximately 15° below.
[Figure: vertical field of view — limits of color discrimination, visual limits of the eye, the normal line of sight, and the normal sight lines whilst standing and whilst seated.]
9. − The angle of view is the range of the scene that is captured by the camera and displayed on the picture monitor.
• The angle of view is measured as the angle between the center axis of the lens and the edges of the
image in the horizontal, vertical, and diagonal directions. Respectively, these are called the horizontal
angle of view, vertical angle of view, and diagonal angle of view.
Angle Of View
9
10. − Angle of view can be calculated from the following equation:
𝑤 = 2 tan⁻¹(𝑦 / 2𝑓)
𝑤: angle of view
𝑦: image size on the imager sensor (in the horizontal, vertical or diagonal direction)
𝑓: lens focal length
Angle Of View
10
[Figure: geometry of the angle of view 𝑤 (𝑤/2 on either side of the optical axis), the image size 𝑦 and the focal length 𝑓.]
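As a quick check of the equation above, here is a minimal Python sketch (not part of the original slides) that computes the horizontal, vertical and diagonal angles of view; the 9.6 × 5.4 mm active-area dimensions assumed for a 2/3-inch type imager and the sample focal lengths are illustrative values only.

```python
import math

def angle_of_view_deg(image_size_mm: float, focal_length_mm: float) -> float:
    """w = 2 * arctan(y / (2f)), returned in degrees."""
    return math.degrees(2 * math.atan(image_size_mm / (2 * focal_length_mm)))

# Assumed active-area dimensions of a 2/3-inch type imager (approx. 9.6 x 5.4 mm).
horizontal_mm, vertical_mm = 9.6, 5.4
diagonal_mm = math.hypot(horizontal_mm, vertical_mm)

for f in (9, 25, 100):  # example focal lengths in mm
    print(f"f = {f:3d} mm: "
          f"H = {angle_of_view_deg(horizontal_mm, f):5.1f}°, "
          f"V = {angle_of_view_deg(vertical_mm, f):5.1f}°, "
          f"D = {angle_of_view_deg(diagonal_mm, f):5.1f}°")
```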
11. Calculation of the Object Dimensions to Fill the Image Format
Calculating from W (angle of view) and L (object distance):
𝑌 = 2𝐿 tan(𝑊/2)
Calculating from L (object distance), f (focal length) and Y′ (image size):
𝑌 = 𝑌′ · 𝐿 / 𝑓   (image distance ≈ 𝑓)
𝑌: object dimension, 𝑌′: image size, 𝑊: angle of view, 𝐿: object distance
11
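A short sketch of the two calculations above; the 9.6 mm image width, 25 mm focal length, 10 m object distance and ~21.7° angle of view are assumed example values.

```python
import math

def object_dimension_from_focal_length(image_size_mm, object_distance_m, focal_length_mm):
    """Y = Y' * L / f  (Y' and f in mm, L in metres -> Y in metres)."""
    return image_size_mm * object_distance_m / focal_length_mm

def object_dimension_from_angle_of_view(angle_of_view_deg, object_distance_m):
    """Y = 2 * L * tan(W / 2)."""
    return 2 * object_distance_m * math.tan(math.radians(angle_of_view_deg) / 2)

# Assumed example: 9.6 mm wide imager, 25 mm lens, subject 10 m away.
print(object_dimension_from_focal_length(9.6, 10, 25))   # ~3.84 m of scene width fills the frame
print(object_dimension_from_angle_of_view(21.7, 10))      # same result via the ~21.7° horizontal angle of view
```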
12. I. Angle of view becomes narrow when a telephoto lens is used.
II. In contrast, it becomes wider with a wide-angle lens.
• Consequently, the wider the angle of view, the wider the area of the image captured.
III. A camera’s angle of view also varies depending on the size of the imager.
• This means that 2/3-inch type CCD cameras and 1/2-inch type CCD cameras offer different angles of
view for lenses with the same focal lengths.
𝑤 = 2 tan⁻¹(𝑦 / 2𝑓)
Angle Of View, Image Circle and Image Size
[Figure: image sizes for television and film (actual size).] 12
15. Aperture
− In general, the word aperture refers to an opening, a hole, or any other type of narrow opening.
• When used in relation to the mechanism of a lens, it stands for the size of the lens’s opening that
determines the amount of light directed to the camera’s imager.
• The diameter of the lens aperture can be controlled by the lens iris.
• Iris consists of a combination of several thin diaphragms.
15
[Figure: the pupil and iris of the human eye compared with the diaphragm and aperture of a lens.]
16. Iris
− The amount of light captured and directed to a camera’s imager is adjusted by a combination of
diaphragms integrated in the lens (This mechanism is called the lens iris).
• The Iris works just like the pupil of the human eye.
• By opening and closing these diaphragms, the diameter of the opening (also called aperture)
changes, thus controlling the amount of light that passes through it.
The amount of the iris opening is expressed by its F-stop.
16
18. Auto Iris
− Auto iris is a convenient function that detects the amount of light entering the lens and automatically
opens or closes the iris to maintain appropriate exposure.
• Auto iris is especially useful in situations where manual iris adjustment can be difficult, such as in ENG
applications.
• Auto iris lenses control the iris aperture by detecting and analyzing the amplitude of the video signal
produced in the camera.
• An iris control signal is generated according to the amplitude of this video signal, to either open or
close the iris for correct exposure.
18
19. Focal Length
– The focal length describes the distance between a lens and the point where light passing through it
converges on the optical axis. This point is where images captured by the lens are in focus and is called
the focal point.
19
[Figure: focal length and focal point for a single lens and for a compound lens, measured from the principal point.]
20. A lens with a short focal length:
– Captures a large area of the subject to provide a wide angle view.
– Amount of light entering the lens is that reflected from a large area of the
subject.
A lens with a long focal length:
– Captures only a small area of the subject to provide a magnified or close-up
view of the subject .
– Only the light reflected from a small area of the subject enters the lens,
resulting in a darker image.
The longer the focal length, the less light that enters the lens.
Focal Length
20
21. − It describes how bright a lens is, or, more simply,
The maximum amount of light a lens can direct to the camera’s image sensor.
F-number
F-number = f (Focal Length) / D (Maximum Iris Opening Diameter)
21
22. This amount of light is determined by two factors:
I. The widest iris opening that the lens allows or its maximum aperture
– A wider iris opening (aperture diameter) simply means more light passing through the Lens (bigger 𝑫).
II. The focal length of the lens
– The longer the focal length, the less light that enters the lens (smaller 𝒇).
F-number
F-number = f (Focal Length) / D (Maximum Iris Opening Diameter)
22
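A one-line illustration of the F-number equation; the 50 mm focal length and 17.9 mm maximum iris diameter are assumed example values.

```python
def f_number(focal_length_mm: float, max_iris_diameter_mm: float) -> float:
    """F-number = focal length / maximum iris opening diameter."""
    return focal_length_mm / max_iris_diameter_mm

# Assumed example: a 50 mm focal length with a 17.9 mm maximum iris opening gives roughly F2.8.
print(round(f_number(50, 17.9), 2))   # ~2.79
```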
23. – Interestingly, in this definition, brighter lenses are described with smaller F-numbers. This can be
understood by substituting a shorter focal length and larger maximum iris opening in the equation.
– A lens’s F-number is usually labeled on its front.
– Since zoom lenses offer a variable focal length, these are described with an F-number range across the
entire zoom range (e.g., F2.8 - F4.0).
– While F-number is strictly used to describe a lens’s brightness performance, a parameter often mixed up
with this is F-stop.
F-number
F-number = f (Focal Length) / D (Maximum Iris Opening Diameter)
23
24. F-stop
− F-number indicates the maximum amount of incident light with the lens iris fully opened.
− F-stop, by contrast, indicates:
The amount of incident light at smaller iris openings.
Notes:
• F-stops are calibrated from the lens’s widest iris opening to its smallest using the same equation as the
F-number, but with the diameter (D) being that of the given iris opening.
• The most important difference to note is that F-stops are a global reference for judging the amount of light
that should be allowed through the lens during a camera shoot.
F-stop = f (Focal Length) / D (Aperture Diameter)
A higher F-stop stops more light (less light transmission).
24
Diaphragm
Aperture
25. F-1.4, F-2, F-2.8, F-4, F-5.6, F-8, F-11, F-16, F-22
• Some lenses are faster than others; zoom lenses are slower than prime lenses.
• Smaller numbers let in more light: the lower the number, the “faster” the lens (recall the shutter).
• Bigger numbers let in less light: the higher the number, the “slower” the lens (recall the shutter).
• Each stop lets in half as much light as the one before it.
F-stop and Optical Speed
F-stop = f (Focal Length) / D (Aperture Diameter)
25
27. F-stop and Depth of Field
− It is also important to note that F-stop is a key factor that affects depth of field.
The smaller the F-stop, the shallower the depth of field, and vice versa.
27
[Figure: a wide aperture gives a shallow depth of field with more light reaching the image sensor; a small aperture gives a deep depth of field with less light reaching the image sensor.]
29. F-stop and Depth of Field
29
[Figure: aperture and depth of field — a dot of light from the subject imaged through the lens onto the sensor at F1.4, F5.6 and F22; a large aperture gives a narrow depth-of-field range around the focus distance, while a small aperture gives a deep depth-of-field range.]
30. − From the viewpoint of lens characteristics, shooting with the aperture set in the range of F4 to F8 is
generally recommended for a good quality picture.
− Set the FILTER control to bring the aperture setting into that range.
− However, this may not apply when a special composition is desired.
F-stop
F-stop = f (Focal Length) / D (Aperture Diameter)
30
31. − F-stops are a global reference for judging the amount of light that should be allowed through the lens
during a camera shoot.
− F-stop calibrations increase by a factor of √2, such as 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, and 22.
− As the value of the F-stop increments by one step (e.g., 5.6 to 8.0), the amount of light passing through the
lens decreases by one half.
− This relation is due to the fact that F-stop is a function of the iris diameter, while the incident light is a function
of the square of the diameter.
F-stop
F-stop = f (Focal Length) / D (Aperture Diameter)
31
32. Fractional Stops
To calculate the steps in a full-stop (1 EV, Exposure Value) series one could use
2^(0×0.5), 2^(1×0.5), 2^(2×0.5), 2^(3×0.5), 2^(4×0.5), etc. (AV = 0, 1, 2, 3, 4, … or AV = K)
The steps in a half-stop (1/2 EV) series would be
2^((0/2)×0.5), 2^((1/2)×0.5), 2^((2/2)×0.5), 2^((3/2)×0.5), 2^((4/2)×0.5), etc. (AV = 0, 0.5, 1, 1.5, 2, … or AV = K/2)
The steps in a third-stop (1/3 EV) series would be
2^((0/3)×0.5), 2^((1/3)×0.5), 2^((2/3)×0.5), 2^((3/3)×0.5), 2^((4/3)×0.5), etc. (AV = 0, 1/3, 2/3, 1, 4/3, … or AV = K/3)
The steps in a quarter-stop (1/4 EV) series would be
2^((0/4)×0.5), 2^((1/4)×0.5), 2^((2/4)×0.5), 2^((3/4)×0.5), 2^((4/4)×0.5), etc. (AV = 0, 0.25, 0.5, 0.75, 1, … or AV = K/4)

f-stop = 2^(AV×0.5) = (√2)^AV, where AV is the Aperture Value.
The one-stop unit is also known as the EV (Exposure Value) unit; at AV = 1, 2, 3 the light is reduced to 1/2, 1/4 and 1/8 respectively.

AV (quarter-stop steps):  0     0.25  0.5   0.75  1     1.25  1.5   1.75  2
F-stop (marked):          1.0   1.1   1.2   1.3   1.4   1.5   1.7   1.8   2
F-stop (calculated):      1.00  1.09  1.18  1.29  1.41  1.54  1.68  1.83  2.00
32
33. Fractional Stops (including one-third stops)
The same fractional-stop series as above, extended with the one-third-stop (1/3 EV) values:

AV:                       0     0.25  0.3   0.5   0.7   0.75  1     1.25  1.3   1.5   1.7   1.75  2
F-stop (marked):          1.0   1.1   1.1   1.2   1.2   1.3   1.4   1.5   1.6   1.7   1.8   1.8   2
F-stop (calculated):      1.00  1.09  1.10  1.18  1.27  1.29  1.41  1.54  1.56  1.68  1.80  1.83  2.00
(The one-third-stop AV values 0.3, 0.7, 1.3 and 1.7 are rounded from 1/3 ≈ 0.33, 2/3 ≈ 0.67, 4/3 ≈ 1.33 and 5/3 ≈ 1.67.)

f-stop = 2^(AV×0.5) = (√2)^AV, where AV is the Aperture Value.
33
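The fractional-stop tables above follow directly from f-stop = 2^(AV×0.5). This small sketch prints the full-, half-, third- and quarter-stop series for AV from 0 to 2; note that the slides round the third-stop AV values to 0.3, 0.7, …, so those entries differ slightly from the exact values printed here.

```python
def f_stop(av: float) -> float:
    """f-stop = 2 ** (AV * 0.5), i.e. (sqrt 2) ** AV."""
    return 2 ** (av * 0.5)

for label, step in (("full", 1), ("half", 1 / 2), ("third", 1 / 3), ("quarter", 1 / 4)):
    avs = [i * step for i in range(int(round(2 / step)) + 1)]   # AV from 0 up to 2
    print(f"{label}-stop series:",
          ", ".join(f"{f_stop(av):.2f}" for av in avs))
```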
35. Real-world Luminance Levels and the High-level Functionality of the HVS
[Figure: real-world luminance scale from direct sunlight (about 1.6 billion cd/m²) down through sky, interior lighting, moonlight and starlight to absolute darkness (0 cd/m²), showing what we see at any one adaptation state, the flexible adjustment range of the human eye, and the ranges covered by cinema, SDR TV and HDR TV.]
− The dynamic range of a typical camera and lens system is typically about 10⁵ with a fixed iris.
35
37. − As many people know, movie camera lenses are rated by a T-number instead of an F-number.
− The F-number expresses the speed of the lens on the assumption that the lens transmits 100% of the incident
light.
− In reality, different lenses have different transmittance, so two lenses with the same F-number may actually
have different speeds.
− The T-number solves this problem by taking both the diaphragm diameter and transmittance into account.
− Two lenses with the same T-number will always give the same image brightness.
T-number
T-number = (F-number / √Transmittance(%)) × 10
37
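A minimal sketch of the T-number formula above; the F2.0 lens with 80% transmittance is an assumed example.

```python
import math

def t_number(f_number: float, transmittance_percent: float) -> float:
    """T-number = F-number * 10 / sqrt(transmittance in %)."""
    return f_number * 10 / math.sqrt(transmittance_percent)

# Assumed example: an F2.0 lens that transmits 80% of the incident light.
print(round(t_number(2.0, 80.0), 2))   # ~2.24: it passes as much light as an ideal F2.24 lens
```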
38. − If you have zoomed with a zoom lens open to full aperture, you may have noted a drop in video level at
the telephoto end. This is called the F drop.
− F drop is a major determinant of the value of zoom lenses used in live on-site sports broadcasts, which
require a long focal length and must frequently contend with twilight or inadequate artificial illumination.
F-Drop
38
39. − The entrance pupil of a zoom lens changes in diameter as the focal length is changed.
− As you zoom toward the telephoto end, the entrance pupil gradually enlarges. When the entrance pupil
diameter is equal to the diameter of the focusing lens group, it cannot become any larger, so the F-number
drops. That is the reason for the F drop.
− If entrance pupil (effective aperture) diameter > front lens diameter, then F-Number drops.
− To eliminate F drop completely, the focusing lens group has to be larger than the entrance pupil at the
telephoto end of the zoom. It has to be at least equal to the focal length at the telephoto end divided by
the F-number.
− To reduce the size and weight of a zoom lens, it is common to allow a certain amount of F drop. For better
composition effect, however, in some studio zoom lenses the focusing group is made large enough that no
F drop occurs.
F-Drop
39
40. Pedestal/Master Black
Pedestal, or master black, is the set-up level: the absolute black level, or the darkest black that can be
reproduced by the camera.
40
[Figure: video waveform showing the black level (set-up/pedestal) relative to the blanking level.]
41. − Pedestal, also called master black, refers to the absolute black level or the darkest black that can be
reproduced by the camera.
− The pedestal can be adjusted as an offset to the set-up level.
− Since pedestal represents the lowest signal level available, it is used as the base reference for all other
signal levels.
If the pedestal level is set too low due to improper adjustment, the entire image will appear darker
than it should be (the image will appear blackish and heavier).
If the pedestal level is set too high, the image will look lighter than it should be (the image will look
foggy with less contrast).
− By simply lowering the pedestal level, it is possible to intentionally increase the clearness of an image
• when shooting a foggy scene
• when shooting subjects through a window
Pedestal/Master Black
41
42. Dynamic Range
− In general, dynamic range indicates the difference or ratio of the smallest and largest amount of
information an electrical device can handle.
− For a camera, “Dynamic Range” indicates:
− The range between the smallest and largest amount of light that can be handled.
• The native dynamic range of high performance SDR (Standard Dynamic Range) video cameras is still
in the range of merely 600% (in HDR (High Dynamic Range) video it is more than 1000%).
• This 600% dynamic range implies that the camera’s CCD can generate a video signal six times larger
in amplitude than the 1.0 V video standard.
42
43. − Many methods have been developed to get around this and enable more effective light handling.
− These include :
• Automatic Gain Control
• The electronic shutter
• ND filters
Dynamic Range
43
44. Minimum Illumination
− Minimum illumination indicates the minimum amount of light required for shooting with a camera,
particularly in the dark. It is expressed in Lux.
− When comparing minimum illumination specifications, it is important to consider the conditions they were
measured at.
• Cameras provide a gain up function that amplifies the signal when a sufficient level is not obtained.
Although convenient for shooting under low light, gain up also boosts the signal’s noise level.
− Minimum illumination is usually measured with the highest gain up setting provided by the camera, and
therefore does not represent the true sensitivity of the camera.
• Simply keep in mind that minimum illumination specifications should be evaluated together with this gain up setting.
44
45. Sensitivity
− The sensitivity of a camera indicates its ability to shoot in low-light areas without noise being introduced.
(It defines a camera’s raw response to light)
− Sensitivity is sometimes confused with minimum illumination, but there is a significant difference between
the two.
• Minimum illumination describes the lowest light level in which a camera can capture images without
taking noise factors into account.
− For this reason, to determine a camera’s true performance in low-light shooting, it is better to refer to
sensitivity specifications first.
45
46. − The camera’s sensitivity is measured by opening the camera’s lens iris from its closed position until the
white area of the grayscale chart reaches 100% video level (on a waveform monitor).
The lens’s F-stop reading at this state is the camera’s sensitivity.
− In CCD cameras, sensitivity is largely governed by:
• The aperture ratio (size) of the photosensitive sites
• The on-chip lens structure
− The more light gathered onto each photo-sensor, the larger the CCD output and the higher the sensitivity.
Sensitivity
46
47. − Sensitivity is described using the camera lens F-stop number
• Camera A: Sensitivity: f11 at 2000 lx (3200K, 89.9% reflectance)
• Camera B: Sensitivity: f8 at 2000 lx (3200K, 89.9% reflectance)
− The larger the F-number indication (Camera A), the higher the sensitivity
• To make a fair comparison between cameras, sensitivity specifications are indicated with the
conditions that were used to measure them.
• In the above two cases, a 2000 lx/3200K illuminant was used to light a grayscale chart that reflects
89.9% of the light hitting its surface.
Sensitivity
47
51. – On the Kelvin scale, zero degrees K (0 K) is defined as “absolute zero” temperature.
– This is the temperature at which molecular energy or molecular motion no longer exists.
– Since heat is a result of molecular motion, temperatures lower than 0 K do not exist.
Kelvin is calculated as:
K = °C + 273.15
Color Temperature
51
52. Color Temperature
– The spectral distribution of light emitted from a piece of carbon (a
black body that absorbs all radiation without transmission and
reflection) is determined only by its temperature.
– When heated above a certain temperature, carbon will start
glowing and emit a color spectrum particular to that temperature.
– This discovery led researchers to use the temperature of heated
carbon as a reference to describe different spectrums of light.
– This is called Color temperature.
52
Transmission, Reflection, Absorption
57. – Our eyes are adaptive to changes in light source colors – i.e., the color of a particular object will always
look the same under all light sources: sunlight, halogen lamps, candlelight, etc.
– However, with color video cameras this is not the case, bringing us to the definition of “color temperature.”
– When shooting images with a color video camera, it is important for the camera to be color balanced
according to the type of light source (or the illuminant) used.
Color Temperature
57
This is because different light source types emit different colors of light
(known as color spectrums) and video cameras capture this difference.
58. [Figure: example images where the camera color temperature is set lower than the ambient color temperature, and where it is set higher than the ambient color temperature.]
Color Temperature
58
59. – In video technology, color temperature is used to describe the spectral distribution of light emitted from a
light source.
– The cameras do not automatically adapt to the different spectrums of light emitted from different light
source types.
– In such cases, color temperature is used as a reference to adjust the camera’s color balance to match the
light source used.
• For example, if a 3200K (Kelvin) light source is used, the camera must also be color balanced at
3200K.
Color Temperature
59
60. Color Temperature Conversion
– All color cameras are designed to operate at a certain color temperature .
– For example, Sony professional video cameras are designed to be color balanced at 3200K, meaning that
the camera will reproduce colors correctly provided that a 3200K illuminant is used.
– This is the color temperature for indoor shooting when using common halogen lamps.
60
61. Cameras must also provide the ability to shoot under illuminants with color temperatures other than 3200K.
– For this reason, video cameras have a number of selectable color conversion filters placed before the
prism system.
– These filters optically convert the spectrum distribution of the ambient color temperature (illuminant) to
that of 3200K, the camera’s operating temperature.
– For example, when shooting under an illuminant of 5600K, a 5600K color conversion filter is used to convert
the incoming light’s spectrum distribution to that of approximately 3200K.
Color Temperature Conversion
61
63. – When only one optical filter wheel is available within the camera, electronic color temperature conversion
allows all of its filters to be Neutral Density types, providing flexible exposure control.
– The cameras also allow color temperature conversion via electronic means.
– The Electronic Color Conversion Filter allows the operator to change the color temperature from 2,000K to
20,000K as typical.
Color Temperature Conversion
63
65. − “Why do we need color conversion filters if we can correct the change of color temperature electrically
(white balance)?".
• White balance electrically adjusts the amplitudes of the red (R) and blue (B) signals to be equally
balanced to the green (G) by use of video amplifiers.
• We must keep in mind that using electrical amplification will result in degradation of signal-to-noise ratio.
• Although it may be possible to balance the camera for all color temperatures using the R/G/B amplifier
gains, this is not practical from a signal-to-noise ratio point of view, especially when large gain up is
required.
The color conversion filters reduce the gain adjustments required to achieve correct white balance.
Color Temperature Conversion
65
66. Variable Color Temperature
− The Variable Color Temp. Function allows the operator to change the color temperature from 20,000K to
2,000K
66
67. Preset Matrix Function
– Presets for 3 matrices can be set.
– The matrix levels can be preset for different lighting conditions.
– The settings can be easily controlled by the control panel.
67
69. The different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference.
White Balance & Color Temperature
69
[Figure: spectral distributions (intensity vs. wavelength in nm) of daylight, incandescent, fluorescent, halogen, cool white LED and warm white LED light sources.]
70. − The video cameras are not adaptive to the different spectral distributions of each light source type.
• In order to obtain the same color reproduction under different light sources, color temperature
variations must be compensated by converting the ambient color temperature to the camera’s
operating color temperature (Optically or Electrically).
• Once the incoming light’s color temperature is converted to the camera’s operating color
temperature (Optically or Electrically), this conversion alone does not complete color balancing of
the camera, therefore more precise color balancing adjustment must be made .
White Balance
70
A second adjustment must be made to precisely match the incoming light’s color temperature to
that of the camera known as “white balance”
71. White Balance
White balance refers to shooting a pure white object, or a grayscale chart, and adjusting the camera’s video amplifiers so
the Red, Green, and Blue channels all output the same video level.
71
73. White Balance
− Why does performing this adjustment for the given light
source ensure that the color “white” and all other
colors are correctly reproduced?
• The color “white” is reproduced by combining Red,
Green, and Blue with an equal 1:1:1 ratio.
• White Balance adjusts the gains of the R/G/B video
amplifiers to provide this output ratio for a white
object shot under the given light source type.
• Once these gains are correctly set for that light
source, other colors are also output with the correct
Red, Green, and Blue ratios.
73
(SDTV)
Y=0.11B+0.3R+0.59G
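A minimal sketch of the gain adjustment described above, assuming hypothetical channel readings taken from a white chart; green is used as the reference channel, and the gains bring the R, G and B outputs to the 1:1:1 ratio.

```python
def white_balance_gains(r_avg: float, g_avg: float, b_avg: float):
    """Return (R gain, B gain) that equalise the channels to the green level."""
    return g_avg / r_avg, g_avg / b_avg

# Assumed readings from a white chart under a warm light source:
r, g, b = 0.82, 0.70, 0.55
r_gain, b_gain = white_balance_gains(r, g, b)
print(r * r_gain, g, b * b_gain)   # all three channels now output the same level
```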
74. White Balance
– For example, when a pure yellow object is shot, the
outputs from the Red, Green, and Blue video amplifiers
will have a 1:1:0 ratio (yellow is combined by equally
adding Red and Green).
– In contrast, if the White Balance is not adjusted, and
the video amplifiers have incorrect gains for that light
source type, the yellow color would be output
incorrectly with, for example, a Red, Green, and Blue
channel ratio of 1:0.9:0.1.
– Note: White balance must be readjusted after
changing the lens.
74
(SDTV)
Y=0.11B+0.3R+0.59G
75. 75
White Balance – Camera Shading
– Even brightness white source
• Ambi-Illuminator
– Often the center can be brighter than the edges
– Measure light output with a luminance spot meter
– Set camera gain to 0dB & camera controls to zero
– Set camera F-stop between f4 and f5.6
• Adjust distance of camera to source
– Defocus Camera
76. 76
White Balance – Camera Shading
– Select WFM display and configure for RGB parade.
– No color hue should be present
– Red, green, blue channels must be balanced
– Ideally RGB should be at same level and flat
Original RGB parade waveform After white shading adjustment
77. 77
White Balance with the Vector Display
Monochrome image should be centered tightly on
the vector graticule
Off-center oval shape
indicates shading error
Use gain controls on the vector display to confirm
correct white balance
78. 78
− A neutral gray scale with the color balance
skewed toward warm light.
• Notice how the trace on the vectorscope
is pulled toward red/orange.
− The same chart with color balance skewed
toward blue.
− Notice how the trace on the vectorscope
trace is pulled toward blue.
− The gray scale with neutral color balance —
the vectorscope shows a small dot right in the
center, indicating that there is no color at all:
zero saturation.
Color Balancing with Vectorscope
79. 79
− Parade view on the waveform monitor clearly shows the incorrect color balance of what should be a
neutral gray chart.
− On the waveform, the Red channel is high, while Green is a bit lower and Blue is very low (top end of
each channel is circled in this illustration).
− This is why so many colorists and DITs say that they “live and die by parade view.”
Color Balancing with Waveform Monitor
80. Preset White
– Preset White is a white-balance selection used in shooting scenarios
• When the white balance cannot be adjusted
• Or when the color temperature of the shooting environment is already known (3200K or 5600K
for instance).
– This means that by simply choosing the correct color conversion filter, optical or electronic, the
approximate white balance can be achieved.
– It must be noted, however, that this method is not as accurate as taking a proper white balance.
o By selecting Preset White, the
R/G/B amplifiers used for white-
balance correction are set to
their center values.
Center Values
80
81. AWB (Auto White Balance)
− Unlike the human eye, cameras are not adaptive to different color temperatures of different light source
types or environments.
• This means that the camera must be adjusted each time a different light source is used, otherwise the
color of an object will not look the same when the light source changes.
• This is achieved by adjusting the camera’s white balance to make a ‘white’ object always appear
white.
• Once the camera is adjusted to reproduce white correctly, all other colors are also reproduced as
they should be.
81
82. AWB (Auto White Balance)
− The AWB is achieved by framing the camera on a white object – typically a piece of white paper/cloth
or grayscale chart – so it occupies more than 70% of the display.
− Then pressing the AWB button on the camera body instantly adjusts the camera white balance to match
the lighting environment.
Macbeth Chart
82
83. ATW (Auto Tracing White Balance)
– The AWB is used to set the correct color balance for one particular shooting environment or color
temperature.
– The ATW continuously adjusts camera color balance in accordance with any change in color
temperature.
• For example, imagine shooting a scene that moves from indoors to outdoors. Since the color
temperature of the indoor lighting and outdoor sunlight are very different, the white balance must be
changed in real time in accordance with the ambient color temperature.
83
84. Black Balance
− To ensure accurate color reproduction throughout all
video levels, it is important that the red, green, and
blue channels are also in correct balance when there
is no incoming light.
− When there is no incoming light, the camera’s red,
green, and blue outputs represent the “signal floors”
of the red, green, and blue signals, and unless these
signal floors are matched, the color balance of other
signal levels will not match either.
84
85. Black Balance
− It is necessary when:
• Using the camera for the first time
• Using the camera after a significant period out of use
• Sudden change in temperature
– Without this adjustment, the red, green, and blue color
balance cannot be precisely matched even with
correct white balance adjustments.
85
88. Requirements for using a zoom lens correctly:
• Flange back adjustment
• White balance adjustment, White shading adjustment
• Cleaning
88
Basic Composition of Beam Splitter and Image Sensor
91. Prism and Dichroic Layers
91
Transmittance of dichroic coating
Spectral characteristic of blue-reflecting
dichroic coating
Spectral characteristic of an entire
color separation system
93. – The dichroic layer is used to reflect one specific color
while passing other colors through itself.
– The three-color prisms use a combination of total
reflection layers and color selective reflection layers to
confine a certain color.
– For example, the blue prism will confine only the blue
light, and will direct this to the blue imager.
– White shading is seen in cameras that adopt a dichroic
layer in their color separation system.
Prism and Dichroic Layers
93
94. Characteristic Variations due to Polarization
94
– Light can be thought of as a mixture of transverse waves, some oscillating
perpendicular to the plane of incidence (S components) and some
oscillating parallel to it (P components).
– Natural light contains an equal mixture of S and P components, but light
reflected from a glossy surface is polarized, because the S components
are reflected more strongly than the P components.
– A dichroic coating has different characteristics for S polarized light and P
polarized light. The color of polarized light is therefore different from its
original.
– This effect can be prevented by placing a quarter-wave plate in front of
the prism to change the plane polarization of incident light to circular
polarization.
• A quartz filter used as a quarter-wave plate can almost
completely eliminate the polarization effect.
• A disadvantage is the high cost of the filter material.
Polarization characteristics of the color separation prism
Correction of polarization by a quartz filter
95. Quarter-Wave Plate
95
– A quarter-wave plate has an internal optic axis.
– It generates a quarter-wave phase difference between light polarized in the plane parallel to the optic axis
and light polarized in the plane perpendicular to the optic axis.
– Circularly polarized light can be thought of as a composition of two components that are polarized in
perpendicular planes and are one-quarter wavelength out of phase.
– The quarter-wave plate therefore has the following
properties:
• It changes circularly polarized light into light
polarized in a plane at 45 degrees to its optic
axis.
• It changes light polarized in a plane at 45
degrees to its optic axis into circularly
polarized light.
96. Quarter-Wave Plate
96
– A quartz plate is double-refractive, with different indices of refraction for ordinary rays and extraordinary
rays.
– If the refractive index for ordinary rays is 𝒏𝒐, the refractive index for extraordinary rays is 𝒏𝒆, and the
thickness of the quartz plate is 𝒅, then the plate is a quarter-wave plate for wavelengths 𝜆 satisfying the
equation:
(N + 1/4) λ = |n_o − n_e| d
N: integer
n_o = 1.5443
n_e = 1.5534
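Using the quarter-wave condition above, here is a short sketch solving for the plate thickness d; the 550 nm design wavelength is an assumed example, while the quartz indices are the values quoted on the slide.

```python
# Solve (N + 1/4) * wavelength = |n_o - n_e| * d for d, for the lowest-order solutions.
n_o, n_e = 1.5443, 1.5534
wavelength_nm = 550.0            # assumed design wavelength (green light)

for N in range(3):
    d_um = (N + 0.25) * wavelength_nm / abs(n_o - n_e) / 1000.0
    print(f"N = {N}: d = {d_um:.1f} um")
```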
99. – Flange-back is an important specification to keep in mind when choosing a lens.
– Flange-back describes the distance from the camera’s lens-mount plane (ring surface or flange) to the
imager’s surface.
– In other words, flange-back is the distance at which the mounted lens correctly frames images on the
camera’s image sensor.
– Therefore, it is necessary to select a lens that matches the flange-back specifications of the given
camera.
Back Focal Length
– Similar to flange-back is back focal length, which describes the distance from the very end of the lens (the
end of the cylinder that fits into the camera mount opening) to the imager’s surface.
– The back focal length of the lens is slightly shorter than its flange-back.
Flange-Back/Back Focal Length
99
100. Flange-back is measured differently depending on whether the camera uses a three-chip or one-chip
imaging system
– The flange-back of a one-chip camera is simply:
The distance between the lens mount plane and the imager’s surface.
– The flange-back of a three-chip camera additionally includes:
• The distance that light travels through the prism system used to separate it into R, G, and B color
components.
• The distance that light travels through this glass material is converted to the equivalent distance if it
had traveled through air.
− If a glass block of thickness d (mm) and refractive index n is inserted behind the lens, the flange-back is
affected according to the formula:
Flange-Back
100
FB (in air) = FB (actual) − (1 − 1/n) × d
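A minimal sketch of the formula above; the 48.0 mm mechanical flange-back, 30 mm glass path and refractive index of 1.6 are assumed example values.

```python
def flange_back_in_air(fb_actual_mm: float, glass_thickness_mm: float, refractive_index: float) -> float:
    """FB(in air) = FB(actual) - (1 - 1/n) * d, accounting for the prism glass in the light path."""
    return fb_actual_mm - (1 - 1 / refractive_index) * glass_thickness_mm

# Assumed example: 48.0 mm mechanical flange-back with 30 mm of prism glass (n = 1.6).
print(round(flange_back_in_air(48.0, 30.0, 1.6), 2))   # 36.75 mm equivalent air path
```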
101. Flange-Back
− In today’s cameras, flange-back is determined by the lens-mount system that the camera uses.
• Three-chip cameras use the bayonet mount system
• One-chip security cameras use either the C-Mount or CS-Mount system.
• The flange-back of the C-Mount and CS-Mount systems is standardized as 17.526 mm and 12.5 mm,
respectively.
• There are three flange-back standards for the bayonet mount system: 35.74 mm, 38.00 mm, and 48.00
mm.
101
102. Flange-Back Adjustment
F.B. adjustment:
“To match the flange back of the zoom lens to the flange back of the camera.”
• Without it, focus shifts during zooming.
Tracking Adjustment
“F.B. adjustment for the R, G, B channels.”
• Tracking adjustment is not needed in CCD/CMOS cameras, because the fixing positions of the CCD/CMOS
sensors are standardized in accordance with the longitudinal chromatic aberration of the lens.
102
104. Flange-Back Adjustment Procedure
Sony Instruction
1. Set the iris control to manual, and open the iris fully.
2. Place a flange focal length adjustment chart approximately 3 meters from
the camera and adjust the lighting to get an appropriate video output
level.
3. Loosen the Ff (flange focal length) ring lock screw.
4. With either manual or power zoom, set the zoom ring to telephoto.
5. Aim at the flange focal length adjustment chart and turn the distance ring to bring it into focus.
6. Set the zoom ring to wide angle.
7. Turn the Ff ring to bring the chart into focus. Take care not to move the
distance ring.
8. Repeat steps 4 through 7 until the image is in focus at both telephoto and
wide angle.
9. Tighten the Ff ring lock screw.
Place a Siemens star chart at about 3 m for a studio
or ENG lens, and 5 to 7 m for an outdoor lens.
104
105. Flange-Back Adjustment Procedure
105
Canon Instruction (Back Focus Adjustment)
− If the relation between the image plane of the lens and the image plane of the camera is incorrect, the
object goes out of focus at the time of zooming operation. Follow the procedure below to adjust the back
focus of the lens.
1. Select an object at an appropriate distance (1.6 to 3m recommended).
Use any object with sharp contrast to facilitate the adjustment work.
2. Set the iris fully open.
3. Set the lens to the telephoto angle by turning the zoom ring.
4. Bring the object into focus by turning the focus ring.
5. Set the lens to the widest angle by turning the zoom ring.
6. Loosen the flange back lock screw, and turn the flange back adjusting
ring to bring the object into focus.
7. Repeat steps 3 to 6 a few times until the object is brought into focus at
both the widest angle and telephoto ends.
8. Tighten the flange back lock screw.
106. Flare
– Flare is caused by numerous diffused (scattered) reflections of the incoming light within the camera lens.
– This results in the black level of each red, green, and blue channel being raised, and/or inaccurate color
balance between the three channels.
106
R channel G channel B channel
Inaccuracy of color in darker regions of the grayscale
Pedestal level balance incorrect due to the flare effect (B
channel pedestal higher than R channel and G channel)
108. Flare
[Figure: light path from the iris through the lens to the CCD imager, and the resulting waveform-monitor traces (volts) for an ideal lens and a real lens; with the real lens, flare raises the signal level in the dark parts of the picture.]
108
109. – On a video monitor, flare causes the picture to appear as a misty (foggy) image, sometimes with a color
shade.
– In order to minimize the flare effect:
A flare adjustment function is provided, which optimizes the pedestal level and corrects the balance
between the three channels electronically.
Test card for overall flare measurement Test card for localized flare measurement
109
Flare
110. Master Flare Function
− The master flare function enables a single VR (control) to adjust the master flare level while maintaining the
tracking of all R/G/B channels.
− This makes it possible to adjust flare during operation, since the color balance is never thrown off.
110
111. 111
Lens Flare
− Lens flare is the light scattered in lens systems.
− Flare manifests itself as a shift in black levels with a change in light level.
Camera Alignment with Diamond Display
112. Camera Alignment with Diamond Display
[Figure: diamond display of a chip chart showing lifted blacks and a slightly cool cast — the green–blue white point pulled slightly toward blue, the green–red white point pulled slightly toward green, and black lift visible at the bottom of the diamonds.]
Flare Adjustment
• Iris down the camera.
• Set the black level to 0 mV.
• Adjust the iris so the white chip is 1 to 2 f-stops above 700 mV.
• Adjust the flares so the black chip sits at 0 mV.
112
113. White Shading
Shading: Any horizontal or vertical non-linearity introduced during the image capture.
White shading: It is a phenomenon in which a green or magenta cast appears on the upper and lower parts
of the screen, even when white balance is correctly adjusted in the screen center.
113
– Due to differences in the angle of incidence of light on the dichroic coatings, when the white balance is
correct at the center of the image, the upper and lower edges may have a green or magenta cast.
– A dichroic coating exploits the interference of light. Different angles of incidence result in different light
paths in a multilayer coating, causing variations in the color separation characteristic. As a general rule,
the larger the angle of incidence, the more the characteristic is shifted in the short-wavelength direction.
White Shading
114
Incidence characteristic of a blue-reflecting dichroic coating
115. Relation between Exit Pupil and White Shading
– The exit pupil refers to the (virtual) image of the diaphragm formed by the lenses behind the diaphragm.
– A pencil of rays exiting from a zoom lens diverges from a point on the exit pupil, so the rays directed
toward the upper and lower edges of the image strike the dichroic coating at different angles, as can be
seen in Figure. The resulting differences in characteristics shade the upper and lower edges of the image
toward magenta or green.
White Shading
115
Entrance
Pupil
Exit
Pupil
Diaphragm
116. White Shading
116
Relation between Exit Pupil and White Shading
– Due to vignetting, when the lens is zoomed or stopped down, the exit pupil changes slightly, causing
changes in the shading.
– Use of an extender also causes shading effects by changing the exit pupil.
– The amount of shading is related to the exit pupil of the lens, so white shading has to be readjusted when
a lens is replaced by a lens with a different exit pupil distance.
Vignetting
117. Color Shading of Defocused Images
– This effect is not present when the image is in focus, but when the subject has depth, so that part of it is
defocused, the colors of the defocused part are shaded in the vertical direction.
– As with white shading, the cause is the difference in spectral characteristics at different angles of
incidence on the dichroic coating.
White Shading
117
– Because rays a and b strike the dichroic
coating at different angles, ray a is
transmitted as magenta light and ray b as
closer to green.
• When the image is in focus, both rays arrive at
the same point, and their colors average out
so that no shading occurs.
• When the image is out of focus, however, part
of it looks magenta and part of it looks green.
This effect is difficult to correct electronically.
118. – The color-filtering characteristics of each prism slightly change according to the angle that the light enters
each reflection layer (incident angle).
– Different incident angles cause different light paths in the multilayer-structured dichroic coating layer,
resulting in a change of the prism’s spectral characteristics.
– This effect is seen as the upper and lower parts of the screen having a green or magenta cast, even with
the white balance correctly adjusted in the center.
White Shading, Type 1
118
119. − Another type of white shading is also caused by a lens’s uneven transmission characteristics.
• In this case, it is observed as the center of the image being brighter than the edges.
• This can be corrected by applying a parabolic correction signal to the video amplifiers used for white
balance.
− Another cause of White shading is uneven sensitivity of the photo sensor in the imager array.
• In this case, the white shading phenomenon is not confined to the upper and lower parts of the
screen.
White Shading, Type 2 and 3
119
Prism
121. Lens’s Uneven Transmission Characteristics (Type 2)
[Figure: waveform traces for an ideal light box vs. a real lens — horizontal sweep (52 µs) and vertical sweep (20 ms) voltages measured at points A, B, C and D show the center of the image brighter than the edges.]
121
123. – The exit pupil refers to the (virtual) image of the diaphragm formed by the lenses behind the diaphragm.
– The amount of shading is related to the exit pupil of the lens, so white shading has to be readjusted when
a lens is replaced by a lens with a different exit pupil distance.
– An extender also changes the exit pupil, hence the shading.
White Shading Adjustment Note
123
Entrance
Pupil
Exit
Pupil
Diaphragm
124. V modulation is a type of white shading that occurs when there is a vertical disparity between the optical axis
of the lens and that of the prism.
– This causes the red and blue light components to be projected ‘off center’ of their associated imagers,
which results in green and magenta casts to appear on the top and bottom of the picture frame.
– V modulation is caused by
• the different characteristics of each lens and/or
• the different optical axis of each zoom position
– It can be compensated for in the camera.
• Since this compensation data directly relates to the lens, it is automatically stored/recalled as part of the Lens File.
V Modulation
124
125. Off-Center Projection on R and B Imagers
When the red and blue light components are projected “off center” on their associated imager sensors,
green and magenta casts appear on the top and bottom of the picture frame.
[Figure: green imager (on center) vs. blue and red imagers (off center).]
125
126. Black Shading
– Black shading is a phenomenon observed as unevenness in dark areas of the image due to dark current
noise of the imaging device.
– A black shading adjustment function is available to suppress this phenomenon to a negligible level.
Dark Current Noise:
− The noise induced in an imager by unwanted electric currents generated by various secondary factors,
such as heat accumulated within the imaging device.
126
127. Registration
127
– Registration means aligning the three images formed on the red, green and blue channels so that they
overlap precisely.
– With a three-tube pickup camera, it is necessary before using the camera.
– In a CCD camera, registration is so stable that the adjustment is not necessary.
Registration Examination
130. 130
Circle of Confusion and Permissible Circle of Confusion
Circle of Confusion
– Since all lenses contain a certain amount of spherical aberration and astigmatism, they cannot perfectly
converge rays from a subject point to form a true image point (i.e., an infinitely small dot with zero area).
– In other words, images are formed from a composite of dots (not points) having a certain area, or size.
– Since the image becomes less sharp as the size of these dots
increases, the dots are called “circles of confusion.”
– Thus, one way of indicating the quality of a lens is by the
smallest dot it can form, or its “minimum circle of confusion.”
Permissible Circle of Confusion
– The maximum allowable dot size in an image is called the
“permissible circle of confusion.” (The largest circle of
confusion which still appears as a “point” in the image)
131. Permissible Circle of Confusion & Effect of the Image Sensor
– The permissible circle of confusion is re-defined by the sampling of the image sensor: it is the distance
between two sampling lines (one pixel pitch).
– For the Super 35mm format, the vertical height is 13.8 mm: 13.8 mm / 2160 vertical pixels = 0.0064 mm
(pixel size ≈ 6.4 × 6.4 µm).
– For the 2/3-inch format, the vertical height is 5.4 mm: 5.4 mm / 2160 vertical pixels = 0.0025 mm
(pixel size ≈ 2.5 × 2.5 µm).
– The permissible CoC is therefore much more constrained on the 2/3-inch 4K sensor.
131
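The sampling-limited figures above follow directly from dividing the imager height by the vertical pixel count; a small sketch using the values quoted on the slide:

```python
# Pixel pitch = vertical imager height / number of vertical pixels.
sensors = {
    "Super 35mm (4K)": 13.8,    # vertical height in mm (value from the slide)
    "2/3-inch (4K)":    5.4,
}
vertical_pixels = 2160

for name, height_mm in sensors.items():
    pitch_mm = height_mm / vertical_pixels
    print(f"{name}: pixel pitch ~= {pitch_mm:.4f} mm ({pitch_mm * 1000:.1f} um)")
```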
134. Depth of Field, Depth of Focus & Permissible Circle of Confusion
[Figure: depth of field on the subject side and depth of focus around the image sensor / focal plane, both defined by the permissible circle of confusion.]
134
135. – In optics, a circle of confusion is an optical spot caused by a cone of light rays from a lens not coming to a
perfect focus when imaging a point source.
– If an image is out of focus by less than the “permissible circle of confusion”, the defocus is
undetectable.
Depth of Field, Depth of Focus & Permissible Circle of Confusion
135
[Figure: the permissible circle of confusion (maximum non-convergence still accepted as "in focus") at the film/sensor plane, and the resulting depth of field between the near and far limits of focus around the focus point.]
136. Permissible Circle of Confusion
136
[Figure: perfect focus, acceptable focus (within the assumed permissible circle of confusion) and unacceptable focus.]
137. Permissible Circle of Confusion
137
[Figure: depth of field (DoF) around the focused plane, bounded by the permissible circle of confusion.]
138. Depth of Field
138
Depth of field is greater behind the subject than in front.
d₁ (rear depth of field, behind the subject)   = (δ × F_NO × l²) / (f² − δ × F_NO × l)
d₂ (front depth of field, in front of the subject) = (δ × F_NO × l²) / (f² + δ × F_NO × l)
f: focal length
δ: permissible circle of confusion (CoC) diameter
F_NO: F-number
l: subject distance (distance from the first principal point to the subject)
[Figure: near and far limits of the depth of field around the subject, and the corresponding depth of focus at the sensor.]
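A minimal sketch of the two depth-of-field formulas above; the circle of confusion, F-number, focal length and subject distance are assumed example values (roughly a 2/3-inch 4K camera with a 25 mm lens at F4 and a subject at 3 m).

```python
def depth_of_field_mm(coc_mm, f_number, subject_distance_mm, focal_length_mm):
    """Front/rear depth of field from the formulas above:
    rear  = delta*F*l^2 / (f^2 - delta*F*l)
    front = delta*F*l^2 / (f^2 + delta*F*l)"""
    delta, F, l, f = coc_mm, f_number, subject_distance_mm, focal_length_mm
    rear = delta * F * l**2 / (f**2 - delta * F * l)
    front = delta * F * l**2 / (f**2 + delta * F * l)
    return front, rear

# Assumed example: CoC ~= 0.0025 mm, F4, subject at 3 m, 25 mm lens.
front, rear = depth_of_field_mm(0.0025, 4, 3000, 25)
print(f"in front of the subject: {front:.0f} mm, behind the subject: {rear:.0f} mm")
```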
139. Depth of Field
− When focusing a lens on an object, there is a certain distance range in front of and behind the focused
object that also comes into focus.
− Depth of field indicates the distance between the closest and furthest objects that are in focus.
• When this distance is long, the depth of field is deep.
• When this distance is short, the depth of field is shallow.
139
It is governed by the following three factors:
I. The smaller the iris opening (bigger F-number / F-stop), the deeper the depth of field.
II. The shorter the lens’s focal length, the deeper the depth of field.
III. The further the distance between the camera and the subject, the deeper the depth of field.
– Depth of field can therefore be controlled by changing these factors, allowing camera operators
creative expression.
– For example: A shallow depth of field is used for shooting portraits, where the subject is highlighted and
the entire background is blurred.
Depth of Field
140
145. Depth of Field Is Influenced by the Aperture Setting
145
[Figure: aperture and depth of field — a dot of light from the subject imaged through the lens onto the sensor; a large aperture produces a narrow depth of field and a small aperture a deep depth of field, with the permissible circle of confusion (C₀) defining the DoF limits around the focus plane.]
146. Depth of Field Is Influenced by the Aperture Setting
[Figure: ray diagrams along the optical axis for a large and a small aperture — near and far focus limits, circle of least confusion at the focal plane, circle of confusion growing to the permissible circle of confusion at the sensor, and the resulting depth of field and depth of focus.]
146
147. Depth of Field Is Influenced by the Aperture Setting
147
[Figure: depth of focus at three different aperture settings.]
149. Depth of Field Is Influenced by the Focal Length of the Lens
149
[Figure: for the same permissible circle of confusion, a longer focal length 𝒇 gives a smaller depth-of-field range.]
150. Depth of Field Is Influenced by the Focal Length of the Lens
150
[Figure: two lenses set for sharpest focus on the same scene — near and far limits of the depth of field and the corresponding depth of focus at the image sensor, both bounded by the permissible circle of confusion.]
151. Depth of Field Is Influenced by the Focal Length of the Lens
151
[Figure: two focal lengths 𝒇₁ and 𝒇₂ with subject distances 𝑫₁ and 𝑫₂, and the resulting image on the sensor in each case.]
152. Depth of Field Is Influenced by the Subject-to-Camera Distance
152
[Figure: for the same permissible circle of confusion, a longer subject distance gives a larger depth-of-field range.]