This document discusses modeling optical scattering in the Advanced LIGO gravitational wave detector. It describes calibrating cameras used to monitor scattered light by relating pixel intensity to incident power. Photodiodes along beam tube baffles measure scattered power during interferometer alignments. The bidirectional reflectance distribution function models total scatter based on incident and scattered power measurements. Images and photodiode data are analyzed to model scattering from test masses and simulate the stationary interferometer. Future work includes comparing model predictions to measured data.
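The BRDF relationship described above reduces to a one-line calculation relating scattered power, incident power, collection solid angle, and scattering angle. A minimal sketch with illustrative (not measured) values; the function name and units are assumptions:

```python
import math

def brdf(p_scattered, p_incident, solid_angle, theta_s):
    """Estimate the BRDF from a photodiode measurement.

    p_scattered: scattered power collected by the sensor (W)
    p_incident:  power incident on the optic (W)
    solid_angle: solid angle subtended by the sensor (sr)
    theta_s:     scattering angle from the surface normal (rad)
    """
    return p_scattered / (p_incident * solid_angle * math.cos(theta_s))

# Illustrative: 1 nW collected in a 1e-6 sr aperture at 10 deg, 1 W incident
value = brdf(1e-9, 1.0, 1e-6, math.radians(10))
```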
This document describes the design and operation of a DIY spectrometer built using an improvised concave diffraction grating. Light enters through a slit and is separated by wavelength when reflected off the grating. Each wavelength is focused onto a linear sensor array, which records the intensity. An Arduino microcontroller interprets the sensor data and sends it to a computer program that displays a spectrum graph in real-time. The spectrometer can characterize various light sources across ultraviolet to infrared wavelengths, including LEDs, fluorescent lamps, and white light sources. Further improvements are suggested to refine the device and add calibration features.
This document discusses approaches for improving nonuniformity correction (NUC) for resistive array infrared scene projectors. Current NUC schemes treat each pixel independently, but scene-based effects like power drops and thermal crosstalk across the array are important to consider. The document examines potential problems with scene-based correction and discusses algorithms that could be used, such as accounting for far-field diffraction effects between pixels. It also describes the current sparse array approach used to measure each pixel's response and derive nonuniformity correction curves.
This document describes the design of a compact X-band linear accelerator (linac) structure for a medical CyberKnife project conducted by Idaho State University, the Korean Atomic Energy Research Institute, and Radiation Technology eXcellence. The linac structure was designed using 2D SUPERFISH and 3D CST MICROWAVE STUDIO electromagnetic simulation programs. Key aspects of the design include optimizing a 15-cell π-mode standing wave linac structure and RF coupler to operate at 9.3 GHz, with the goal of achieving a compact, lightweight X-band linac that can be attached to a robotic arm for precise cancer treatment.
The document describes the development of an automatic MATLAB-based tool for measuring beam emittance at the Idaho Accelerator Center. An optical transition radiation screen and camera were installed to capture beam images during a quadrupole scan. MATLAB codes were developed to extract beam sizes from the images, perform a polynomial fit to determine emittance, and control the scan automatically via EPICS and MATLAB Channel Access. The tool was tested by measuring the emittance of the HRRL accelerator, reducing measurement time and error compared to manual methods.
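The quadrupole-scan fit at the heart of such a tool can be sketched briefly: the squared beam size is quadratic in the quadrupole strength, so a second-order polynomial fit yields coefficients from which the emittance is derived. The data below are invented for illustration, not HRRL measurements:

```python
import numpy as np

# Hypothetical quad-scan data: quadrupole strengths k (1/m^2) and
# rms beam sizes sigma (m) extracted from OTR screen images.
k = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
sigma = np.array([0.90, 0.55, 0.40, 0.52, 0.88]) * 1e-3

# The squared beam size is quadratic in k; fit sigma^2 = a*k^2 + b*k + c.
a, b, c = np.polyfit(k, sigma**2, 2)

# The fit minimum gives the waist condition; the emittance follows from
# a, b, c combined with the drift/quadrupole transfer-matrix elements.
k_waist = -b / (2 * a)
```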
Compressive Light Field Photography using Overcomplete Dictionaries and Optim...Ankit Thiranh
This paper proposes a design for a compressive light field camera that can recover higher-resolution light fields from a single image. It also discusses other useful applications of light field atoms, including 4D light field compression and denoising.
This document summarizes the commissioning of the pencil beam dose calculation algorithm for treatment planning with the TrueBeam STx linear accelerator at Nova Scotia Cancer Centre. Key points include:
- The pencil beam algorithm models dose distribution as the convolution of pre-calculated pencil beam dose kernels and primary photon fluence for the treatment field.
- The algorithm was commissioned for the TrueBeam STx linac, which features a high-definition MLC and is integrated with the ExacTrac IGRT system for precise stereotactic radiotherapy.
- Parameters of the algorithm including pencil beam kernels, primary photon fluence modeling, and total dose calculation were described. The algorithm was commissioned to accurately calculate dose distributions for
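The convolution in the first bullet can be sketched as an FFT-based 2D convolution of a pencil-beam kernel with the fluence map; the kernel and field below are toy stand-ins, not commissioned beam data:

```python
import numpy as np

def dose_from_fluence(fluence, kernel):
    """Sketch of the pencil-beam model: dose = fluence (*) kernel.

    Both arrays are 2D maps on the same grid; the convolution is done
    with FFTs (circular convolution, acceptable when the field is well
    inside the grid)."""
    F = np.fft.rfft2(fluence)
    K = np.fft.rfft2(np.fft.ifftshift(kernel))
    return np.fft.irfft2(F * K, s=fluence.shape)

# Toy 64x64 field: uniform fluence in a central square, Gaussian kernel.
n = 64
y, x = np.mgrid[:n, :n] - n // 2
kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
kernel /= kernel.sum()                       # unit-mass kernel
fluence = ((np.abs(x) < 10) & (np.abs(y) < 10)).astype(float)
dose = dose_from_fluence(fluence, kernel)
```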
The document discusses superresolution technology that can improve the resolution of infrared camera images. It begins by explaining the basic problem that small objects may be invisible or measured incorrectly in infrared images due to pixel size limitations. It then describes how superresolution works by using multiple images and deconvolution algorithms to effectively decrease pixel pitch by 1.6x and increase usable resolution also by 1.6x compared to normal images. Experimental results show that superresolution detects spatial frequencies about 50% higher than the camera's detector cutoff and improves temperature measurement accuracy compared to interpolation. The technology will be available as a software update for all current Testo infrared cameras.
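The multi-image principle behind superresolution can be illustrated with a naive shift-and-add sketch; commercial implementations add deconvolution on top, and the function and parameters here are assumptions, not Testo's algorithm:

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive shift-and-add superresolution sketch.

    frames: list of low-res 2D arrays, all the same shape
    shifts: per-frame (dy, dx) subpixel offsets in low-res pixels
    factor: upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        # Nearest high-res bin for each low-res sample.
        ys = (np.arange(h)[:, None] + dy) * factor
        xs = (np.arange(w)[None, :] + dx) * factor
        yi = np.clip(np.round(ys).astype(int), 0, h * factor - 1)
        xi = np.clip(np.round(xs).astype(int), 0, w * factor - 1)
        yi2 = np.broadcast_to(yi, img.shape)
        xi2 = np.broadcast_to(xi, img.shape)
        np.add.at(acc, (yi2, xi2), img)
        np.add.at(cnt, (yi2, xi2), 1)
    return acc / np.maximum(cnt, 1)

out = shift_and_add([np.ones((4, 4))], [(0.0, 0.0)], factor=2)
```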
Backtracking Search Optimization for Collaborative Beamforming in Wireless Se...TELKOMNIKA JOURNAL
Due to limited energy and constrained communication capabilities, high battery power consumption has become one of the major issues in wireless sensor networks (WSN). A collaborative beamforming (CB) method was therefore introduced to improve the radiation beampattern and compensate for the power consumption. CB is a technique that increases sensor node gain and performance by steering toward the desired objectives. The sensor nodes were located randomly in the WSN environment and were designed to cooperate with each other, acting as a collaborative antenna array modeled as a circular array formation. The positions of the array nodes were determined by obtaining the optimum antenna array parameters using the Backtracking Search Optimization Algorithm (BSA), with side-lobe level minimization as the design objective. The side-lobe suppression achieved by BSA was better than the radiation beampattern obtained for a conventional uniform circular array.
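The circular-array beampattern that such an optimizer works on can be evaluated as below; this sketch only computes the uniform-array factor that BSA would then perturb, and the node count and radius are illustrative:

```python
import numpy as np

def circular_array_factor(n_nodes, radius_wl, angles):
    """Normalized array factor magnitude of a uniform circular array.

    n_nodes:   number of collaborative nodes on the circle
    radius_wl: circle radius in wavelengths
    angles:    azimuth angles (rad) at which to evaluate, main beam at 0
    """
    phi_n = 2 * np.pi * np.arange(n_nodes) / n_nodes
    k = 2 * np.pi  # wavenumber times one wavelength
    # Phase of node n toward angle t, steered so the beam points at t = 0.
    af = np.array([
        abs(np.sum(np.exp(1j * k * radius_wl *
                          (np.cos(t - phi_n) - np.cos(-phi_n)))))
        for t in angles
    ])
    return af / n_nodes

theta = np.linspace(-np.pi, np.pi, 721)
af = circular_array_factor(16, 1.0, theta)
# Peak sidelobe level is the largest af value outside the main lobe;
# an optimizer would adjust node positions/weights to minimize it.
```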
Evaluation of a Multipoint Equalization System based on Impulse Responses Pro...a3labdsp
This document evaluates a multipoint equalization algorithm that combines fractional octave smoothing of impulse responses measured in multiple locations in a room or car. It investigates how the equalization performance is affected by varying parameters like the number of measurement positions and equalization zone size. The algorithm extracts a representative prototype response and inverse filter from the smoothed impulse responses. Tests show the proposed approach achieves better spectral deviation and equalization than a single-point method, and performance decreases but remains effective as the equalization zone is expanded to more distant positions.
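The prototype-and-inverse idea can be sketched in a few lines; fractional-octave smoothing is omitted here for brevity, and the regularization constant is an assumption, not a value from the paper:

```python
import numpy as np

def prototype_inverse(impulse_responses, n_fft=1024, reg=1e-3):
    """Simplified multipoint-equalizer sketch.

    Averages the magnitude responses measured at several positions into a
    prototype response, then builds a regularized inverse magnitude from
    which an inverse filter could be designed.
    """
    mags = [np.abs(np.fft.rfft(ir, n_fft)) for ir in impulse_responses]
    proto = np.mean(mags, axis=0)
    inv = 1.0 / np.maximum(proto, reg)   # regularized magnitude inverse
    return proto, inv

# Two identical impulse-like measurements give a flat prototype.
delta = np.zeros(64)
delta[0] = 1.0
proto, inv = prototype_inverse([delta, delta])
```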
This paper presents a technique for denoising digital radiographic images using a wavelet-based hidden Markov model. The method first applies the Anscombe transformation to adjust for Poisson noise, then uses the dual-tree complex wavelet transform for decomposition. A hidden Markov tree model is used to capture correlations between wavelet coefficients across scales. Two correction functions are applied to shrink coefficients before inverse transformation. Evaluation on phantom and clinical images showed the method outperforms Gaussian filtering in terms of noise reduction, detail quality and bone sharpness, though some edges had artifacts.
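The Anscombe step mentioned above is a simple variance-stabilizing transform that makes Poisson noise approximately Gaussian with unit variance; a minimal sketch (the unbiased inverse used in practice differs slightly from the algebraic one shown):

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson counts -> approximately Gaussian,
    unit-variance data, suitable for Gaussian-oriented wavelet shrinkage."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```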
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th...IRJET Journal
This document provides a summary and analysis of various single image defogging techniques. It begins with an abstract that outlines how different fog removal algorithms detect and remove fog to improve image visibility. It then reviews several fog removal techniques from research papers. These include using fog density perception to estimate transmission maps, enhancing contrast using dark channel priors, combining dark channel priors with fuzzy logic for efficiency, using dark channel priors and guided filters to extract transmission maps and enhance images. The document aims to analyze and compare different techniques for efficiently removing fog from digital images.
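The dark channel prior recurring in these techniques is straightforward to compute: a per-pixel minimum over color channels followed by a local minimum filter. A pure-NumPy sketch; the patch size and nested-loop filter are illustrative, not optimized:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image: min over channels, then a
    patch x patch local minimum filter. img is HxWx3 in [0, 1]."""
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

dc = dark_channel(np.ones((8, 8, 3)), patch=3)
```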
Monte carlo Technique - An algorithm for Radiotherapy CalculationsSambasivaselli R
Monte Carlo techniques are used to simulate particle transport through complex geometries to calculate dose distributions. The key steps are: (1) sampling the distance to the next interaction, interaction type, and energy/direction of secondary particles, (2) tracking particle histories through condensed histories or splitting/Russian roulette, and (3) calculating dose deposition in voxels. While fully accurate, Monte Carlo is statistically limited by the number of histories. Variance reduction techniques increase efficiency but introduce weighting factors. Overall uncertainty is typically within 3% given proper commissioning and cross-section libraries.
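Step (1), sampling the distance to the next interaction, is a textbook inversion of the exponential attenuation law; a minimal sketch:

```python
import random
import math

def sample_free_path(mu):
    """Sample the distance to the next interaction for attenuation
    coefficient mu (1/cm) by inverting the exponential CDF:
    s = -ln(xi) / mu, with xi uniform in (0, 1]."""
    xi = 1.0 - random.random()   # avoid log(0)
    return -math.log(xi) / mu

# Sanity check: the mean sampled distance approaches the mean free
# path 1/mu as the number of histories grows.
random.seed(0)
mu = 0.2
mean_s = sum(sample_free_path(mu) for _ in range(100_000)) / 100_000
```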
The document discusses affine transforms and their applications in image processing. It provides definitions and examples of different types of affine transforms including translation, rotation, and scaling. It also discusses logical operators that can be applied to images like AND, OR, XOR. Additional topics covered include sources of noise in digital images and sample code for applying affine transforms and filters to an image.
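The affine transforms listed (translation, rotation, scaling) compose into a single homogeneous matrix; a minimal sketch:

```python
import numpy as np

def affine_matrix(tx=0.0, ty=0.0, angle=0.0, scale=1.0):
    """3x3 homogeneous affine matrix combining translation, rotation
    (radians, about the origin), and uniform scaling."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

def apply_affine(points, m):
    """Apply a 3x3 affine matrix to an (N, 2) array of points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ m.T)[:, :2]

# Rotating (1, 0) by 90 degrees lands on (0, 1).
p = apply_affine(np.array([[1.0, 0.0]]), affine_matrix(angle=np.pi / 2))
```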
Analysis of Image Compression Using WaveletIOSR Journals
Recently, the wavelet transform has become a powerful tool for image compression. This paper analyzes the mean square error, peak signal-to-noise ratio, and bits-per-pixel ratio of images compressed at different decomposition levels using wavelets.
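The three metrics analyzed can be sketched directly; the peak value and array shapes are assumptions:

```python
import numpy as np

def mse(original, compressed):
    """Mean square error between two images."""
    return np.mean((original.astype(float) - compressed.astype(float)) ** 2)

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak."""
    m = mse(original, compressed)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def bits_per_pixel(n_bits, shape):
    """Compression rate: total coded bits divided by pixel count."""
    return n_bits / (shape[0] * shape[1])
```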
The exposure index (EI) measures the amount of exposure received by the image receptor and is dependent on mAs, detector area, and beam attenuation. It indicates image quality, with manufacturers recommending an optimal EI range. EI can be compared to film speed and blackening in film-screen radiography. Different manufacturers calculate EI differently, resulting in variable ranges and definitions that are not comparable across systems. Standardization efforts aim to define EI proportional to air kerma exposure.
VISUAL MODEL BASED SINGLE IMAGE DEHAZING USING ARTIFICIAL BEE COLONY OPTIMIZA...ijistjournal
Images are often degraded by atmospheric haze, a phenomenon caused by particles in the air that scatter light. Haze induces a loss of contrast; its visual effect is a blurring of distant objects. This paper presents a novel algorithm for improving the visibility of an image degraded by haze. The proposed method uses a cost function based on a human visual model to estimate the airlight map, employing Artificial Bee Colony optimization (ABC) as the optimization technique for this estimation. The image is dehazed by removing the estimated airlight from the degraded image. The performance of the algorithm is tested against various other dehazing methods, and the proposed algorithm dehazes the image effectively, outperforming the others.
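The final step, removing the estimated airlight, inverts the standard haze model I = J·t + A·(1 − t). A minimal sketch assuming the airlight and transmission map have already been estimated (by ABC or otherwise):

```python
import numpy as np

def dehaze(hazy, airlight, transmission, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) to recover J.

    hazy:         observed HxW (or HxWx3) image in [0, 1]
    airlight:     estimated airlight A (scalar or per-channel)
    transmission: HxW map t in (0, 1]
    """
    t = np.maximum(transmission, t_min)   # avoid amplifying noise
    if hazy.ndim == 3:
        t = t[..., None]
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)
```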
Variable radiation pattern from co axial probe fed rectangular patch antenna ...eSAT Journals
Abstract The idea of obtaining variable radiation patterns from the same antenna is an important aspect of achieving adaptive antenna systems. In EM signal processing, a change of radiation pattern reflects the information to be transmitted, its rate of transmission, geographical changes, the direction of transmission, and so on; whenever such a requirement arises, the radiation pattern must be changed to satisfy it. Electronically steerable antennas alter the radiation by varying the feed, and similarly, shaped patterns from array antennas are obtained by calculating and applying the appropriate feed. In the present concept, metamaterials are used to obtain different radiation patterns at different operating frequencies from the same antenna, without physically changing the antenna or varying its feed. Key Words: Inductance, capacitance, operating frequency, variable radiation, enhancement, radiation cancellation.
This document provides an overview of wavelet-based image fusion techniques. It discusses wavelet transform theory, including continuous and discrete wavelet transforms. For discrete wavelet transforms, it describes decimated, undecimated, and non-separated approaches. It explains how wavelet transforms can extract detail information from images at different resolutions, which can then be combined to create a fused image containing the best characteristics of the original images. While providing improved results over traditional fusion methods, wavelet-based approaches still have limitations such as artifact introduction that researchers continue working to address.
Enhancing the Radiation Pattern of Phase Array Antenna Using Particle Swarm O...IOSR Journals
The document describes a study that uses particle swarm optimization to enhance the radiation pattern of a phase array antenna by minimizing sidelobe levels. It first provides background on issues with high sidelobes in phase array antennas, such as power losses and interference. It then summarizes previous research using techniques like genetic algorithms for antenna array optimization. The study models the radiation pattern of linear arrays with different element numbers and calculates gain, finding that gain increases with more elements. However, sidelobe levels also increase relatively. Therefore, the study proposes using particle swarm optimization to optimize current excitation and control sidelobe levels while maintaining a narrow beamwidth.
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGScsandit
Power transmission line inspection using Unmanned Aerial Vehicles (UAVs) is taking off as an exciting solution due to advances in sensors and flight technology. Extracting power-lines from aerial images taken from the UAV in complex natural surroundings is a critical task in this problem. In this paper we propose an approach for suppressing the natural surroundings that leads to power-line detection. The results of applying our method to real-life video frames taken from a UAV demonstrate that our approach is very effective. We believe that our approach can easily be used for line detection in any other real outdoor video as well.
Feature based ghost removal in high dynamic range imagingijcga
This paper presents a technique to reduce ghost artifacts in a high dynamic range (HDR) image. In HDR imaging, the motion between multiple exposure images of the same scene must be detected in order to prevent ghost artifacts. First, we establish correspondences between the aligned reference image and the other exposure images using the zero-mean normalized cross correlation (ZNCC). Then, we find object motion regions using adaptive local thresholding of ZNCC feature maps and motion map clustering. In this process, we focus on finding accurate motion regions and on reducing false detections in order to minimize side effects as well. Through experiments with several sets of low dynamic range images captured with different exposures, we show that the proposed method can remove ghost artifacts better than existing methods.
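The ZNCC similarity used for the correspondences can be sketched in a few lines; its invariance to brightness offset and gain is exactly what makes it suitable for comparing different exposures:

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross correlation of two equal-size patches.
    Returns a value in [-1, 1]; 1 means a perfect match up to brightness
    gain and offset."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```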
The document summarizes the optical requirements for a low earth orbit surveillance satellite. It describes selecting components, such as a charge coupled device (CCD) and mirror, that meet the requirements for high-resolution photos while remaining cost efficient. A Kodak CCD and a BK7 glass mirror were chosen. Calculations were shown for the mirror size and shape, the electrical power needs met by solar panels and batteries, and matching the mirror's optical performance to the CCD's requirements. The design was found to meet constraints such as a resolution under 2.5 m while keeping costs low through choices like the affordable Kodak CCD.
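The resolution constraint comes down to the ground sample distance of the imager; a minimal sketch with invented numbers, not the actual design values:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground footprint of one detector pixel for a nadir-pointing imager:
    pixel_pitch * altitude / focal_length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical numbers: 9 um pixels, 2.5 m focal length, 600 km orbit.
gsd = ground_sample_distance(9e-6, 2.5, 600e3)   # 2.16 m per pixel
```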
Estimation of Separation and Location of Wave Emitting Sources : A Comparison...sipij
This document compares the Burg method and Fourier transform method for estimating the separation and location of wave emitting sources. The Burg method provides higher resolution than the Fourier transform method and can better resolve adjacent sources. Simulation results show the Burg method more accurately estimates source separation and location, especially when the separation distance is small or noise is present. The percentage error is smaller for the Burg method compared to the Fourier transform method.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document discusses echo cancellation using adaptive combination of normalized subband adaptive filters (NSAFs). It presents the following:
1. Fullband adaptive filters can have slow convergence due to correlated speech input and long echo path impulse responses. Subband adaptive filters (SAFs) address this by using individual adaptive filters in spectral subbands.
2. Adaptive combination of SAFs provides a way to achieve both fast convergence and small steady-state error. It independently adapts filters with different step sizes, then combines them using a mixing parameter adapted by stochastic gradient descent.
3. The proposed method adaptively combines NSAFs in subbands. It uses a large step size filter for fast convergence and a
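The convex combination in point 2 can be sketched as follows; the sigmoid parameterization and step size are common choices in adaptive-combination schemes, assumed here rather than taken from the paper:

```python
import numpy as np

def combine_outputs(y_fast, y_slow, a):
    """Convex combination of two adaptive-filter outputs via a mixing
    parameter lam = sigmoid(a)."""
    lam = 1.0 / (1.0 + np.exp(-a))
    return lam * y_fast + (1.0 - lam) * y_slow, lam

def update_a(a, d, y_fast, y_slow, mu_a=0.1, a_max=4.0):
    """Stochastic-gradient step on the combined error e = d - y.
    a is clipped so lam stays away from 0 and 1 and adaptation
    never stalls."""
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y_fast + (1.0 - lam) * y_slow
    e = d - y
    a = a + mu_a * e * (y_fast - y_slow) * lam * (1.0 - lam)
    return float(np.clip(a, -a_max, a_max))

# If the "fast" filter keeps matching the desired signal, the mixing
# parameter drifts toward it.
a = 0.0
for _ in range(100):
    a = update_a(a, 1.0, 1.0, 0.0)
```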
Design a mobile telephone system in a certain cityeSAT Journals
Abstract
A mobile telephone system is designed to facilitate better communication among 8000 users in the centre of Roma. A network performance prediction tool was used to determine the best location for deployment of the base station, based on parameters and predictions obtained by calculating the data needed to construct the base station.
Theoretically, call planning, cell splitting, frequency re-use, offered traffic, and similar techniques are used to obtain good coverage for a given city. Many calculations were also carried out in order to find the best cell cluster.
The objective achieved was designing a network system in the city centre of Roma. The 7-cell and 12-cell clusters were selected for their overall efficiency. In addition, co-channel interference, best server, maximum received power, and related quantities were examined for deployment of the base station with the help of Winprop.
Key Words: Mobile telephone system, Cell cluster, Frequency re-use, Network performance, Power, Winprop.
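The cell-cluster arithmetic behind the 7-cell and 12-cell choices is compact; a minimal sketch of the standard hexagonal-geometry relations:

```python
import math

def cluster_size(i, j):
    """Valid cellular cluster sizes follow N = i^2 + i*j + j^2 for
    non-negative integers i, j, giving N = 1, 3, 4, 7, 9, 12, ..."""
    return i * i + i * j + j * j

def reuse_distance_ratio(n):
    """Co-channel reuse ratio D/R = sqrt(3N) for hexagonal geometry:
    larger clusters push co-channel cells further apart."""
    return math.sqrt(3 * n)

# The 7-cell and 12-cell clusters considered in the design:
dr7 = reuse_distance_ratio(cluster_size(1, 2))    # N = 7
dr12 = reuse_distance_ratio(cluster_size(2, 2))   # N = 12
```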
The document describes an Auto Signal Source Locator (ASSL) feature of the Sage Instruments 8901 UCTT that allows users to locate a signal source by driving around with the UCTT connected to GPS and RF antennas mounted on their vehicle. As measurements are taken during the drive, the ASSL algorithm estimates the source location and displays it as a red circle on a map. With more data points, the estimated location becomes more accurate. The feature provides a preliminary location that can then be refined using directional antennas on foot.
The document discusses using electromagnetically induced transparency (EIT) to increase the sensitivity of gravitational wave interferometers. It explores EIT properties through modeling and simulation to determine parameters that theoretically allow high transmission and narrow linewidths suitable for filtering squeezed light. Experimentally, the largest contrast observed was 3.9% with a linewidth of 657 Hz, while the narrowest linewidth was 202 Hz with 0.84% contrast. Both simulation and experimental results are presented and compared.
This document provides strategies for teaching physics to English learners. It recommends reducing cognitive, cultural, and language loads by pre-teaching vocabulary with worksheets, using total physical response and multiple intelligences activities, diagramming abstract concepts, using units as cues, and employing AVID vocabulary tables. The goal is to give students tools to succeed incrementally and build confidence over time rather than searching for a perfect strategy.
This document discusses using electromagnetically induced transparency (EIT) to increase the quantum limited sensitivity of gravitational wave interferometers. It first provides background on gravitational wave detection and EIT. It then describes simulations of EIT using two modeling programs which varied parameters like drive intensity and detuning. Experimental methods are also described where an EIT system using rubidium vapor was tested while varying parameters like cell temperature, drive intensity, and beam size. Results from both simulation and experiment are presented and show agreement, with the best EIT contrast and linewidth achieved. Further improvements are suggested by using lower temperatures and drive intensities.
from a UAV demonstrate that our approach is very effective. We believe that our approach can
be easily used for line detection in any other real outdoor video as well.
Feature based ghost removal in high dynamic range imagingijcga
This paper presents a technique to reduce the ghost artifacts
in a high dynamic range (HDR) image. In HDR
imaging, we need to detect the motion between multiple exp
osure images of the same scene in order to
prevent the ghost artifacts
. First, w
e
establish
correspondences between the aligned reference image and the
other exposure images using the zero
-
mean normalized cross correlation (ZNCC
).
T
hen
, we
find object
moti
on regions
using
adaptive local thresholding of ZNCC feature maps and motion map clustering. In this
process, we focus on finding accurate motion regions and on reducing false detection in order to minimize
the side effects as well.
Through
experiments wit
h several sets of
low dynamic range
images captured with
different exposures, we show that the proposed method can remove the ghost artifacts better than existing
methods
.
The document summarizes the optical requirements for a low earth orbit surveillance satellite. It describes selecting components like a charge coupled device (CCD) and mirror that meet requirements for high resolution photos while being cost efficient. A Kodak CCD and BK7 glass mirror were chosen. Calculations were shown for the mirror size/shape, electrical power needs met by solar panels and batteries, and ensuring the mirror's optical power matches the CCD's needs. The design was found to meet constraints like a resolution under 2.5m while keeping costs low through choices like the affordable Kodak CCD.
Estimation of Separation and Location of Wave Emitting Sources : A Comparison...sipij
This document compares the Burg method and Fourier transform method for estimating the separation and location of wave emitting sources. The Burg method provides higher resolution than the Fourier transform method and can better resolve adjacent sources. Simulation results show the Burg method more accurately estimates source separation and location, especially when the separation distance is small or noise is present. The percentage error is smaller for the Burg method compared to the Fourier transform method.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document discusses echo cancellation using adaptive combination of normalized subband adaptive filters (NSAFs). It presents the following:
1. Fullband adaptive filters can have slow convergence due to correlated speech input and long echo path impulse responses. Subband adaptive filters (SAFs) address this by using individual adaptive filters in spectral subbands.
2. Adaptive combination of SAFs provides a way to achieve both fast convergence and small steady-state error. It independently adapts filters with different step sizes, then combines them using a mixing parameter adapted by stochastic gradient descent.
3. The proposed method adaptively combines NSAFs in subbands. It uses a large step size filter for fast convergence and a
Design a mobile telephone system in a certain cityeSAT Journals
Abstract
A mobile telephone system is designed to facilitate better communication among 8000 users in the centre of Roma. The network
performance prediction tool was used to determine the best location for deployment of the base station. It was based on some
parameters or predictions which were obtained by calculating needed data to construct the base station.
Theoretically call planning, cell splitting, frequency re-use, offered traffic etc. techniques are used to obtain a good coverage for
a certain city. In order to find the best cell cluster, many calculations were also implemented.
The objective achieved was designing a network system in city centre of Roma. The 7 cell cluster and 12 cell cluster was selected
for its overall efficiency. In addition to this, co-channel interference, best server, maximum received power etc. were examined for
deployment of the base station with help of Winprop.
Key Words:Mobile telephone system, Cell cluster, Frequency re-use, Network performance, Power,Winprop.
The document describes an Auto Signal Source Locator (ASSL) feature of the Sage Instruments 8901 UCTT that allows users to locate a signal source by driving around with the UCTT connected to GPS and RF antennas mounted on their vehicle. As measurements are taken during the drive, the ASSL algorithm estimates the source location and displays it as a red circle on a map. With more data points, the estimated location becomes more accurate. The feature provides a preliminary location that can then be refined using directional antennas on foot.
The document discusses using electromagnetically induced transparency (EIT) to increase the sensitivity of gravitational wave interferometers. It explores EIT properties through modeling and simulation to determine parameters that theoretically allow high transmission and narrow linewidths suitable for filtering squeezed light. Experimentally, the largest contrast observed was 3.9% with a linewidth of 657 Hz, while the narrowest linewidth was 202 Hz with 0.84% contrast. Both simulation and experimental results are presented and compared.
This document provides strategies for teaching physics to English learners. It recommends reducing cognitive, cultural, and language loads by pre-teaching vocabulary with worksheets, using total physical response and multiple intelligences activities, diagramming abstract concepts, using units as cues, and employing AVID vocabulary tables. The goal is to give students tools to succeed incrementally and build confidence over time rather than searching for a perfect strategy.
This document discusses using electromagnetically induced transparency (EIT) to increase the quantum limited sensitivity of gravitational wave interferometers. It first provides background on gravitational wave detection and EIT. It then describes simulations of EIT using two modeling programs which varied parameters like drive intensity and detuning. Experimental methods are also described where an EIT system using rubidium vapor was tested while varying parameters like cell temperature, drive intensity, and beam size. Results from both simulation and experiment are presented and show agreement, with the best EIT contrast and linewidth achieved. Further improvements are suggested by using lower temperatures and drive intensities.
The document provides details of four recording sessions to record various instruments for a folk band performing the song "Bok-Espok". Due to issues with availability of musicians, some instruments needed to be recorded separately or rerecorded. Microphones were carefully selected and positioned for each instrument. Mixing involved editing tracks, panning instruments, and EQing different tracks to balance the levels and frequencies. The goal was to capture each part with clarity while maintaining the live feel of the full band performance.
Modeling of Optical Scattering in Advanced LIGOHunter Rew
This document summarizes modeling of optical scattering in the Advanced LIGO gravitational wave detector. It describes measuring scattering using cameras and photodiodes, and calibrating the camera using known incident light levels. Scattering was measured from various test masses, with a particle on one test mass accounting for over 50% of scattering. Scatter data from photodiodes was also analyzed. A simulation tool called Static Interferometer Simulation was used to model scattering, showing good agreement with measurements. Ongoing work involves improving the simulation by accounting for alignment changes and different scatter sources.
What is Policy?
Why create a Policy?
How to make a Policy?
What is Sustainability?
How does Energy Use Impact Sustainability?
What is Climate Change?
How does Energy Use Contribute to Climate Change?
The Energy Challenge?
Some thoughts on the Way Forward
This document summarizes a presentation on low energy architecture given by Ar. Dr. Lim Chin Haw. It discusses the drivers for low energy architecture like increasing population and energy demand. It covers passive design strategies like building orientation, shading, natural ventilation and insulation. Active systems using renewable energy are also discussed. Computer simulations tools for analyzing building energy performance are mentioned. Case studies of low energy buildings in Malaysia are presented. Near future trends in low energy architecture are outlined.
Mohammed Rabah is a fire and safety engineer seeking a position in safety management. He has a B.Sc. in Fire and Safety Engineering and over 10 safety certifications. His experience includes projects in risk analysis, safety audits, and business continuity planning. He is also proficient in Arabic and English with volunteer and work experience in security and emergency response organizations.
This document discusses orbital angular momentum (OAM) modes in Bessel-Gauss beams and how atmospheric turbulence can cause crosstalk between modes. It describes how Bessel-Gauss beams are generated using axicons and modulated with OAM phases. Experiments were conducted using spatial light modulators to generate multiplexed OAM modes and measure the influence of simulated turbulence on power transmission and spreading between modes. Future work is proposed to further study turbulence effects over longer propagation distances and use mode sorting techniques.
1839 - Sir William Grove, first electrochemical H2/O2
reaction to generate energy
• 1950s - GE developed the solid-ion exchange H2 fuel cell
used by NASA
• 1960s- GE produced the fuel cell-based electrical power
system for NASA Gemini and Apollo space capsules
• 1960s other fuel cells discovered – phosphoric acid, SOFC,
molten carbonate
• 1970s – Vehicle manufacturers began to experiment FCEV.
• 1990 – The California Air Resource Board introduced the
Zero Emission Vehicle (ZEV) Mandate.
• 2000 – Fuel cell buses were deployed as part of the
HyFleet/CUTE project
• 2007 – fuel cell started to be sold commercially as APU
• 2008 – Honda begins leasing the FCX fuel cell electric
vehicle.
• 2009 – Large scale of residential CHP programme in Japan.
Eksperimen ini menunjukkan bagaimana detergen dapat membunuh bebek jika minyak di tubuhnya terkena detergen, karena detergen akan membuat minyak larut dan bebek tak dapat mengambang. Detergen yang terbuang ke sungai dapat mencemari perairan dan menumpuk di tubuh ikan, menyebabkan berbagai masalah lingkungan dan kesehatan.
The document describes the Fluorescence detector Array of Single-pixel Telescopes (FAST) project. FAST aims to build an array of low-cost single-pixel fluorescence detectors spaced over large areas to study ultra-high energy cosmic rays (UHECR). Each FAST station would have 12 telescopes with 4 PMTs each, covering a 30°x360° field of view. An array of 500 stations over 150,000 km2 could achieve an exposure over 12 times that of the Pierre Auger Observatory. Simulations show FAST may achieve 10% energy resolution and 35 g/cm2 Xmax resolution for cosmic rays above 1019.5 eV. The full-scale FAST prototype has been constructed and work is ongoing to develop
Charge Sharing Suppression in Single Photon Processing Pixel Arrayijeei-iaes
This paper proposes a mechanism for suppression of charge sharing in single photon processing pixel array by introducing additional circuit. The idea of the proposed mechanism is that in each pixel only analog part will introduced, the digital part is shared between each four pixels, this leads to reduce the number of transistors (area). By having communication pixels, a decision that which one of the neighboring pixels shall collect the distributed charges is taken. The functionality, which involves analog and digital behaviors, is modeled in VHDL.
This document describes a laser distance measurement system using a webcam. It consists of a laser transmitter and webcam receiver. The laser pulse is reflected off an object and received by the webcam. Software calculates the distance based on the time of flight. The system achieves high accuracy of ±3cm. It calibrates the system using test measurements to determine the relationship between pixel location of the laser dot and actual distance. This allows accurate distance measurements within a few percent of error out to over 2 meters. Potential improvements discussed are using a laser line instead of dot for more data points and a green laser for better visibility.
The document discusses optical detectors used in fiber optic communications systems. It describes the functioning of PIN photodetectors and avalanche photodetectors (APDs). PIN photodetectors convert received light photons into an electric current through the photoelectric effect. Their performance is characterized by quantum efficiency and responsivity. APDs have higher gain than PIN photodiodes through impact ionization, but also higher noise. Both device types aim to optimize sensitivity while minimizing noise.
This document discusses the application of photon counting multibin detectors to spectral CT. It describes how basis decomposition and energy weighting techniques can be used with multibin detectors to overcome limitations of conventional CT like beamhardening artifacts, non-standardized CT numbers, suboptimal contrast-to-noise ratio, and contrast cancellation. Basis decomposition involves representing the linear attenuation coefficient as a combination of energy dependent basis functions, while energy weighting optimizes contrast-to-noise by assigning different weights to different energy bins. These techniques enable spectral CT to perform quantitative imaging and low-dose scanning.
The document summarizes an experiment to measure the quantum efficiency (QE) of a KAF-0402 CCD image sensor using a liquid crystal filter. Key steps included: (1) Taking images of the sensor illuminated by the filter across wavelengths from 400-750nm and converting the results to electrons/second/cm^2, (2) Measuring the corresponding photon flux with a photodiode, (3) Calculating QE by dividing the sensor output by the photon flux, and (4) Finding the results matched well with the manufacturer's reported QE curve after applying calibration coefficients to account for the photodiode's wavelength dependence.
The document discusses various techniques for measuring properties of optical fibers, including:
- Attenuation measurement using the cut-back method to determine loss per unit length.
- Absorption and scattering loss measurement using temperature rise calculations and comparing scattered light.
- Dispersion measurement in the time domain using pulse broadening or in the frequency domain using spectral broadening.
- Refractive index profiling using interferometry of fiber slices or near-field scanning of light distributions.
- Numerical aperture determination by measuring far-field emission patterns or trigonometric calculations from patterns.
- Diameter measurement using laser scanning of fiber shadows or analysis of far-field scattering patterns.
svk.ppt final powerrr pointttt presentationsrajece
This document discusses various techniques for measuring properties of optical fibers, including:
- Attenuation measurement using the cut-back method to determine loss per unit length.
- Absorption loss measurement using a temperature measurement setup to separate out absorption contributions.
- Scattering loss measurement using a scattering cell to compare scattered and total power.
- Dispersion measurement using either time domain analysis of broadened pulses or frequency domain analysis of the fiber's transmission spectrum.
- Refractive index profile measurement using interferometry of a fiber slice or near-field scanning of light intensities.
Development of Compact P-Band Vector Reflectometer IJECEIAES
A compact and low cost portable vector reflectometer is designed for a reliable measurement of reflection coefficient, S . This reflectometer focuses on return loss measurement of frequency ranges from 450 MHz to 550 MHz. The detection of magnitude and phase is based on the utilization of surface mount Analog Devices AD8302 gain/phase detector. The data acquisition is controlled by using Arduino-Nano 3.0 microcontroller, with the use of two analog to digital converter (ADC) and a digital to analog converter (DAC). One port (Open, short and matched load) calibration technique is used to eliminate systematic errors prior to data acquisition. The evaluation of the reflectometer is done by comparing the result of the measurement to that of vector network analyzer. 11
This document discusses various techniques for measuring key optical fiber parameters. It describes methods for measuring total fiber attenuation using cut-back or substitution techniques. It also outlines approaches for measuring specific loss mechanisms like absorption and scattering loss. Methods covered for other fiber characteristics include dispersion measurement in time or frequency domains, refractive index profiling using interferometry or near-field scanning, numerical aperture determination, and diameter measurement of the fiber core and outer dimensions.
This document proposes a design for an optical transceiver section for wireless sensor nodes that uses visible light communication instead of radio frequency. It describes a cubic transceiver section with light emitting diodes and photodiodes arranged on the faces to allow full duplex communication in all directions. The design uses low-power, high-luminosity LEDs and analyzes the optical link between nodes to ensure sufficient illumination and a signal-to-noise ratio above the threshold for error-free communication. Simulation results show the design provides over 450 lux of illumination across an indoor area and a signal-to-noise ratio above 19.6 dB between nodes up to 2 meters apart.
Medical Physics Imaging PET CT SPECT CT LectureShahid Younas
The document discusses attenuation correction techniques in single photon emission computed tomography (SPECT). It describes how attenuation causes decreased counts from activity deeper in the body, leading to apparent decreases in activity toward image centers. It covers methods to correct for this, including using transmission scans to create attenuation maps for use in reconstruction. It also addresses other factors like spatial resolution, magnification, center of rotation alignment, and uniformity and techniques to evaluate and correct them.
The aim of this paper is to present the essential elements of the electro-optical imaging system EOIS for space applications and how these elements can affect its function. After designing a spacecraft for low orbiting missions during day time, the design of an electro-imaging system becomes an important part in the satellite because the satellite will be able to take images of the regions of interest. An example of an electro-optical satellite imaging system will be presented through this paper where some restrictions have to be considered during the design process. Based on the optics principals and ray tracing techniques the dimensions of lenses and CCD (Charge Coupled Device) detector are changed matching the physical satellite requirements. However, many experiments were done in the physics lab to prove that the resizing of the electro optical elements of the imaging system does not affect the imaging mission configuration. The procedures used to measure the field of view and ground resolution will be discussed through this work. Examples of satellite images will be illustrated to show the ground resolution effects.
This document summarizes research on modeling optical scattering in the Laser Interferometer Gravitational-Wave Observatory (LIGO) using the Bidirectional Reflectance Distribution Function (BRDF). LIGO contains laser interferometers with test masses that scatter a small amount of light due to imperfections. The BRDF was measured for LIGO's test masses and compared to scattering models. It was found that scattering from the installed test masses was larger than models predicted, likely due to alterations during installation. More work is needed to update models to match the conditions of the installed test masses.
A combined spectrum sensing method based DCT for cognitive radio system IJECEIAES
In this paper a new hybrid blind spectrum sensing method is proposed. The method is designed to enhance the detection performance of Conventional Energy Detector (CED) through combining it with a proposed sensing module based on Discrete Cosine Transform (DCT) coefficient’s relationship as operation mode at low Signal to Noise Ratio (SNR) values. In the proposed sensing module a certain factor called Average Ratio (AR) represent the ratio of energy in DCT coefficients is utilized to identify the presence of the Primary User (PU) signal. The simulation results show that the proposed method improves PU detection especially at low SNR values.
Timing-pulse measurement and detector calibration for the OsteoQuant®.Binu Enchakalody
The document describes calibration procedures for an OsteoQuant pQCT scanner. It discusses:
1) Measuring motor and detector timing pulses using a USB counter to synchronize data collection with position. Measurements were accurate to within 0.13%.
2) Correcting for detector dead time using a polynomial model to linearize photon counts versus tube current data. Corrections were stable to within 0.5% error.
3) Correcting for beam hardening effects using polynomial and bimodal energy models to linearize projection values with absorber thickness. A secondary correction further improved stability of different-date corrections to below 1% error.
ANALYTICAL PERFORMANCE EVALUATION OF A MIMO FSO COMMUNICATION SYSTEM WITH DIR...optljjournal
MIMO FSO correspondence is examined as of late to build up a hearty correspondence connects within the sight of atmospheric turbulence. In this paper an analytical approach is developed to assess the impact of atmospheric turbulence on BER performance of a MIMO FSO communication system with Q-ary Pulse Position Modulation (QPPM). Examination is exhibited to discover flag to clamor proportion at the yield of an immediate location collector with optical power modulator under strong turbulent condition which is modeled as gamma-gamma distribution. The outcomes demonstrate that the BER performance is emphatically debased because of the impact of atmospheric turbulence. In any case, the execution can be enhanced by expanding the quantity of transmitters, beneficiaries and request of Q in PPM. Results demonstrate that the FSO MIMO framework with M=8, N=4 Q=4 gives the 22 dB improvement at BER of 10-9 .
This document discusses modeling photodetector sensitivity in OptiSystem. It provides four examples of setting up and measuring receiver sensitivity for different photodetector configurations: 1) an ideal PIN photodetector limited by shot noise, 2) a PIN photodetector limited by thermal noise, 3) an APD photodetector with both thermal and shot noise, and 4) a PIN photodetector with optical pre-amplification where the dominant noise is signal-ASE beat noise. For each example, the document outlines the system parameters and reference, and how OptiSystem can be used to analyze eye diagrams and calculate sensitivity metrics like bit error rate.
ANALYTICAL PERFORMANCE EVALUATION OF A MIMO FSO COMMUNICATION SYSTEM WITH DIR...optljjournal
MIMO FSO correspondence is examined as of late to build up a hearty correspondence connects within the
sight of atmospheric turbulence. In this paper an analytical approach is developed to assess the impact of
atmospheric turbulence on BER performance of a MIMO FSO communication system with Q-ary Pulse
Position Modulation (QPPM). Examination is exhibited to discover flag to clamor proportion at the yield
of an immediate location collector with optical power modulator under strong turbulent condition which is
modeled as gamma-gamma distribution. The outcomes demonstrate that the BER performance is
emphatically debased because of the impact of atmospheric turbulence
Similar to Modeling of Optical Scattering in Advanced LIGO (20)
ANALYTICAL PERFORMANCE EVALUATION OF A MIMO FSO COMMUNICATION SYSTEM WITH DIR...
Modeling of Optical Scattering in Advanced LIGO
Modeling of Optical Scattering in Advanced LIGO
Hunter Rew, Mentor: Joseph Betzwieser
November 3, 2014
Abstract
In an interferometer with quantum limited sensitivity such as LIGO, light which scatters from
an optic can introduce noise in the phase measurement at the antisymmetric port, as well as
become a significant source of power loss. By measuring the power seen by a camera or photodiode
at a well-defined position outside of the beam path, one can use the incident light to model
the total amount of light scattered from the optic. We have used the bidirectional
reflectance distribution function on data obtained from photodiodes along the beam tube baffles
to model scattering from LIGO's test masses during each alignment since the first transmission
of Advanced LIGO. This data acquisition and analysis has been automated for ease of future
analysis. We have also experimentally determined 8 and 12 bit monochromatic count-to-watt
conversion factors for the Basler Ace 100gm cameras, currently used to monitor light within the
interferometer, so that archived images may be used as a data source for the model.
LIGO-P1400197 1
1 Introduction
Figure 1: Diagram of a laser interferometer, showing the laser, beam splitter, two light storage arms each bounded by test masses, and the photodetector.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) contains two Fabry-Perot cavities
as seen in Figure 1 (labeled "light storage arms"). As light reflects between two test masses, a small
percentage is scattered in an unintended direction due to surface imperfections. This scattering
results in power loss and noise in the interferometer (IFO) which must be either accounted for or
corrected [6]. Measuring optical scattering requires a power measurement of the light scattered to
a known area, distance, and angle. This can be accomplished directly either through the use of a
camera with a known conversion from power incident on the lens to intensity of light in the image
or with a photodiode (PD). The scatter is measured by the Bidirectional Reflectance Distribution
Function (BRDF) and is given by [7]:
BRDF = Ps / (Ω × Pi × cos(θs))    (1)
It is a measure of the ratio of power which is scattered (Ps) per solid angle (Ω) to power incident
(Pi) on the reflecting surface. The solid angle is calculated as:
Ω = A / L²    (2)
Where A is the area of the surface and L is the distance from the point of scatter. This is a measure
of the relative size of the area as seen from the point of scatter. Figure 2 shows how light reflects
from an imperfect surface, as well as the scattered beams that the BRDF represents.
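Equations 1 and 2 can be sketched in a few lines of plain Python. This is an illustrative calculation only; the function names and the numerical values are my own, not taken from the LIGO analysis code:

```python
import math

def solid_angle(area_m2, distance_m):
    # Equation 2 (with the distance squared): the apparent size of the
    # detector area A as seen from the point of scatter at distance L.
    return area_m2 / distance_m**2

def brdf(p_scattered_w, p_incident_w, area_m2, distance_m, theta_s_rad):
    # Equation 1: BRDF = Ps / (Omega * Pi * cos(theta_s)).
    omega = solid_angle(area_m2, distance_m)
    return p_scattered_w / (omega * p_incident_w * math.cos(theta_s_rad))

# Illustrative numbers: 1 nW scattered onto a 1 cm^2 photodiode 4 km
# down the arm, with 10 W incident and a 0.5 mrad scattering angle.
print(brdf(1e-9, 10.0, 1e-4, 4e3, 5e-4))
```

Because the solid angle subtended by a distant photodiode is tiny, even nanowatt-level scattered power corresponds to a sizable BRDF.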
Figure 2: How light scatters from an imperfect surface [1].
2 Camera CCD Calibrations
There are cameras attached to viewports throughout the IFO aimed at specific optics such as the input
mode cleaner (IMC), input test masses (ITMs), and end test masses (ETMs). One can determine
where light is being scattered through analysis of the images from these cameras; however, the images
alone cannot tell how much power that light carries. To calculate this, one must know how the camera's
CCD converts incident light to a pixel value at a given exposure. Then a conversion factor can be
written in the form

Power(W) × Exposure(µs) / Intensity = constant    (3)

where Intensity is the sum of pixel values. Thus, the power incident on the CCD can be calculated
as some constant multiplied by the total light intensity per microsecond of exposure.
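The conversion in Equation 3 amounts to two one-line functions. A minimal sketch follows, assuming illustrative values; the helper names and the example pixel sum are mine, not part of the calibration procedure:

```python
def calibration_constant(power_w, exposure_us, pixel_sum):
    # Equation 3: Power(W) x Exposure(us) / Intensity = constant,
    # where Intensity (pixel_sum) is the sum of all pixel values.
    return power_w * exposure_us / pixel_sum

def power_from_image(pixel_sum, exposure_us, constant):
    # Invert Equation 3 to recover the power incident on the CCD
    # from an archived image and its exposure time.
    return constant * pixel_sum / exposure_us

# Illustrative example: 8.2 uW incident at a 4 us exposure, with a
# made-up pixel sum. Recovering the power from the same image
# returns the input value, confirming the round trip.
c = calibration_constant(8.2e-6, 4.0, 33000.0)
print(power_from_image(33000.0, 4.0, c))
```

Once the constant is known for a camera, `power_from_image` is what lets archived monitoring images serve as a power data source for the scatter model.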
3 Calibration Measurement Equipment and Procedures
To begin, we installed a camera facing the Y-arm ETM with a beamsplitter to reflect 50% of the
incoming light towards a PDA100A, connected to channel L1:LSC-Y EXTRA AI 2, in order to measure
the power reaching the CCD [2]. This, however, had inherent problems, since the area of the
photodetector (PD) is much (roughly 7 times) larger than the area of the CCD. In the end, this
merely gave us an upper limit on the incident power, as we knew the PD was receiving at least as
much light as the CCD, but could potentially receive several times more.

Our experimental setup is illustrated in Figures 3 and 4. When the beam reaches the first lens,
it diverges to spread out the power. A pinhole-sized beam is then allowed through the iris before
reaching the converging lens, where it is focused to a point on both the CCD and PD. We used
this setup starting at a 4 µs exposure, then increased the exposure in intervals until the image was
nearly saturated. At that point we would decrease the power supplied to the laser in order to continue
to longer exposures. Once the laser's power couldn't be reduced further, the ND filter was added
to reduce the power to the CCD and PD by a factor of 10. With this method we were able to span
from 4 to 1200 µs, at powers from 8.2 to 1.6 µW. Each power reading carried an uncertainty of
±100 nW, which also limited how far we could reduce the power. 12 bit and 8 bit monochromatic
data were taken for each exposure and power setting.
Figure 3: Infrared laser with two mirrors to control its path towards the camera.
We built a new setup in the optics lab, as shown in Figures 3 and 4, consisting of the following:
• Innolight Prometheus 50NE laser
• Innolight Prometheus line power supply (not pictured)
• 2 infrared mirrors
• 30 mm diverging IR lens
• 65 mm converging IR lens
• lever-controlled iris
• NE10A absorptive ND filter
• Thor Labs CM1-BS015 beam splitter
• Ophir NOVA laser power meter
• Basler acA640-100gm camera
Figure 4: From front to back: camera, power meter, beamsplitter, 65 mm lens, iris, −30 mm lens,
ND filter.
4 Calibration Results and Analysis
Figure 5: 8-bit calibrations vs exposure on a logarithmic scale. Fluctuations in the calibration
at lower exposures could be the result of noise, as the sum of the pixel values is smaller
relative to the noise.
Because the ratio between exposure and pixel sum is constant at a given power, we were able to
remove noise from each image by choosing a reference image at each power and subtracting it from
all other images at that power. A resulting image then represents an exposure equal to the
difference of the exposures of the two images it came from, with less noise than either. Plots of
the 12-bit and 8-bit calibrations, calculated from Equation 3, are given in Figures 5 and 6. For
the 8-bit images, the mean calibration is 9.8 × 10^−10 W·µs·p_i^−1, where p_i is the sum of the
pixel values, or intensities. The standard deviation is 0.5 × 10^−10 W·µs·p_i^−1. This is within
the upper limit of the values measured by our camera and PDA100A, (1.4 ± 0.2) × 10^−9
W·µs·p_i^−1, where the uncertainty is the total fluctuation seen across all measurements. The
mean calibration for the 12-bit images is 5.9 × 10^−11 W·µs·p_i^−1 with a standard deviation of
0.3 × 10^−11 W·µs·p_i^−1, roughly 16 times smaller than the 8-bit calibration, as would be
expected. Error bars in the plots are presented as 20% due to differences in readings from
different power meter heads.
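Equation 3 itself is not reproduced in this excerpt, but the quoted units of W·µs·p_i^−1 imply a calibration of the form C = P × t / Σp_i. The reference-image subtraction and calibration steps can be sketched as follows; the function names are illustrative, not taken from the actual analysis code:

```python
import numpy as np

def denoise_against_reference(images, exposures, ref_index=0):
    """Subtract a reference image (taken at the same power) from each image.

    Each difference image corresponds to an effective exposure equal to the
    difference of the two exposures, with less common-mode noise than either.
    Returns a list of (difference_image, effective_exposure) pairs.
    """
    ref_img = images[ref_index]
    ref_exp = exposures[ref_index]
    return [(img - ref_img, exp - ref_exp)
            for i, (img, exp) in enumerate(zip(images, exposures))
            if i != ref_index]

def calibration(power_w, exposure_us, image):
    """Calibration factor in W * us per unit pixel sum (assumed form of Eq. 3)."""
    pixel_sum = float(np.sum(image))
    return power_w * exposure_us / pixel_sum
```

With this form, incident power is recovered from any image as P = C × Σp_i / t, which is how the calibration is applied to the ETMY image in Section 8.1.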
Figure 6: 12-bit calibrations vs exposure on a logarithmic scale.
5 Baffle Photodiodes
Advanced LIGO contains structures called baffles between the test masses designed to block light
scattered at specific angles from recombining with the main beam. There are 16 PDs within the
beam tube baffles, 4 on each baffle, in order to aid in the initial alignment of the IFO as well as
measure light scattered from TMs during alignment. The PDs are located around each baffle hole,
directly in front of each TM within the Fabry-Perot cavity. A reading on a PD during a full lock
gives the scattered power, Ps, which is calculated as:

Ps = Vs / (R × T × G)    (4)

where Vs is the voltage induced on the PD, R is the responsivity of the PD in A/W, T is the
transimpedance of the PD in Ohms, and G is the dimensionless gain of the PD. Knowing the
scattered power, the solid angle of the PD, Ω, its angle relative to the incident beam, θs, and
the power of the incident beam, Pi, one can calculate the BRDF from Equation 1.
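Equation 1 is not shown in this excerpt, but its form can be inferred from the denominators of Equation 7 as BRDF = Ps / (Ω × Pi × cos θs). A minimal sketch of both steps, under that assumption:

```python
import math

def scattered_power(v_s, responsivity, transimpedance, gain):
    """Equation 4: P_s = V_s / (R * T * G)."""
    return v_s / (responsivity * transimpedance * gain)

def brdf(p_s, p_i, solid_angle, theta_s):
    """BRDF = P_s / (Omega * P_i * cos(theta_s)).

    Assumed form of Equation 1, inferred from the terms of Equation 7.
    """
    return p_s / (solid_angle * p_i * math.cos(theta_s))
```

For example, with the values quoted in Section 6.2 (R = 0.25 A/W, T = 20 kΩ, unity gain), a 1 V reading corresponds to 200 µW of scattered power.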
6 Baffle Photodiode Data Acquisition Procedures
6.1 Power Scattered to Photodiodes
PD voltages are taken from channels of the form L1:AOS-*TM*_BAFFLEPD*_VOLTS, where *TM* is the
optic the baffle is attached to (ITMX, ETMX, ITMY, or ETMY) and the second * can have a value of
1, 2, 3, or 4, indicating the PD. These channels output Volts. The power scattered to the PD, Ps,
is calculated from the median voltage on the PD as shown in Equation 4. Gain is taken from
channels of the form L1:AOS-*TM*_BAFFLEPD*_GAIN. Transimpedance is 20 kΩ, and responsivity is
0.25 ± 0.05 A/W [3]. Uncertainty in voltage for each lock is defined as the standard deviation of
voltages throughout the lock. Uncertainty in the power scattered to each PD is calculated as a
function of the uncertainties in responsivity and voltage as
(ΔPs)² = (1 / (R × T × G))² (ΔVs)² + (Vs / (R² × T × G))² (ΔR)²    (5)
The voltage offset is taken as the channel value at least 8 minutes before each lock.
6.2 Power Build Up in the Fabry-Perot Cavities
Power incident on each optic is calculated from the median value of the L1:LSC-POP_A_LF_OUTPUT
channel during each lock. This channel is the pick-off from the power recycling cavity before the
beam splitter and outputs Watts. After the laser passes through the beamsplitter, 50% of this
power reaches each ITM. Power build-up in each Fabry-Perot cavity is calculated as [6]
Pcavity = PITM × 2F / π    (6)
where F = 416 is the cavity finesse. The offset is taken as the value of
L1:LSC-POP_A_LF_OFFSET, about 3.129 W. The uncertainty in cavity power for a lock is defined as
the standard deviation of values during the lock.
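The cavity build-up calculation can be sketched directly from Equation 6. Whether the offset is subtracted from the pick-off reading before halving is our assumption; the text quotes both values but does not spell out the order of operations:

```python
import math

def itm_power(pop_output_w, offset_w=3.129):
    """Half of the offset-corrected power-recycling pick-off reaches each ITM.

    The 3.129 W offset is quoted in the text; subtracting it before halving
    is an assumption about the channel handling.
    """
    return 0.5 * (pop_output_w - offset_w)

def cavity_power(p_itm, finesse=416.0):
    """Equation 6: power build-up in a Fabry-Perot cavity, P = P_ITM * 2F / pi."""
    return p_itm * 2.0 * finesse / math.pi
```

With F = 416, the build-up factor 2F/π is roughly 265, so a few watts at the ITM corresponds to the kilowatt-scale cavity powers seen in Figure 11.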
6.3 When Data is Taken
We've defined the IFO to be in full lock (more or less arbitrarily) when the normalized output of
the power recycling cavity, taken from the L1:LSC-POP_A_LF_NORM_MON channel, surpasses 20 for a
duration of at least 30 minutes. This yields 84 locks. Reducing the duration to 15 minutes
results in 155 locks with larger standard deviations.
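This lock-finding criterion amounts to scanning the channel's time series for contiguous stretches above the threshold that last long enough. A sketch of that scan, assuming uniformly sampled data (the actual channel sampling and data access are not described here):

```python
def find_locks(samples, sample_dt_s, threshold=20.0, min_duration_s=30 * 60):
    """Return (start_index, end_index) pairs for stretches where the
    normalized power-recycling output stays above `threshold` for at
    least `min_duration_s` seconds. `end_index` is exclusive.
    """
    locks = []
    start = None
    for i, v in enumerate(samples):
        if v > threshold and start is None:
            start = i                      # entering a candidate lock
        elif v <= threshold and start is not None:
            if (i - start) * sample_dt_s >= min_duration_s:
                locks.append((start, i))   # long enough: keep it
            start = None
    # handle a lock still in progress at the end of the data
    if start is not None and (len(samples) - start) * sample_dt_s >= min_duration_s:
        locks.append((start, len(samples)))
    return locks
```

Raising `min_duration_s` trades the number of locks against their stability, which is exactly the 84-vs-155 trade-off described above.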
6.4 Calculation of the BRDF
Solid angle is calculated from Equation 2 with A = 100 mm² [3] as the area of the PD and
L = 4000 mm as the approximate distance from the PD to the optic it is facing. Scatter angles are
calculated from the measurements given in DCC document D1200296-v5 [5]. Uncertainty in the BRDF
values is calculated as a function of the uncertainties in scattered power and incident power as:
(ΔBRDF)² = (1 / (Ω × Pi × cos θs))² (ΔPs)² + (Ps / (Ω × Pi² × cos θs))² (ΔPi)²    (7)
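Equations 5 and 7 are standard quadrature propagation of the dominant uncertainties. Both can be written out directly:

```python
import math

def power_uncertainty(v_s, dv_s, r, dr, t, g):
    """Equation 5: propagate voltage and responsivity uncertainties into P_s."""
    term_v = (dv_s / (r * t * g)) ** 2          # voltage contribution
    term_r = (v_s * dr / (r ** 2 * t * g)) ** 2  # responsivity contribution
    return math.sqrt(term_v + term_r)

def brdf_uncertainty(p_s, dp_s, p_i, dp_i, omega, theta_s):
    """Equation 7: propagate scattered- and incident-power uncertainties."""
    c = math.cos(theta_s)
    term_s = (dp_s / (omega * p_i * c)) ** 2            # scattered-power term
    term_i = (p_s * dp_i / (omega * p_i ** 2 * c)) ** 2  # incident-power term
    return math.sqrt(term_s + term_i)
```

With the quoted 20% responsivity uncertainty (0.05 A/W on 0.25 A/W), the responsivity term typically dominates Equation 5 unless the voltage fluctuations within a lock are large.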
Figure 10: ITMY BRDFs and distribution by lock. For each PD, the left panel plots the calculated
BRDF with ± uncertainty against lock number, and the right panel shows the distribution of values
across locks. PD1: mean 1160 ± 40, std 681, stde 74. PD2: mean 8.0 ± 0.4, std 3.3, stde 0.37.
PD3: mean 14.5 ± 0.6, std 10.7, stde 1.19. PD4: mean 170 ± 20, std 75, stde 8.6.
Figures 7 through 10 show the BRDF values, calculated from Equation 1 with uncertainties from
Equation 7, for each of the 84 locks in the left columns, and the distribution of values across
all locks in the right columns. Standard deviations reach as high as 108% (Figure 9: ETMY PD2),
while uncertainties in nearly all measurements remain below 20%. This could indicate systematic
error or an unknown noise source. Figure 11 shows the cavity power for each arm during lock,
along with uncertainties. Sudden peaks and troughs in this data tend to correlate with major
fluctuations in the BRDFs, indicating that the PD voltages did not change with the power as
expected. This could mean that these changes in power are not actually occurring.
Figure 11: Cavity power (W) for each lock, as calculated from the power recycling pick-off, with
± uncertainties and the median during full lock.
8 Modeling of Total Optical Scatter
8.1 Analysis of ETMY Image
Figure 12: Zoom-in of a particle seen on ETMY [4].
Figure 12 shows scatter from a particle on ETMY captured by a Basler ACE 100gm camera installed
in a viewport of the IFO. The camera is roughly 5.8 m from ETMY at an angle of 10°, the lens
diameter is 45 mm, and the image was taken at an exposure of 4 µs [4]. From Equation 2, this
gives a solid angle of 0.001 steradians. Using the 8-bit calibration factor of 1 × 10^−9
W·µs·p_i^−1, 1.3 × 10^−6 W are incident on the lens. Taking the cavity power to be 2500 W and
plugging these values into Equation 1, the BRDF is 1.1 × 10^−5 sr^−1. Assuming 10° is a large
enough angle to approximate the BRDF as constant, the total loss at that angle is simply given by
π × BRDF [4]. This yields 3.6 × 10^−5, or 36 ppm.
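The chain of steps in this paragraph can be collected into one function. Equation 2's exact form is not shown in this excerpt, so Ω = A/L² (flat-detector approximation) is an assumption, as is the calibration application P = C × Σp_i / t:

```python
import math

def image_brdf_loss(pixel_sum, exposure_us, calib, lens_diameter_m,
                    distance_m, angle_rad, cavity_power_w):
    """Estimate BRDF and total loss from a camera image of a scatterer.

    Follows Section 8.1: incident power from the pixel-sum calibration,
    lens solid angle assumed as Omega = A / L^2, BRDF from the assumed
    form of Equation 1, and loss = pi * BRDF for a near-constant BRDF.
    """
    power_w = calib * pixel_sum / exposure_us          # incident power on lens
    lens_area = math.pi * (lens_diameter_m / 2.0) ** 2  # circular aperture
    omega = lens_area / distance_m ** 2                 # small-angle solid angle
    brdf = power_w / (omega * cavity_power_w * math.cos(angle_rad))
    loss = math.pi * brdf                               # Lambertian-like total loss
    return brdf, loss
```

Note this is a sketch rather than a reproduction of the quoted numbers; in particular the 0.001 sr solid angle in the text does not follow from the flat A/L² approximation alone, so the actual Equation 2 may differ.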
8.2 Stationary Interferometer Simulations
The path and power distribution (W·m^−2) of the light in the IFO can be modeled using the Static
Interferometer Simulation (SIS) package, which uses phase maps of the TMs taken before
installation.
Figure 13: Model of power scattered to ITMY (looking towards the ITM); power distribution on
ITMY (clean) vs distance from optic center (m).
Figure 14: 3-dimensional view of the model of power scattered to ITMY.
Figure 13 shows a head-on view of this power distribution at ITMY on a log scale. The peaks and
troughs in the scattered power suggest that the power measured by a PD could depend heavily on
the alignment of the TMs. This might explain the large standard deviations seen in the PD BRDF
measurements in Figures 7 through 10.
8.3 Comparison of Measured Data to the Model
Figure 15 shows how the measurements compare to the SIS modeled values for scatter towards ITMY.
There are two important points to note from this plot:
• The measurements are comparable in magnitude with the model
• Scatter from the installed TMs is larger than the model predicts at all angles
The latter indicates that the fine structure of the TMs was altered during installation, likely from
accumulation of particles.
Figure 15: Measured and modeled fraction of total power scattered to the PDs, as a function of
scatter angle (radians), for ITMY.
In principle, the phase maps used by SIS can be altered to include any changes to the TM surfaces
since installation. This, however, has proven very difficult to achieve, and work is ongoing.
Upon producing this updated model, one could integrate the power distribution across all angles
to calculate the total power loss in each cavity, as well as where the power is going.
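The integration step described above amounts to integrating the BRDF over solid angle. Assuming azimuthal symmetry, loss = ∫ BRDF(θ) cos θ · 2π sin θ dθ; a trapezoidal-rule sketch (the actual SIS output format is not shown here):

```python
import math

def total_scatter_loss(angles_rad, brdf_values):
    """Integrate an azimuthally symmetric BRDF over solid angle to get the
    total fractional power loss, using the trapezoidal rule:

        loss = integral BRDF(theta) * cos(theta) * 2*pi*sin(theta) dtheta
    """
    integrand = [b * math.cos(t) * 2.0 * math.pi * math.sin(t)
                 for t, b in zip(angles_rad, brdf_values)]
    total = 0.0
    for i in range(1, len(angles_rad)):
        dt = angles_rad[i] - angles_rad[i - 1]
        total += 0.5 * (integrand[i] + integrand[i - 1]) * dt
    return total
```

As a sanity check, a perfectly Lambertian surface with BRDF = 1/π integrated over the full hemisphere gives a loss of exactly 1, i.e. all power scattered.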
9 Conclusions and Future Work
Due to the sheer number of samples and the low uncertainties in our measurements, we are
confident that our findings reflect the true mean BRDFs for each PD. We have begun comparing
these measurements to models of optical scatter in the Fabry-Perot cavities based on phase maps
of the clean TMs, using SIS. We have found that our measurements from the installed TMs are
reasonable compared to the model. Our goal is to add modifications to these phase maps in an
attempt to duplicate the particles that have accumulated on the TMs since installation. This will
provide a model for the total scattering as it is today.
10 References
[1] BSDF. http://en.wikipedia.org/wiki/File:BSDF05_800.png, 2006. File: BSDF05_800.png, GNU Free
Documentation License.
[2] Joseph Betzwieser and Hunter Rew. Added temporary GigE camera with power meter.
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=13151. LLO Logbook, Accessed: 07-07-2014.
[3] EG&G Photon Devices. YAG Series.
[4] Hunter Rew, Joseph Betzwieser, and David Feldbaum. Analysis of ETMY scatter.
https://alog.ligo-la.caltech.edu/aLOG/index.php?callRep=13414. LLO Logbook, Accessed: 07-07-2014.
[5] Manuel Ruiz and Tim Nguyen. ACB 1 HOLE RIGHT QPD SKIN (w PD). LIGO DCC, October 2012.
[6] Peter R. Saulson. Fundamentals of Interferometric Gravitational Wave Detectors. World
Scientific Publishing, 1994.
[7] John C. Stover. Optical Scattering: Measurement and Analysis. The International Society for
Optical Engineering, second edition, 1995.
11 Acknowledgments
This research was conducted under Caltech's Summer Undergraduate Research Fellowship program at
the LIGO Livingston Observatory and was funded by the National Science Foundation. Special thanks
to Hiro Yamamoto of Caltech for his work on SIS and help with utilizing it for this research. I'd
also like to thank my mentor, Joe, for constructing this project, as well as everyone at LLO who
guided me through it.