“Trade-Off between Detection and Resolution of Two Point Objects Under Various Conditions of Imaging Situations: Part-I: Mathematical Formulation of the Problem”
It is a well-known fact that whenever one tries to detect a weak point object in the vicinity of an intense point object, for example the binary star Sirius and its faint companion, there is a loss of resolution in the optical system. In other words, if one wants to improve the detectability of the system, there is always a loss in its resolving capability. Thus, there is a trade-off between detection and resolution in optical systems under various imaging situations. In this first paper of our discussion of this trade-off, we derive the Fourier-analytical formulation of the problem. This formulation will be used to find a compatible trade-off between detection and resolution in our subsequent publications.
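As a generic sketch of the two-point model underlying this trade-off (our illustration; the paper's own Fourier formulation may differ), the incoherent image of a bright point and a faint companion of relative intensity \(\alpha\), separated by \(2b\) and imaged through a diffraction-limited aperture, is

\[
I(x) \;=\; \bigl|h(x+b)\bigr|^{2} \;+\; \alpha\,\bigl|h(x-b)\bigr|^{2},
\qquad
h(x) \;=\; \frac{2J_{1}(\pi x)}{\pi x},
\]

where detection of the companion is governed by \(\alpha\) and resolution by the separation \(2b\): broadening the effective response to suppress noise (better detection) widens \(|h|^{2}\) and washes out the dip between the two peaks (worse resolution).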
The document discusses using a probabilistic neural network (PNN) to analyze seismic data and well logs to identify physical attributes. It describes the layers and processing of the PNN model, along with examples of preprocessing seismic data and attributes to train the PNN to accurately predict properties such as porosity and hydrocarbon volume. The PNN is trained on normalized seismic-attribute data and well logs, then applied to the full 3D seismic volume to generate property predictions across the area.
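A minimal sketch of the Parzen-window regression at the heart of such a PNN-style attribute-to-property mapping (a toy illustration; the document's network layers and kernel widths are not specified here):

```python
import numpy as np

def pnn_predict(train_attrs, train_props, test_attrs, sigma=1.0):
    """Parzen-window (PNN/GRNN-style) prediction of a well-log property
    from normalized seismic attributes; sigma is the kernel width."""
    preds = []
    for x in test_attrs:
        d2 = np.sum((train_attrs - x) ** 2, axis=1)   # squared distances to training samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.sum(w * train_props) / np.sum(w))
    return np.array(preds)

# toy usage: 3 attributes per trace sample, porosity as the target
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
phi = A @ [0.1, -0.05, 0.02] + 0.2
print(pnn_predict(A, phi, A[:5]))
```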
Liu: Natural Scene Statistics at Stereo Fixations (Kalle)
We conducted eye-tracking experiments on naturalistic stereo images presented through a haploscope and found that luminance contrast and luminance gradient at fixated locations were generally higher than at randomly selected locations, in agreement with previous literature. However, we also found that disparity contrast and disparity gradient at fixated locations were generally lower than at randomly selected locations. We discuss the implications of this remarkable result.
Blind Source Separation of Super- and Sub-Gaussian Signals with ABC Algorithm (IDES Editor)
Recently, several techniques have been presented for blind source separation using linear or nonlinear mixture models. The problem is to recover the original source signals without a priori information about the mixture model. Accordingly, several statistics- and information-theory-based objective functions have been used in the literature to estimate the original signals without specifying the mixture model, and swarm intelligence has played a major role in estimating the separating matrix. In our work, we consider a recent optimization algorithm, the Artificial Bee Colony (ABC) algorithm, to generate the separating matrix in an optimal way. The employed-bee, onlooker-bee, and scout-bee phases are used to generate the optimal separating matrix in fewer iterations. New solutions are generated according to three major considerations: 1) all elements of the separating matrix are updated according to the best solution; 2) individual elements of the separating matrix are updated so as to converge to the best solution; and 3) random solutions are added. These three considerations are implemented in the ABC algorithm to improve performance in blind source separation (BSS). Experiments were carried out using speech signals and super- and sub-Gaussian signals to validate the performance, and the proposed technique was compared with a genetic algorithm for signal separation. From the results, it was observed that the ABC technique outperformed the existing GA technique, achieving better fitness values and a smaller Euclidean distance.
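To make the search concrete, here is a toy sketch of ABC-style optimization of a 2x2 separating matrix, using the total absolute kurtosis of the outputs as a stand-in fitness (the paper's exact objective and phase details may differ):

```python
import numpy as np

def kurt(x):
    x = (x - x.mean()) / x.std()
    return np.mean(x ** 4) - 3.0

def fitness(W, X):
    # higher total non-Gaussianity of the outputs => better separation (a common BSS proxy)
    return sum(abs(kurt(y)) for y in W @ X)

def abc_bss(X, n_food=20, iters=200, limit=20, seed=0):
    """Simplified ABC search for a 2x2 separating matrix: greedy neighbor
    moves (employed/onlooker phases merged) plus a scout phase that
    re-seeds stale food sources."""
    rng = np.random.default_rng(seed)
    food = rng.uniform(-1, 1, (n_food, 4))
    trials = np.zeros(n_food)
    fit = np.array([fitness(f.reshape(2, 2), X) for f in food])
    for _ in range(iters):
        for i in range(n_food):
            k = rng.integers(n_food)
            cand = food[i] + rng.uniform(-1, 1, 4) * (food[i] - food[k])
            fc = fitness(cand.reshape(2, 2), X)
            if fc > fit[i]:
                food[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        j = np.argmax(trials)                       # scout: abandon a stale source
        if trials[j] > limit:
            food[j] = rng.uniform(-1, 1, 4)
            fit[j] = fitness(food[j].reshape(2, 2), X)
            trials[j] = 0
    return food[np.argmax(fit)].reshape(2, 2)

t = np.linspace(0, 1, 4000)
S = np.vstack([np.sign(np.sin(23 * t)),                          # sub-Gaussian source
               np.random.default_rng(1).laplace(size=t.size)])   # super-Gaussian source
X = np.array([[0.7, 0.3], [0.4, 0.6]]) @ S                       # unknown mixing
Y = abc_bss(X) @ X    # estimated sources, up to scale and permutation
```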
The document discusses image restoration techniques for removing blur and noise from photographs. It begins by defining different types of degradation, such as defocus blur, motion blur, and noise. It then describes how restoration aims to estimate the blurring function and undo its effects to recover the original sharp image. The document provides several examples of noise and of noise-reduction techniques, such as filtering methods for salt-and-pepper noise, Gaussian noise, and periodic noise. It emphasizes that the goal of restoration is to objectively reconstruct the original image by modeling and inverting the degradation process.
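As a concrete instance of modeling and inverting the degradation g = h*f + n, here is a textbook frequency-domain Wiener restoration sketch (the constant k standing in for the noise-to-signal power ratio is our simplification):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wiener_deconvolve(g, psf, k=0.01):
    """Wiener restoration F = conj(H) G / (|H|^2 + K) of the blur model
    g = h * f + n, with K a constant noise-to-signal ratio."""
    h = np.zeros_like(g, dtype=float)
    r, c = psf.shape
    h[:r, :c] = psf                                       # embed the PSF
    h = np.roll(h, (-(r // 2), -(c // 2)), axis=(0, 1))   # center it on (0, 0)
    H, G = np.fft.fft2(h), np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# toy usage: blur a square with a 5x5 box PSF, add noise, restore
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
g = uniform_filter(f, size=5) + 0.01 * np.random.default_rng(0).normal(size=f.shape)
f_hat = wiener_deconvolve(g, np.ones((5, 5)) / 25.0, k=1e-2)
```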
Effect of kernel size on Wiener and Gaussian image filtering (TELKOMNIKA JOURNAL)
In this paper, the effect of the kernel size of Wiener and Gaussian filters on their image-restoration quality is studied and analyzed. Four kernel sizes, namely 3x3, 5x5, 7x7 and 9x9, were simulated. Two types of zero-mean noise with several variances were used: Gaussian noise and speckle noise. Several image-quality indices were applied in the computer simulations; in particular, mean absolute error (MAE), mean square error (MSE) and the structural similarity (SSIM) index were used. Many images were tested in the simulations; the results for three of them are shown in this paper. The results show that the Gaussian filter outperforms the Wiener filter for all tested Gaussian and speckle noise variances, notably when the smallest kernel size is used. To obtain similar performance with Wiener filtering, a larger kernel size is required, which produces much more blur in the output image. The Wiener filter shows poor performance with the smallest kernel size (3x3), where the Gaussian filter shows its best results. With the Gaussian filter, results similar to those obtained at low noise variance could also be obtained at high noise variance, but with a larger kernel size.
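A short sketch of this kind of experiment, assuming SciPy and scikit-image, with an illustrative mapping from kernel size to Gaussian sigma (the paper's exact parameterization is not given):

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_filter
from skimage import data, img_as_float
from skimage.metrics import mean_squared_error, structural_similarity

clean = img_as_float(data.camera())
noisy = clean + np.random.default_rng(0).normal(0, 0.05, clean.shape)  # Gaussian noise

for k in (3, 5, 7, 9):                                   # the four kernel sizes studied
    for name, out in (("wiener", wiener(noisy, mysize=(k, k))),
                      ("gauss ", gaussian_filter(noisy, sigma=k / 6.0))):
        print(k, name,
              np.mean(np.abs(clean - out)),              # MAE
              mean_squared_error(clean, out),            # MSE
              structural_similarity(clean, out, data_range=1.0))  # SSIM
```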
Shift Invariant and Eigen Feature Based Image Fusion (ijcisjournal)
Image fusion is a technique for fusing multiple images to obtain a more informative and more accurate image than the input images. Image fusion has applications in biomedical imaging, remote sensing, pattern recognition, multi-focus image integration, and the modern military. The proposed methodology uses the benefits of the Stationary Wavelet Transform (SWT) and Principal Component Analysis (PCA) to fuse two images. The obtained results are compared with existing methodologies and show robustness in terms of entropy, Peak Signal-to-Noise Ratio (PSNR), and standard deviation.
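A compact sketch of one common SWT+PCA fusion rule, assuming PyWavelets: PCA weights the approximation bands and a max-absolute rule selects the detail coefficients (the paper's exact rule may differ):

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """Weights from the principal eigenvector of the 2x2 covariance
    of the two bands (the PCA step, as commonly formulated)."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def swt_pca_fuse(img1, img2, wavelet="db1"):
    """SWT+PCA fusion sketch; image sides must be even for level=1."""
    (cA1, d1), = pywt.swt2(img1, wavelet, level=1)
    (cA2, d2), = pywt.swt2(img2, wavelet, level=1)
    w = pca_weights(cA1, cA2)
    cA = w[0] * cA1 + w[1] * cA2                       # PCA-weighted approximation
    details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)   # max-abs detail rule
                    for x, y in zip(d1, d2))
    return pywt.iswt2([(cA, details)], wavelet)
```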
RADAR images are strongly preferred for the analysis of geospatial information about the Earth's surface and for assessing environmental conditions. Radar images are captured by different remote sensors, and those images are combined to obtain complementary information. To collect radar images, SAR (Synthetic Aperture Radar) sensors are used; these are active sensors that can gather information day and night, unaffected by weather conditions. We discuss DCT- and DWT-based image fusion methods, which give a more informative fused image, and we compare performance parameters of the two methods to determine the superior technique.
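For comparison, a minimal DCT-domain fusion sketch using one common coefficient-selection rule (an illustration, not the authors' exact scheme):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(img1, img2):
    """DCT-domain fusion: keep the larger-magnitude coefficient,
    but blend the low-frequency (coarse) block by averaging."""
    C1 = dctn(img1, norm="ortho")
    C2 = dctn(img2, norm="ortho")
    F = np.where(np.abs(C1) >= np.abs(C2), C1, C2)   # max-abs selection
    F[:8, :8] = 0.5 * (C1[:8, :8] + C2[:8, :8])      # average coarse content
    return idctn(F, norm="ortho")
```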
A NOVEL APPROACH FOR SEGMENTATION OF SECTOR SCAN SONAR IMAGES USING ADAPTIVE ... (ijistjournal)
SAR and SAS images are perturbed by a multiplicative noise called speckle, due to the coherent nature of the scattering phenomenon. When the background of an image is uneven, fixed thresholding is not suitable for segmentation, and an adaptive thresholding method is needed. In this paper, a new adaptive thresholding method is proposed to reduce speckle noise while preserving the structural features and textural information of sector-scan SONAR (Sound Navigation and Ranging) images. Given the massive proliferation of SONAR images, the proposed method is very appealing for underwater applications; in fact, it is a pre-treatment required in any SONAR image analysis system. The results obtained from the proposed method were compared quantitatively and qualitatively with those of other speckle-reduction techniques, demonstrating its higher performance for speckle reduction in SONAR images.
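The core idea of adaptive (local) thresholding can be sketched as follows; the window size and offset here are illustrative, and the paper's speckle-aware rule is more elaborate:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, win=15, offset=0.02):
    """Each pixel is compared against the mean of its win x win
    neighborhood, so an uneven background no longer breaks the
    segmentation the way a single global threshold would."""
    local_mean = uniform_filter(img.astype(float), size=win)
    return img > (local_mean + offset)    # boolean foreground mask
```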
The document describes a method for estimating parameters of an imploded shell target based on analysis of x-ray radiographs. X-ray radiography is used to measure the target's optical thickness, related to density, by analyzing the shadow cast on a radiograph. A model treats the target as a spherical shell to estimate the inner radius, outer radius, and central optical thickness based on the radiograph's intensity profile. Measurement uncertainties are accounted for using a weighted least-squares method to minimize error and determine parameter estimate uncertainties from the model fit to the data. Choosing an optimal backlighter photon energy can provide a radiograph profile that yields the most precise parameter estimates.
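A sketch of the weighted least-squares step, with a hypothetical uniform-shell optical-thickness profile (the function shell_profile and its normalization are our stand-ins for the paper's model):

```python
import numpy as np
from scipy.optimize import curve_fit

def shell_profile(r, r_in, r_out, tau0):
    """Hypothetical optical thickness of a spherical shell seen at
    projected radius r: chord length through the shell, scaled by tau0."""
    chord = lambda R: np.sqrt(np.clip(R ** 2 - r ** 2, 0.0, None))
    return tau0 * (chord(r_out) - chord(r_in)) / (r_out - r_in)

r = np.linspace(0, 120, 60)
true = shell_profile(r, 40.0, 80.0, 1.5)
sigma = 0.05 + 0.02 * true                         # per-point measurement uncertainties
meas = true + np.random.default_rng(0).normal(0, sigma)

# weighted least squares: sigma weights the residuals; the covariance
# pcov yields the parameter-uncertainty estimates the abstract mentions
popt, pcov = curve_fit(shell_profile, r, meas, p0=[30, 90, 1],
                       sigma=sigma, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))
```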
iaetsd Image fusion of brain images using discrete wavelet transform (Iaetsd Iaetsd)
1) The document discusses using discrete wavelet transform to fuse MRI and CT brain images. This allows physicians to view soft tissue details from MRI and bone details from CT in a single fused image.
2) Discrete wavelet transform decomposes images into different frequency bands, allowing salient features like edges to be separated. It is proposed to fuse MRI and CT brain images using discrete wavelet transform to reduce noise and computational load compared to other methods.
3) Fusing the images provides advantages for physicians by having both soft tissue and bone details in a single image, reducing storage costs compared to viewing images separately.
IRJET: Brain Tumor Detection using Digital Image Processing (IRJET Journal)
This document discusses techniques for detecting brain tumors using digital image processing of MRI scans. It begins with an introduction to brain anatomy and tumors. The methodology section then outlines the steps used: 1) Preprocessing images using median filtering to reduce noise, 2) Segmenting images using techniques like k-means clustering, fuzzy c-means, and watershed to separate tumor regions, 3) Extracting features from segmented regions, and 4) Classifying images using the features to detect the presence of tumors. The goal is to develop an automated system to help doctors diagnose brain tumors more accurately from MRI scans.
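A minimal sketch of the preprocessing-plus-segmentation portion of this pipeline, assuming SciPy and scikit-learn (feature extraction and classification are omitted):

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def segment_mri(slice_2d, n_clusters=3):
    """Median-filter denoising followed by k-means intensity clustering;
    the brightest cluster is taken as the candidate tumor region
    (a simplification of the full method)."""
    den = median_filter(slice_2d, size=3)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0) \
        .fit_predict(den.reshape(-1, 1)).reshape(den.shape)
    means = [den[labels == k].mean() for k in range(n_clusters)]
    return labels == int(np.argmax(means))    # boolean tumor mask
```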
Performance analysis of image filtering algorithms for MRI images (eSAT Publishing House)
This document analyzes the performance of three image filtering algorithms (median filter, Wiener filter, and center weighted median filter) at removing noise from MRI images. The algorithms are tested on MRI images corrupted with different noise types. The Wiener filter is found to reconstruct images with the highest quality according to measurements of mean square error and peak signal-to-noise ratio. The study concludes the Wiener filter provides the best denoising of MRI images compared to the other algorithms tested.
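For reference, the two quality measures cited are conventionally defined as

\[
\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j)-\hat{f}(i,j)\bigr)^{2},
\qquad
\mathrm{PSNR}=10\log_{10}\frac{L_{\max}^{2}}{\mathrm{MSE}},
\]

where \(f\) is the reference image, \(\hat{f}\) the filtered image, and \(L_{\max}\) the maximum pixel value (255 for 8-bit images); lower MSE and higher PSNR indicate better reconstruction.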
This article describes two experiments using single photons to determine the index of refraction and thickness of a microscope coverslip. In the first experiment, transmission of single photons through the coverslip at various angles is measured to determine the index of refraction by fitting the data to Fresnel equations. In the second experiment, photons pass through the coverslip in an interferometer to measure changes in optical path length, allowing the thickness to be calculated using the known index from the first experiment. The results from both single-photon experiments agree well with theoretical models.
Review of Classification algorithms for Brain MRI images (IRJET Journal)
1) The document reviews various classification algorithms that have been used to classify brain MRI images as normal or abnormal. It discusses techniques like decision trees, neural networks, fuzzy logic, and clustering that have been applied.
2) It provides examples of several studies that first performed preprocessing tasks like feature extraction on MRI images before applying classification algorithms like naive Bayes, decision trees, and probabilistic neural networks to classify images with accuracies ranging from 88% to 100%.
3) Boosting and ensemble techniques like combining multiple weak learners into a strong learner are mentioned as ways to improve classification accuracy and response times. The document concludes by surveying different algorithms and their performance on classifying brain tumor MRI images.
Novel adaptive filter (NAF) for impulse noise suppression from digital images (ijbbjournal)
In general, an adaptive filter adjusts its parameters iteratively, such as the size of the working window, the decision-threshold values used in two-stage detection-estimation-based switching filters, the number of iterations, etc. Nonlinear filters such as the median filter and its several variants are popular for their ability to deal with unknown circumstances. In this paper, an efficient and simple adaptive nonlinear filtering scheme is presented to eliminate impulse noise from digital images, with an impulse-noise detection and reduction scheme based on adaptive nonlinear filter techniques. The proposed scheme employs a dynamically varying working window based on image statistics and an adaptive threshold for noise detection, with restoration based on a Noise Exclusive Median (NEM). The intensity value of the NEM is derived from the processed pixels in the local neighborhood of a dynamically adaptive window. The use of an adaptive threshold derived from the noisy-image statistics yields more precise noisy-pixel detection. The proposed scheme is simple and can be implemented as either a single pass or a multi-pass with a maximum of three iterations and a simple stopping criterion. The goodness of the proposed scheme is evaluated using qualitative and quantitative measures obtained from MATLAB simulations on standard images corrupted with impulse noise of varying densities. The comparative analysis shows that the proposed scheme outperforms state-of-the-art schemes, particularly in cases of high-density impulse noise.
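A simplified sketch of the detect-then-restore idea with a noise-exclusive median; the fixed window and threshold here replace the paper's adaptive versions:

```python
import numpy as np

def nem_restore(img, win=3, thresh=40):
    """Flag pixels far from their local median as impulses, then replace
    each flagged pixel by the median of its non-flagged neighbors."""
    out = img.astype(float).copy()
    pad = win // 2
    padded = np.pad(out, pad, mode="reflect")
    flagged = np.zeros(out.shape, dtype=bool)
    for i in range(out.shape[0]):                 # detection pass
        for j in range(out.shape[1]):
            nb = padded[i:i + win, j:j + win]
            if abs(out[i, j] - np.median(nb)) > thresh:
                flagged[i, j] = True
    for i, j in zip(*np.nonzero(flagged)):        # noise-exclusive restoration
        sl = (slice(max(i - pad, 0), i + pad + 1),
              slice(max(j - pad, 0), j + pad + 1))
        good = out[sl][~flagged[sl]]
        out[i, j] = np.median(good) if good.size else np.median(out[sl])
    return out
```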
Design of an Optical Medium for the Simulation of Neutron Transport in Reacto... (David Konyndyk)
This document proposes designing an optical medium that can simulate neutron transport in nuclear reactor components for educational and analysis purposes. It begins with an overview of key neutron transport concepts like scattering, thermalization, and scattering angles. It then presents the experimental method, which involves using spherical glass microshells as scattering particles in resin and analyzing their properties via random walk simulations and theoretical scattering distributions. The document establishes quantitative relationships between the optical and neutron systems and lays groundwork for future experiments to test the optical medium's ability to accurately model spatial neutron distributions and energy deposition patterns in reactor components.
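A minimal isotropic random-walk sketch of the kind used to analyze such scattering media (the mean free path and the isotropic scattering law are illustrative assumptions):

```python
import numpy as np

def random_walk_path(mfp=1.0, n_scatters=50, seed=0):
    """Particle transport as a random walk: exponentially distributed
    free paths of mean `mfp`, uniformly random scattering directions."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(3)
    path = [pos.copy()]
    for _ in range(n_scatters):
        mu = rng.uniform(-1, 1)                   # cos(theta), isotropic
        phi = rng.uniform(0, 2 * np.pi)
        s = np.sqrt(1 - mu ** 2)
        step = rng.exponential(mfp) * np.array([s * np.cos(phi),
                                                s * np.sin(phi), mu])
        pos = pos + step
        path.append(pos.copy())
    return np.array(path)

print(np.linalg.norm(random_walk_path()[-1]))     # net displacement after 50 scatters
```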
This document discusses using smoothing filters based on rough set theory for medical image enhancement. It introduces common smoothing filters like mean, median, mode, and triangular filters. These filters can reduce noise and enhance edges in medical images. The document proposes a parallel rough set based model that implements multiple smoothing filters at once to obtain independent results and generate an enhanced mean image for improved medical image quality and complex image processing.
Super resolution imaging using frequency wavelets and three dimensional (IAEME Publication)
This document describes a method for achieving super resolution imaging using frequency wavelets and three-dimensional views with holographic technique. The method involves three main steps: 1) Registering low resolution images taken of the same specimen from different angles using digital holographic equipment to determine sub-pixel shifts between images; 2) Performing interpolation using frequency wavelet methods to increase resolution and add high frequency information; 3) Reconstructing a super resolution image by minimizing degradation from aliasing, noise, and blur. The resulting image has high resolving power, clarity, and a 3D view of the specimen.
Importance of Mean Shift in Remote Sensing Segmentation (IOSR Journals)
1) Mean shift is a non-parametric clustering technique that can segment remote sensing images into homogeneous regions without prior knowledge of the number of clusters or constraints on cluster shape.
2) The document presents a case study demonstrating mean shift can segment an image containing oil storage tanks into distinct regions faster than level set segmentation.
3) Mean shift is shown to be well-suited for remote sensing image segmentation tasks like forest mapping and land cover classification due to its ability to handle noise, gradients, and texture variations common in real-world images.
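A short sketch of mean-shift segmentation on joint spatial-intensity features, assuming scikit-learn; the spatial weighting and bandwidth quantile are illustrative choices, and small images are advisable since mean shift is costly:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_segment(img, spatial_weight=0.3, quantile=0.1):
    """Cluster (x, y, intensity) feature vectors with mean shift;
    no cluster count is specified, matching the non-parametric claim."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    X = np.column_stack([spatial_weight * xx.ravel(),
                         spatial_weight * yy.ravel(),
                         img.ravel().astype(float)])
    bw = estimate_bandwidth(X, quantile=quantile, n_samples=500)
    return MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(X).reshape(h, w)
```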
Droege: Pupil Center Detection in Low Resolution Images (Kalle)
In some situations, high-quality eye-tracking systems are not affordable. This generates demand for inexpensive systems built upon non-specialized, off-the-shelf devices. Investigations show that algorithms developed for high-resolution systems do not perform satisfactorily on such low-cost, low-resolution systems. We investigate algorithms specifically tailored to such low-resolution input devices, based on a combination of different strategies. An approach called gradient direction consensus is introduced and compared to image-based correlation with adaptive templates as well as other known methods. The results are compared using synthetic input data with known ground truth.
This paper introduces Artificial Neural Networks (ANNs) to model probabilistic dependencies in supervised classification tasks for discriminating between earthquakes and explosions. ANNs are regarded as discriminating tools to separate natural seismic events (earthquakes) from artificial ones (man-made explosions) based on seismic signals recorded at regional distances. The main novelty of our work is the improvement of the obtained numerical results using this advanced technique. By testing different types of seismic features, the ANNs showed the potential of this method to discriminate the classes. Here, the ARMA-coefficient filters detect the type of source whenever a natural or artificial source changes the nature of the background noise of the seismograms. During this study, we also found that the algorithm is sometimes capable of signaling an upcoming natural seismic event shortly before its onset.
Design and Development of Forest Fire Management System (sipij)
Forest fire is one of those natural disasters that cause huge destruction in terms of loss of vegetation and animal life, and hence affect the economy. Image segmentation techniques have been applied to satellite images of forest fires to extract the fire object, and data mining techniques have been used to predict the spread of forest fire. This paper proposes a novel approach to isolating the fire region using time-sequenced images, classifying fire images versus non-fire images, predicting fire movement, and estimating the area burnt. Once the images are enhanced, the fire region is segmented out. Feature extraction provides the inputs needed to classify images as fire or non-fire. Linear regression is used to predict the movement of the forest fire to facilitate a better evacuation strategy, and the burnt area is calculated from the difference image. This work helps in drafting evacuation strategies quickly by predicting the movement of the forest fire, and it facilitates the kick-off of rehabilitation activities by identifying and assessing the burnt area.
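The movement-prediction step can be sketched as ordinary least-squares lines fitted to the fire-region centroid over time (a simplification of the paper's regression setup):

```python
import numpy as np

def predict_fire_front(centroids, t_future):
    """Fit x(t) and y(t) of the segmented fire-region centroid by
    least-squares lines and extrapolate to a future frame.
    `centroids` is an (n, 2) array for frames t = 0..n-1."""
    t = np.arange(len(centroids))
    cx = np.polyfit(t, centroids[:, 0], 1)   # slope, intercept for x(t)
    cy = np.polyfit(t, centroids[:, 1], 1)   # slope, intercept for y(t)
    return np.polyval(cx, t_future), np.polyval(cy, t_future)

c = np.array([[10, 5], [12, 7], [15, 8], [17, 11]])
print(predict_fire_front(c, 6))              # expected centroid at frame 6
```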
Image segmentation methods for brain MRI images (eSAT Journals)
This document compares different edge detection methods for brain MRI images and proposes a new active contour (snake) model approach. It first describes traditional gradient-based methods like Sobel, Prewitt and Canny edge detectors, noting their limitations in medical images. It then introduces active contour models, which use energy functions to capture edges of curved objects sharply. An experiment applies different methods to a brain MRI image and compares their outputs visually and using PSNR, finding the active contour method performs best in segmenting the brain region accurately. The document concludes the active contour approach is well-suited for medical image segmentation tasks.
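A minimal active-contour (snake) sketch using scikit-image; the test image and snake parameters are illustrative stand-ins for an MRI slice:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = gaussian(img_as_float(data.camera()), sigma=3)   # stand-in for an MRI slice
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([256 + 180 * np.sin(theta),     # initial circular snake (row, col)
                        256 + 180 * np.cos(theta)])
# energy-minimizing contour: alpha/beta control elasticity and rigidity
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
```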
This document analyzes the effect of different mobility patterns on the AODV and OLSR routing protocols in a mobile ad hoc network (MANET) using various TCP variants. It simulates scenarios using the OPNET simulator with 60 nodes under static and random waypoint mobility models. The performance is evaluated in terms of packet end-to-end delay, traffic received, and throughput. The results show that the SACK TCP variant performs best under random waypoint mobility for both protocols, while Tahoe performs best under static mobility for OLSR. It also finds that AODV generally outperforms OLSR and that SACK is the best variant for AODV across both mobility patterns.
This document discusses improving quality of service for connection admission control mechanisms using a two-dimensional queuing model. It proposes a threshold-based connection admission control that prioritizes ongoing connections based on available resources and bandwidth. A two-dimensional queuing model is used for better cross-layer design, modeling traffic arrival processes, and multi-rate transmission. The proposed algorithm aims to provide lower computational complexity, better QoS, increased throughput, and reduced delay compared to other algorithms.
A detailed geological history of quartz and industrial minerals present in different localities of Eritrea is given. Well-grown transparent quartz crystals reflecting hexagonal crystallographic features, and isolated, irregularly shaped small milky quartz stones, are found in the western suburbs of Asmara and in the area between Molebso and Zara in central-northern Eritrea. The mechanism of formation of the growth features observed on the habit faces of the transparent quartz crystals is briefly explained. Micro-topographical studies carried out on these crystals indicate that, to begin with, they grow and develop under conditions of high supersaturation. Most of the milky quartz stones are observed to be randomly scattered and devoid of gold; however, a few specimens having yellow-colored dots on their surfaces contain gold particles. Energy-dispersive X-ray analysis (EDAX) indicates a gold content as high as 48% in such samples. Commercial implications related to gold-bearing quartz are discussed. It is proposed that gold exists in large quantities in quartz veins deep beneath the Earth's surface in this region.
This document summarizes research on the effect of high temperatures on the compressive strength of concrete. Ninety concrete cubes were cast in three grades and subjected to temperatures from 200°C to 800°C for 1-2 hours. Testing found that strength was largely unaffected up to 350°C but began declining at 500°C, with over 30% reduction at 650°C; beyond 650°C, the concrete was largely destroyed. Longer exposure times resulted in greater damage. The research adds to the understanding of concrete performance during fires and suggests structures may require repair after exposure to 500°C but major work after 650°C.
Comparative Assessment of Two Thermodynamic Cycles of an aero-derivative Mari... (IOSR Journals)
Abstract: This paper explores gas turbine potentials that are fully enhanced by the use of intercooling and thermal recuperation, an engineering option available in the design of gas turbines offered for marine applications. It examines the off-design performance of two different cycle designs of a 25 MW aero-derivative engine by modelling and simulating each of them under conditions other than those of its design point. The simple-cycle model consists of a single-spool, dual-shaft layout, while the advanced model is represented by an intercooled-recuperated cycle that runs on a dual spool and is driven through a three-shaft configuration. In each case, the output shaft is coupled to a power turbine through which the propulsion power may be transmitted to the propeller of the vessel, operating in a virtual marine environment. An off-design performance simulation of both engines has been conducted in order to investigate and compare the effect of ambient-temperature variation during part-load operation, particularly under a variety of marine operating conditions. The study assesses the techno-economic impact of the complex design of the advanced cycle over its simple-cycle counterpart and demonstrates its potential for improved operating cost through reduced fuel consumption, a significant step in the current drive to establish the marine gas turbine engine as a viable alternative to traditional prime movers in the ship propulsion industry.
The Prevalence of Alcohol Consumption among Commercial Drivers in Uyo Local G... (IOSR Journals)
Abstract: The purpose of the study was to assess the prevalence of alcohol consumption among commercial drivers in Uyo metropolis. Five research questions and three null hypotheses were adopted, and the instrument for the study was mainly interview schedules. Due to the transitory nature of drivers in Uyo motor parks, convenience sampling was used to draw 160 drivers who use the parks. Descriptive statistics (percentages) were used to answer the research questions, while the chi-square statistic was used to test the hypotheses at the 0.05 level of significance. All the drivers interviewed drink alcohol, for several reasons. The sale of alcohol in the park and its environs has a significant (P < 0.05) influence on its use. There is no statistically significant difference (P > 0.05) in the perceived influence of alcohol use on health with respect to drivers' years of experience and age. The study concludes with appropriate recommendations to help the situation.
Keywords: alcohol, drivers, prevalence, Uyo
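The hypothesis-testing step corresponds to a standard chi-square test of independence at the 0.05 level; a sketch with purely illustrative counts (the study's actual tables are not reproduced here):

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table only (not the study's data):
# rows: alcohol sold in/around the park (yes / no)
# cols: drivers drinking daily / occasionally
table = [[70, 10],
         [45, 35]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```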
This document summarizes the application architecture design for an automobile dealership company called Mandala Company using Enterprise Architecture Planning (EAP). It identifies 7 key business processes and recommends applications to support each process. The applications are categorized into 6 subsystems: unit sales, service and spare parts, purchasing spare parts, finance and accounting, and personnel. The applications are designed to be web-based for ease of maintenance. The document concludes by classifying the applications using McFarlan's Strategy Grid and identifying enterprise-wide applications.
This document discusses power flow management through an Interline Power Flow Controller (IPFC). It begins with an abstract that introduces IPFC as a FACTS controller that can provide balance of reactive and active power between two lines from the same substation through voltage source converters connected in series with the lines and a common DC link. It then provides background on reactive power compensation, FACTS devices, and the operating principles of IPFC. The document establishes equations to model the active and reactive power flows that can be controlled by IPFC. It presents a case study applying the IPFC model to a five bus system and shows that IPFC is effective at controlling power flows between lines.
This document summarizes research on the numerical and experimental study of the effect of impeller design on the performance of submerged turbines. A Gorlov helical water turbine was designed, fabricated, and tested both theoretically using computational fluid dynamics software and experimentally in an open channel. The experimental results showed that power increased with water velocity, reaching 4.621 W at a velocity of 1.81 m/s. CFD modeling using Fluent agreed well with experimental results. The study evaluated turbine performance at various water velocities to optimize power extraction based on impeller design.
This document summarizes an experimental investigation into using local Sudanese aggregates to produce high-strength concrete with a compressive strength of 80 MPa. Hundreds of specimens were made using marble and granite aggregates from Sudan along with supplementary cementitious materials like silica fume and fly ash. The concrete achieved the target strength and the aggregates were found to be suitable for high-strength concrete. A second part of the study evaluated the drying shrinkage of the high-strength concrete and found only a weak relationship between higher strength and increased shrinkage. The research aims to support a project heightening an existing dam in Sudan using locally available materials.
Exact Analytical Expression for Outgoing Intensity from the Top of the Atmosp... (IOSR Journals)
This research is part of work devoted to the application of the Analytical Discrete Ordinate (ADO) method to the polarized monochromatic radiative transfer equation undergoing anisotropic scattering, with a source-function matrix, in a finite coupled atmosphere-ocean medium having flat-interface boundary conditions involving specular reflection and a transmission matrix. Discontinuities in the derivatives of the Stokes vector with respect to the cosine of the polar angle at the smooth interface between the two media of different refractive indices (air and water) are tackled using a suitable quadrature scheme devised earlier. The atmosphere and ocean are assumed to be homogeneous; no stratification is adopted in either medium. An exact expression for the emergent radiation intensity vector at the top of the atmosphere is derived. Exact expressions for the emergent polarized radiation intensity vector at the air-water interface, as well as at any point of the two media in any direction, can also be derived in terms of eigenvectors and eigenvalues.
The document presents the results of an experimental investigation into the performance of a laboratory screw jack. Tests were conducted by applying loads between 100 N and 450 N to the screw jack. For each load, the mechanical advantage, velocity ratio, and mechanical efficiency were calculated. The results showed that the mechanical efficiency of the screw jack was always less than 50%, since the mechanical advantage was less than half the velocity ratio. Frictional forces in the screw and base meant the efficiency did not remain constant across loads.
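The quantities involved follow the standard relations

\[
\mathrm{MA}=\frac{W}{P},\qquad \mathrm{VR}=\frac{2\pi R}{p},\qquad \eta=\frac{\mathrm{MA}}{\mathrm{VR}}\times 100\%,
\]

where \(W\) is the load, \(P\) the applied effort, \(R\) the effort-arm radius, and \(p\) the pitch (lead) of the screw; \(\eta<50\%\) is consistent with a self-locking screw jack, which cannot run back under load.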
This document proposes an improved genetic algorithm, called DGA, that combines the genetic algorithm with differential evolution. DGA uses adaptive differential evolution as its mutation operator instead of the simple genetic algorithm's crossover and mutation, and it adds strategies of optimal reservation and worst elimination. Simulation results show that DGA has stronger global optimization ability, faster convergence, and better stability than the simple genetic algorithm.
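A toy sketch of such a GA/DE hybrid on a test function, with DE/rand/1 mutation plus elitism and worst-member replacement standing in for the paper's "optimal reservation" and "worst elimination" strategies (the adaptive control of the scale factor F is omitted):

```python
import numpy as np

def dga_minimize(f, dim=5, pop=30, gens=200, F=0.5, seed=0):
    """GA/DE hybrid sketch: DE/rand/1 mutation with greedy selection,
    then elitism (keep best) and worst elimination each generation."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5, 5, (pop, dim))
    fit = np.apply_along_axis(f, 1, P)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = a + F * (b - c)                  # DE/rand/1 mutation
            ft = f(trial)
            if ft < fit[i]:                          # greedy selection
                P[i], fit[i] = trial, ft
        worst, best = np.argmax(fit), np.argmin(fit)
        P[worst], fit[worst] = P[best], fit[best]    # eliminate worst, keep elite
    return P[np.argmin(fit)], fit.min()

x, fx = dga_minimize(lambda v: np.sum(v ** 2))       # sphere test function
print(x, fx)
```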
This document describes the design of a microstrip patch antenna for WiMAX applications at 8.5 GHz. The antenna is designed using Advanced Design System software. It consists of a patch, ground plane, and Roger R04003C dielectric substrate. Simulation results show the antenna has a gain of 6.2 dB and return loss of -0.3 dB at a resonant frequency of 7.1 GHz. Plots of S-parameters, far-field patterns, polarization, and radiation patterns are provided from the simulations. The designed antenna achieves good performance for WiMAX applications in the specified frequency range.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
This document discusses a framework for improving access to virtual reality (VR) environments for citizens with disabilities. It proposes techniques ranging from simple additions to VRML files to scripts that can aid in creating more accessible VR worlds. These techniques aim to improve the usability and accessibility of VR technologies for people with sensory, physical, or cognitive impairments. The framework also provides initial authoring strategies to help make VRML content more accessible. The goal is to leverage VR to enhance the quality of life and independence of citizens with disabilities.
Modeling Of Flat Plate Collector by Using Hybrid TechniqueIOSR Journals
Solar water heaters (SWH) are becoming increasingly attractive in sustainable development, and continuous efforts are being made to reduce their costs and make them more affordable. Solar energy has experienced remarkable development in recent years because of cost reductions due to technological development as well as renewable energy schemes supported by the government. The process of using the sun's energy to heat water is not a new technology, and SWH technology has improved a lot during the past century. The primary method of energy transport from the sun is electromagnetic radiation, and the radiation coming from the Sun depends on its temperature. The Sun generates electromagnetic radiation over an extensive span of wavelengths, but most of the radiation is emitted in the visible range owing to its surface temperature. The amount of solar energy received in a particular region depends on the time of the day, the season of the year, the sky's cloudiness, and the proximity to the Earth's equator. For modeling we utilized a genetic algorithm, and for prediction we employed hybrid ABC and PSO techniques. The genetic algorithm is used to optimize the modeling technique using the collected dataset.
This document discusses the design, analysis, and feasibility testing of a center-mounted suspension system. It begins with an introduction to conventional suspension systems and their limitations. The proposed center-mounted system aims to improve vehicle balance in all terrains by directly attaching the suspension to the vehicle's central chassis. The document then reviews different suspension system types and analyzes the proposed system's working principles and mathematical calculations. Finally, stress analysis using ANSYS software demonstrates the advantages of the center-mounted design in absorbing shocks during turns and on bumpy roads. In conclusion, the proposed system maintains vehicle balance better than conventional designs through its unique center-attached configuration.
This document discusses classifying patterns under attacks and evaluating pattern security. It proposes a framework for assessing pattern security and modeling adversaries to characterize attack situations. The framework aims to provide a more comprehensive understanding of how classifiers behave under adversarial conditions. This can help lead to better design decisions that improve classifier security against considered attacks. Three applications are discussed - spam filtering, intrusion detection, and biometric verification - where pattern classifiers may be vulnerable if adversarial scenarios are not accounted for during design and evaluation.
This document describes a sketch-based image retrieval system that uses freehand sketches as queries to retrieve similar colored images from a database. The system first extracts features like color, texture, and shape from the sketch using descriptors such as Color and Edge Directivity Descriptor (CEDD) and Edge Histogram Descriptor (EHD). It then clusters the images in the database using k-means clustering based on the similarity of their features to the sketch. Finally, the system retrieves the most similar colored image from the clustered images as the output match for the user's sketch query.
Similar to “Trade-Off between Detection and Resolution of Two Point Objects Under Various Conditions of Imaging Situations: Part-I: Mathematical Formulation of the Problem”
Purkinje imaging for crystalline lens density measurementPetteriTeikariPhD
Brief introduction for the non-invasive, inexpensive and fast Purkinje image -based method for measuring the spectral transmittance of the human crystalline lens density in vivo.
Alternative download link:
https://www.dropbox.com/s/588y7epy13n34xo/purkinje_imaging.pdf?dl=0
The document describes the development of an open-source optical trapping microscope to manipulate and study nano- and micro-components. Key features of the microscope include x-, y-, and z-motion control of the sample stage, piezoelectric microfluidic chambers, Köhler illumination, and automated particle tracking capabilities. Preliminary experiments were conducted to characterize a single-beam laser optical trap, including analysis of the three-dimensional trapping potential and algorithms to compensate for factors limiting trap quality. Improvements and further research areas are discussed, such as using higher laser power and extracting z-direction information about optical traps.
Photoelasticity uses the birefringent properties of transparent materials under stress to determine stress distributions. When polarized light passes through a stressed transparent material, interference fringes appear indicating principal stress directions and magnitudes. Photoelasticity is used for non-contact stress analysis of components, impact testing, assembly stress analysis, and model verification. Limitations include difficulties acquiring quantitative principal stress data and coating reliability issues for field investigations of structures like concrete.
This document discusses a reflectance perception model based face recognition algorithm that is robust to illumination variations. It begins with an introduction to the challenges of face recognition across different lighting conditions. It then reviews related work on illumination compensation techniques. The document proposes a reflectance perception model that transforms face images into an illumination-insensitive representation by estimating an illumination gain factor. It also describes applying principal component analysis (PCA) to extract facial features from the preprocessed images in a lower dimensional space, removing unwanted vectors. Finally, it discusses fusing matching scores from multiple classifiers using a weighted sum to improve recognition accuracy across variations in lighting.
This document presents a reflectance perception model based face recognition approach that is robust to illumination variations. It proposes a preprocessing algorithm based on the reflectance perception model to generate illumination insensitive images. It then applies principal component analysis (PCA) for feature extraction to reduce the image dimension and remove unwanted vectors. Multiple classifiers are used to extract features from different Fourier domains and frequencies, and scores from these classifiers are combined using a weighted sum fusion method based on equal error rate weights. Experimental results on standard databases show the proposed approach delivers large performance improvements over other face recognition algorithms in handling illumination variations.
Survey on Single image Super Resolution TechniquesIOSR Journals
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as 'low-resolution' images, to overcome the limitation and/or ill-posed conditions of the image acquisition process and to facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas which promise performance improvements, and discuss future directions for super-resolution algorithms. Finally, results of available methods are given. Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
This document presents a novel edge detection algorithm proposed for mammographic images. It begins with an abstract summarizing the paper's focus on edge detection in mammograms and comparison to other common edge detection methods. It then provides background on edge detection and medical image analysis, describing common gradient and derivative-based edge detection methods. The main body introduces a new two-phase edge detection process called Binary Homogeneity Enhancement Algorithm (BHEA) that homogenizes the mammogram and detects edges by traversing the image horizontally and vertically. Results from the new method are then compared to other common edge detection filters.
Strehl Ratio with Higher-Order Parabolic FilterIJMER
In all the branches of science, engineering and technology, it is known that the output due to
an input impulse function, spatial or temporal, is never an impulse. There is a spread of the input impulse
function in the output due to the noise introduced by the physical device. It was Strehl who first
introduced the important image-quality assessment parameter "Definitionshelligkeit", now simply known after him as the Strehl Ratio (SR). In this paper, we have studied this parameter for an optical system apodised with higher-order super-resolving parabolic filters. The results obtained are discussed graphically.
Effective segmentation of sclera, iris and pupil in noisy eye imagesTELKOMNIKA JOURNAL
In today's security-sensitive environment, iris recognition is the biometric technique that has received the most attention for personal authentication. One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil and sclera, in a captured eye image. In our proposed method, the input image is first preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted. Entropy is then measured from these contour-based features to effectively distinguish the data in the images. Finally, a convolutional neural network (CNN) is used, based on the entropy measure, for effective segmentation of the sclera, iris and pupil. The results are analyzed to demonstrate that the proposed segmentation method performs better than existing methods.
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION acijjournal
This document summarizes a research paper on using the Cholesky decomposition technique to fuse multispectral images and represent them as a color image. It discusses how multispectral image fusion works by combining images from different spectral bands. It then describes the VTVA (Vector valued Total Variation Algorithm) technique in detail, which uses the covariance matrix and Cholesky decomposition to control the correlation between color components in the fused image. This technique is compared to principal component analysis. The document provides background on RGB color space, color perception, and Cholesky decomposition before outlining the specific steps of the VTVA algorithm.
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSIONacijjournal
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of image fusion algorithms that combine the images from these sensors in an efficient way to give an image that is more perceptible to the human eye. Multispectral image fusion is the process of combining images optically acquired in more than one spectral band. In this paper, we present a pixel-level image fusion that combines four images from four different spectral bands, namely near infrared (0.76-0.90 µm), mid infrared (1.55-1.75 µm), thermal infrared (10.4-12.5 µm) and mid infrared (2.08-2.35 µm), to give a composite colour image. The work employs a fusion technique involving a linear transformation based on the Cholesky decomposition of the covariance matrix of the source data, which converts the grayscale multispectral source images into a colour image. The work is composed of different segments, including estimation of the covariance matrix of the images, Cholesky decomposition, and the transformation itself. Finally, the fused colour image is compared with the fused image obtained by the PCA transformation.
Filtering Based Illumination Normalization Techniques for Face RecognitionRadita Apriana
The main challenge experienced by present face recognition techniques and smoothing filters is the difficulty of managing illumination. The differences in face images created by illumination are normally bigger than the inter-person differences used to distinguish identities. However, face recognition under varying illumination has many uses in applications dealing with non-cooperative subjects, where the highest potential of face recognition as a non-intrusive biometric feature can be realized. A lot of work has been put into the research and development of illumination-robust face recognition, and many important methods have been introduced. Nevertheless, some concerns with face recognition under illumination still require further consideration, including deficiencies in understanding the sub-spaces of illumination images, intractability problems in face modelling, and the complicated mechanisms of face surface reflection.
This document discusses object detection using the YOLO algorithm. It begins with an abstract that describes the goal of detecting multiple objects in a single frame using YOLO and evaluating its performance on the MS COCO dataset. The introduction provides background on object detection and outlines that YOLO is one of the fastest algorithms. The literature survey section summarizes previous research on two-stage detectors like RCNN, Fast RCNN, and Faster RCNN and their drawbacks. It also discusses one-stage detectors like SSD and how YOLO has improved accuracy.
Literature Survey on Image Deblurring TechniquesEditor IJCATR
Image restoration and recognition have become very important nowadays. Face recognition becomes difficult when it comes to blurred and poorly illuminated images, and it is here that face restoration comes into the picture. Many methods have been proposed in this regard, and in this paper we examine the different methods and technologies discussed so far, along with their merits and demerits.
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...CSCJournals
Sonar images produced due to the coherent nature of the scattering phenomenon inherit a multiplicative component called speckle and contain almost homogeneous as well as textured regions with relatively rare edges. Speckle removal is a pre-processing step required in applications like the detection and classification of objects in the sonar image. In this paper, computationally efficient Fractional Integral Mask algorithms to remove the speckle noise from sonar images are proposed. The Riemann–Liouville definition of fractional calculus is used to create fractional integral masks in eight directions. The use of a mask incorporating the significant coefficients from the eight directional masks, together with the single convolution operation required in that case, helps in obtaining the computational efficiency. The classification of heterogeneous patches in the sonar image is based on a newly proposed naive homogeneity index which depends on the texture strength of the patches, and the despeckling filters can be adjusted to these patches. Applying the mask convolution only to the selected patches again reduces the computational complexity. The non-homomorphic approach used in the proposed method avoids the undesired bias occurring in the traditional homomorphic approach. Experiments show that the required mask size depends directly on the fractional order; the mask size can be reduced for lower fractional orders, ensuring a reduction in computational complexity for lower orders. Experimental results substantiate the effectiveness of the despeckling method. Different no-reference image quality evaluation criteria are used to evaluate the proposed method.
An Experimental Approach For Evaluating Superpixel's Consistency Over 2D Gaus...CSCJournals
This article proposes a rigorous method to assess the consistency of superpixels for different superpixel segmentation algorithms. The proposed method extracts the superpixels that remain unchanged over certain levels of noise by adopting the Jaccard Similarity Coefficient (JSC). Technically, we developed a measure of Jaccard similarity for superpixel segmentation algorithms to compare the similarity between sets of superpixels (original and noisy). The algorithm calls the superpixel segmentation algorithm to generate the superpixel results of the original images and saves their boundary masks and labels. It then applies varying degrees of noise to the images and produces the superpixel results; the process is repeated for four levels, with an increased noise value at each iteration. We chose 2D Gaussian blur, impulse noise, and a combination of both to corrupt the images. The proposed algorithm generates similarity indices of the superpixels (original and noisy) using Jaccard similarity. To be categorized as a consistent superpixel, the similarity index must meet a predefined JSC threshold. The superpixel consistency of four different superpixel segmentation algorithms, including bilateral geodesic distance (BGD), flooding-based superpixel generation (FBS), superpixels via geodesic distance (GDS), and Turbopixel (TP), is evaluated. The experimental results demonstrated that no single algorithm was able to yield an optimal outcome, and all failed to maintain consistent superpixels at each level of noise. Conclusively, more robust superpixel algorithms must be developed to solve such problems effectively.
1) Researchers used galaxy images from the Hubble Space Telescope to measure multipole moments that quantify galaxy shapes without oversimplifying models.
2) They estimated the probability distribution of galaxy shapes in high dimensions using kernel density estimation on principal components of the data.
3) This probability distribution will be used as a prior in Bayesian analysis to refine possible lens models that can explain observations by allowing all physically reasonable models.
Similar to “Trade-Off between Detection and Resolution of Two Point Objects Under Various Conditions of Imaging Situations: Part-I: Mathematical Formulation of the Problem” (20)
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probes-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of the food. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for the detection of highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly measured by the spin-trapping technique.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
“Trade-Off between Detection and Resolution of Two Point Objects Under Various Conditions of Imaging Situations: Part-I: Mathematical Formulation of the Problem”
IOSR Journal of Mathematics (IOSR-JM)
e-ISSN: 2278-5728,p-ISSN: 2319-765X, Volume 6, Issue 5 (May. - Jun. 2013), PP 45-55
www.iosrjournals.org
“Trade-Off between Detection and Resolution of Two Point
Objects Under Various Conditions of Imaging Situations: Part-I:
Mathematical Formulation of the Problem”
P. Thirupathi
Department of Mathematics, Osmania University, Hyderabad, 500007, A.P, India.
Abstract: It is a well-experienced fact that whenever one tries to detect a weak object point in the vicinity of an intense point object, viz., a binary star such as Sirius and its weak companion star, there is always a loss of resolution of the optical system. In other words, if one wants to improve the detectivity of the system, there is always a loss in the resolution capabilities of the system. Thus, there is a trade-off between Detection and Resolution of optical systems under various imaging situations. In this first paper of our discussion of this trade-off, we have derived the Fourier analytical formulation of the problem. This formulation will be used to find a compatible trade-off between Detection and Resolution in our further publications.
Key words: Fourier Optics, Mathematical Optics, Super-Resolution, etc.
I. Introduction
In this paper, the two-point resolution capabilities have been discussed for an optical system with
parabolic filters. The Rayleigh and the Sparrow resolution limits are computed and studied as functions of the
degree of coherence of illumination (incoherent, coherent and partially coherent) of the two point objects. The
problem of the definition and determination of an image quality criterion has long been and still is a major one
in the field of image evaluation and assessment. Though a number of physical parameters for assessing the
quality of an image have been proposed from time to time, none of these measures is completely satisfactory.
Some of these parameters are: Resolving Power, Strehl Definition, Optical Transfer Function, Equivalent Pass-band, Relative Structural Content, Correlation Quality, Image Fidelity and peachiness. Historically, one of the first measures established for the evaluation of optical systems was to specify how well the system could resolve a two-point object, and two-point resolution, being one of the simplest quality criteria in terms of the impulse response, is the one chosen from among the several criteria available. The intensity distribution in the image should satisfy
the requirement of the criteria chosen. The limiting separation thus determined gives the imaging system’s
response in terms of two-point resolution. In the case of sources of short-wavelength radiations such as X-rays, gamma rays or sub-atomic particles, the conventional methods of ray bending, i.e., reflection, refraction and diffraction, cannot be used for imaging due to their high penetrating power and rectilinear propagation. In such cases, coded imaging (CI) techniques can play an important role in determining source location and source distribution. When reduced to the basics, CI is a two-step process.
In the first step, the source information is recorded or encoded by geometrical shadow casting through
a coded aperture (no ray bending is involved). In the second step, the image is matched to the coded aperture
design. Though two-point resolution is one of the simplest criteria to assess the performance of optical imaging systems, it has its inherent complexity owing to the fact that the limit of resolution is sensitive to a large number of factors, viz., the nature of the optical system, the nature of illumination, the object point separation, the intensity ratio of the object points, the degree of coherence, the resolution criterion used, etc. Therefore, there has to be some
flexibility in the exact quantitative definition of the limiting resolving power achievable. The importance of two-
point resolution lies in the fact that it is one of the earliest and simplest physical parameters used to evaluate the
performance of optical imaging system in various imaging situations, incoherent, partially coherent and
coherent illuminations. It may be noted that the resolving power of an imaging system as determined by the
Rayleigh criterion is not the property of the system alone but also of the pair of objects and the coherence
condition of illumination. Though the optical transfer function as an assessment parameter is superior to two-point resolution for optical systems operating in incoherent illumination, it should be noted that partially coherent and coherent imaging systems become non-linear in both amplitude and intensity if the detection step is also included in the imaging system. Due to the non-linearity associated with partially coherent imaging systems, such systems become object-dependent and cannot be completely characterized by a system transfer function as in the linear case.
II. Various Resolution Criteria
As the subject of two-point resolution is sensitive to a wide variety of factors, a criterion of resolution
is required in order to determine the limit of resolution. Several criteria have been proposed from time to time. It
should be mentioned that all criteria of resolution are arbitrary and as BARAKAT [1] has mentioned, any
criterion of resolution is not a law of physics. It may be pointed out that none of these criteria either
determines or sets an absolute limit on the limit of resolution. Therefore, it is meaningless to talk about the
absolute resolving- power of an imaging system. The resolving-power, as RONCHI [2] opined, depends on a
three-fold combination of
The source and its energy,
The instrument and its energy distribution capacities and
The receiver and its sensitivity characteristics.
However arbitrary these criteria may be, they serve the purpose of comparing the performances of various imaging systems. The criteria "yield useful rules of thumb for engineering practice". In this section, the various resolution criteria proposed from time to time are presented, and the Rayleigh and Sparrow criteria of resolution that have been used in the present study are explained in more detail in the following two sections.
The subject of two-point resolution starts with the celebrated Rayleigh criterion. LORD RAYLEIGH [3]
developed the first resolution criterion, which now bears his name. Rayleigh recognized the arbitrariness of the
criterion. In his own words, “This rule is convenient on account of its simplicity and it is sufficiently
accurate in view of the necessary uncertainty as to what is meant by resolution”. Rayleigh criterion,
through arbitrary, has the virtue of being particularly uncomplicated.
SPARROW gave an alternative criterion, which he called the "undulation condition". ASAKURA [4],
recognizing that the case of object points having equal intensity is rare in actual imaging situations, introduced
the “modified Sparrow criterion” to suit actual imaging situations and studied the problem of two-point
resolution of unequally bright points under partially coherent illumination. BHATNAGAR, SIROHI and
SHARMA [5] proposed a criterion for the case of unequally bright points. This criterion is based on the ratio of
the lower peak intensity $I_{LP}$ and the dip-point intensity $I_{Dip}$ in the resultant intensity distribution of the image of the two points. According to this criterion, the two points are just resolved if
$$\frac{I_{Dip}}{I_{LP}} = 0.735 \qquad \text{or} \qquad \frac{I_{LP} - I_{Dip}}{I_{LP}} = 0.265 \qquad \ldots (1)$$
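As a quick numerical illustration of criterion (1) (our sketch, not part of the paper; the function name and the example intensities are hypothetical), the test reduces to comparing the dip-to-peak ratio against 0.735:

```python
def is_resolved(i_lp: float, i_dip: float) -> bool:
    """Two-point resolution test in the sense of criterion (1).

    i_lp  -- lower peak intensity I_LP in the combined image
    i_dip -- intensity I_Dip at the dip (saddle) point
    The points count as resolved when the dip is at least as deep as
    the critical ratio I_Dip / I_LP = 0.735, i.e. when the relative
    drop (I_LP - I_Dip) / I_LP reaches 0.265 or more.
    """
    return i_dip / i_lp <= 0.735

# A dip at 70% of the lower peak is resolved; one at 90% is not.
print(is_resolved(1.0, 0.70))  # True
print(is_resolved(1.0, 0.90))  # False
```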
All these criteria are based, directly or indirectly, on the response functions of the optical system. In the field of image quality assessment, the study of two-point
resolution is still a subject of great interest. In the field of astronomy, the problem of two-point resolution is very
significant when it comes to resolve two closely spaced point objects, viz., binary stars. According to the
principles of geometrical optics, there is no upper limit to the resolving power of a perfect optical imaging
system. But, however perfect the optical imaging system may be, the image of a point is never a point, due
to diffraction and aberration. Due to the wave nature of light, the image of a point object is a spread of light
called the Fraunhofer diffraction pattern. When two object points are close to each other, the image of the two-point object consists of the superposition of the two point spread functions corresponding to the two object points, and the type of superposition depends on the phase correlation between the two object points. If the object points are very close to each other, it will be difficult to recognize the two images in the superposition of the point spread functions. The overlapping of the individual diffraction patterns makes it a difficult job to detect the presence of the individual images. The size of the finest detail, or the minimum separation of object points that can be just resolved,
is given by the “Limiting Resolution”. The reciprocal of the limit of resolution gives the resolving power of
the optical system. The lesser the limit of resolution, the greater will be the resolving power of the optical
system. In order to study the problem of two-point resolution experimentally, two pinholes in an opaque screen
are illuminated by a source of monochromatic light. The light emerging from the pinholes is passed through the
optical system under study. The total intensity distribution in the image of the pinholes is studied as a function
of the image-plane co-ordinates. To determine the minimum resolvable separation between the pinholes, a criterion of resolution is then applied; one such measure is the distance in object space to the first null in the axial intensity distribution.
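The dependence on the degree of coherence mentioned above can be made concrete with the standard two-point image model, in which the combined intensity is $I(z) = A_1^2(z) + A_2^2(z) + 2\gamma A_1(z) A_2(z)$ for real amplitude impulse responses $A_1, A_2$ and degree of coherence $\gamma$ ($\gamma = 0$ incoherent, $\gamma = 1$ coherent and in phase). The following sketch (ours, for a slit aperture; it assumes NumPy is available) shows the central dip filling in as the illumination becomes more coherent:

```python
import numpy as np

def amp_psf(z):
    """Amplitude impulse response sin(z)/z of a slit aperture
    (z in dimensionless diffraction units)."""
    z = np.asarray(z, dtype=float)
    out = np.ones_like(z)
    nz = z != 0
    out[nz] = np.sin(z[nz]) / z[nz]
    return out

def two_point_image(z, d, gamma):
    """Combined intensity of two equally bright points separated by d
    under illumination with degree of coherence gamma."""
    a1 = amp_psf(z - d / 2)
    a2 = amp_psf(z + d / 2)
    return a1**2 + a2**2 + 2.0 * gamma * a1 * a2

z = np.linspace(-10.0, 10.0, 2001)   # symmetric grid, z = 0 at index 1000
for gamma in (0.0, 0.5, 1.0):
    img = two_point_image(z, d=4.0, gamma=gamma)
    ratio = img[1000] / img.max()    # dip-to-peak ratio at the midpoint
    # a ratio of 1.0 means the central dip has vanished: unresolved
    print(f"gamma = {gamma:.1f}: dip/peak = {ratio:.3f}")
```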
However, the Rayleigh and Sparrow criteria have been the most extensively used criteria in the field of
image science. It may be mentioned that these criteria are based, directly or indirectly, on the point spread function (PSF). The reasons for choosing these two criteria for the present dissertation are given below.
In the field of image science, both the Rayleigh and Sparrow criteria have been and are still being used
extensively in the assessment performance of optical imaging systems.
Several workers have modified these criteria to suit various imaging situations for the purpose of comparing
and assessing the performance of optical systems.
As the Rayleigh criterion has limited applicability, we have chosen the Sparrow criterion as well. The Sparrow criterion is sensitive to various parameters such as the intensity ratio of the object points, non-uniform transmission of the aperture, and the degree of coherence of illumination of the object points, and it is amenable to quantitative calculations. It has also been empirically found that the effects of noise limitations on two-point resolution correlate well with these two criteria.
III. The Rayleigh Criterion
The celebrated Rayleigh criterion states that "the two point sources are just resolved if the maximum
of one irradiance pattern coincides with the first minimum of the other”. This means that two closely
spaced points can be considered as just resolved if we are able to distinguish the resultant PSF in the image as
being due to two objects instead of one. It may be pointed out that Rayleigh proposed his criterion to be used for
line spectra in Spectroscopy. But it can be equally applied for images of point objects as well. In its original
form, the Rayleigh criterion is applicable to two equally bright points under incoherent illumination. The
Rayleigh criterion implies a pronounced central dip (minimum) in the resultant image intensity distribution
curve of the equally bright and incoherent object points. This dip or 'saddle point' is midway between the two PSF peaks. For circular apertures, the dip-point intensity is 73.5% of the maximum intensity. This implies a drop of 26.5% in intensity. The corresponding values for slit apertures are respectively 81.1% and 18.9%. For a
circular aperture the intensity distribution is the Bessel function squared or the Besinc function and for one
dimensional object (slits) it is the sinc function squared.
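The circular-aperture figures quoted above are easy to verify numerically. A minimal sketch (ours; it assumes SciPy is available) adds two incoherent Airy patterns separated by the Rayleigh distance, the first zero of $J_1$ at 3.8317, and reports the dip-to-peak ratio, which comes out near 0.735:

```python
import numpy as np
from scipy.special import j1

def airy_psf(v):
    """Airy intensity PSF [2*J1(v)/v]^2 of a circular aperture."""
    v = np.asarray(v, dtype=float)
    out = np.ones_like(v)
    nz = v != 0
    out[nz] = (2.0 * j1(v[nz]) / v[nz]) ** 2
    return out

d = 3.8317                             # Rayleigh separation: first zero of J1
x = np.linspace(-2 * d, 2 * d, 20001)  # symmetric grid, x = 0 at the centre

# Incoherent, equally bright points: the intensities simply add
img = airy_psf(x - d / 2) + airy_psf(x + d / 2)

dip = img[len(x) // 2]                 # intensity at the midpoint
print(f"dip/peak = {dip / img.max():.3f}")  # ~0.735, i.e. a 26.5% drop
```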
The coherent Rayleigh limit for two-dimensional systems is 5.146 dimensionless diffraction units for point
objects of equal intensity. In the resultant image intensity distribution curve a smaller drop in intensity is
associated with a smaller limit of resolution. Several researchers, to suit various imaging situations, modified the
Rayleigh criterion. To suit the case of object points of unequal intensities, TOLANSKY [6] and CHATURVEDI and SODHA [7] modified the Rayleigh criterion. In the redefined Rayleigh criterion, the two object points are said to be just resolved if the contrast between the lower intensity peak $I_{LP}$ and the dip (saddle) point of minimum intensity $I_{Dip}$ satisfies
$$\frac{I_{Dip}}{I_{LP}} = 0.735 \qquad \text{or} \qquad \frac{I_{LP} - I_{Dip}}{I_{LP}} = 0.265 \qquad \ldots (2)$$
BHATNAGAR, SIROHI and SHARMA [5] have employed the above modified Rayleigh criterion for the case of
unequally bright object points. The corresponding value for two lines is 0.19.
IV. The Sparrow Criterion
The Rayleigh criterion cannot be applied for intensity spread functions having non-zero minima or
coherent systems or for unequally bright point objects. Sparrow recognized the limitations and the arbitrariness
involved in the Rayleigh criterion and observed that "as originally proposed, the Rayleigh criterion was not intended as a measure of the actual limit of resolution, but rather as an index of the relative merit of different instruments". Sparrow proposed an alternative criterion of resolution, which he called "the undulation condition". This is referred to as the Sparrow criterion. According to the Sparrow criterion, two
object points can just be resolved when the second derivative of the total intensity distribution in the diffraction
image of the two object points, vanishes at a point midway between the respective Gaussian image points. When
this condition is satisfied, the distance between the two object points gives the Sparrow limit of resolution,
hereafter to be referred to as SL. According to this criterion, two object points are said to be just resolved if in
the resultant intensity distribution curve, the central dip just vanishes. The separation between the object points
under these conditions gives the Sparrow Limit (SL).
When the actual separation between two object points, assumed to be of equal brightness, is larger than the critical limit (SL), the dip in the resultant intensity distribution curve is at the midpoint between the two Gaussian image points. As the actual separation between the object points is decreased, the upward concavity at the dip point decreases, and the dip just vanishes at a particular separation of the two object points. The separation between the object points under this condition of the vanishing dip gives the Sparrow limit of resolution $Z_0$.
The Rayleigh criterion implies a finite contrast in the image, while the Sparrow criterion leads to the limiting case of vanishingly small contrast. In its original context, the Sparrow criterion was applied to incoherent illumination; the immediate generalization to coherent illumination is due to LUNEBURG. The case of two object points of equal intensity is rare in practical imaging situations. In holographic image formation under partially coherent illumination, and in the defocused image of two points in partially coherent or coherent illumination, the object points are of unequal intensities. Realizing this aspect, ASAKURA [4]
introduced the “modified Sparrow criterion” to suit the situation of actual object points which are unequally
bright. This modified Sparrow criterion is relevant in such practical imaging systems.
The "modified Sparrow criterion" states that "the resolution is retained when the second derivative of the image intensity distribution vanishes at a certain point ($Z = Z_0$) between the two Gaussian image points, with the condition that this point $Z_0$ should also be a solution of the first derivative of the image intensity distribution becoming zero". The modified Sparrow criterion can be written mathematically as
$$\frac{\partial^2 I(Z)}{\partial Z^2} = 0 \quad \text{at } Z = Z_0 \qquad \text{……………(3)}$$

and

$$\frac{\partial I(Z)}{\partial Z} = 0 \quad \text{at } Z = Z_0.$$
When the two object points are of equal intensities and very well separated, in the resultant intensity
distribution curve, there will be a very well pronounced dip point, which is located at the centre between the
Gaussian image points. When the two object points are of unequal intensities, it is noticed that the dip point in
the resultant intensity distribution curve is not located midway between the two Gaussian image points. It is also
observed that as the difference between the intensities of the object points increases, the dip point is found to shift towards the lower peak in the intensity distribution curve.
As the two object points come closer, the dip disappears at a certain separation. This vanishing dip-
point becomes a point of inflection which is no longer a minimum or maximum point. At this point, both the
first and the second derivatives of the resultant intensity distribution become zero.
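As an illustration (our addition), the vanishing of the central dip can be located numerically for the classical case of two equally bright, incoherently illuminated points and an Airy pupil; the finite-difference step and the search bracket are our own choices.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

def airy(z):
    """Normalized Airy intensity (2 J1(z)/z)^2, with the z -> 0 limit = 1."""
    if abs(z) < 1e-12:
        return 1.0
    return (2.0 * j1(z) / z) ** 2

def d2I_mid(z0, h=1e-4):
    """Second derivative of the total intensity at the midpoint of two equal
    incoherent Airy PSFs separated by z0 (central finite difference)."""
    I = lambda z: airy(z - z0 / 2) + airy(z + z0 / 2)
    return (I(h) - 2.0 * I(0.0) + I(-h)) / h ** 2

# The Sparrow limit is the separation at which d2I_mid changes sign:
# negative (single hump) below the limit, positive (central dip) above it.
sparrow = brentq(d2I_mid, 2.0, 4.0)
print(f"Incoherent Sparrow limit, equal points, Airy pupil: Z0 = {sparrow:.3f}")
```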
V. Review Of Previous Works On TPR
The two-point resolution is, historically, one of the earliest physical parameters proposed to evaluate the performance of optical imaging systems. The various criteria of resolution that have been proposed from time to time by several authors are all arbitrary; hence, they do not set an absolute limit of resolution for an imaging system. The studies on the problem of two-point resolution of imaging systems were initiated by LORD RAYLEIGH [8]. Subsequently, this problem has attracted the attention of several researchers, who have investigated it in various imaging situations; hence, a vast amount of literature has been reported on this subject. The papers of CESINI et al. [9], BARAKAT [10], ASAKURA [4] and MILLS and THOMPSON [11] provide a very good review of the studies on this subject.
The survey of the literature reveals that both the Rayleigh and Sparrow criteria were modified to suit various imaging situations: CHATURVEDI and SODHA [7], ASAKURA [4], JAISWAL and BHOGRA [12], BHATNAGAR, SIROHI and SHARMA [5], etc. CARSWELL and RICHARD [13] suggested a criterion for coherent systems as an extension of the Rayleigh criterion. The Rayleigh criterion was based on the tacit assumption that the two object points are incoherent, and it was stated for object points of equal intensity. An alternative and more practical criterion, called the Sparrow criterion, for equally bright point objects in incoherent illumination was stated by Sparrow. ASAKURA [4] introduced the modified Sparrow criterion for the two-point resolution of unequally bright points and investigated the Sparrow limit in partially coherent illumination.
Two-point resolution studies for one-dimensional systems have been made by ROJAK [14] for intermediate states of coherence. NYYSSONEN and THOMPSON [15] have plotted and studied the actual intensity distribution in the image for the coherent and incoherent extremes. GRIMES and THOMPSON [16] discussed the two-point resolution with partially coherent light for equally bright object points. They studied the relation between the measurable and the real separations of the two object points and also verified it experimentally.
GUPTA, SIROHI and NAYYAR [17] used the Sparrow criterion and derived an expression for the limit of resolution of an annular aperture in partially coherent light. They also studied the variation of the critical resolution for various obscuration ratios and found a nearly linear relation. A few studies have also been reported on the problem of two-point resolution in microscopes. BASURAY [18] has studied the two-point resolution of phase objects in partially coherent light in ordinary microscopes. BHATNAGAR and SIROHI [19] have studied the effect of a centrally obstructed condenser on the resolution of a microscope.
MEHTA [20] employed the Sparrow criterion and investigated the dependence of the critical resolution on the coherence properties of the point sources, taking into account non-uniformity of illumination. He found that non-uniform illumination increases the just-resolvable separation. MEHTA, VIRDI and NAYYAR [21] studied the two-point resolution by a circular aperture employing non-uniform and non-symmetric illumination.
ASAKURA [4] for the first time introduced the modified Sparrow criterion to study the two–point resolution of
unequally bright points under partially coherent illumination for the Airy pupil. SODHA and AGARWAL [22]
discussed the dependence of the limit of resolution of telescopes on various factors like the intensity ratio,
background intensity and the ratio of the minimum to the lower maximum of the resultant intensity pattern of
the two objects. BHATNAGAR, SIROHI and SHARMA [5] made use of the modified Rayleigh criterion and
investigated the dependence of the limit of resolution, on the intensity ratio and the background intensity in
partially coherent light.
The literature is rich in studies of the effect of apodisation on the two-point resolution of imaging systems. BARAKAT and LEVIN [23] used apodisation to increase the two-point resolution in terms of the Sparrow criterion, for both coherent and incoherent cases. ASAKURA and UENO [24] also employed apodisation to increase two-point resolution and obtained the required pupil function by solving homogeneous Fredholm integral equations. SHANKARAIAH et al. [25] used Gaussian apodisation and studied the resolution of two unequally bright points in partially coherent illumination. NAYYAR and VERMA [26] have discussed the partially coherent two-point resolution of a Gaussian aperture making use of several resolution criteria. MAGIERA and MAGIERA [27] studied the partially coherent two-point resolution by Walsh-type apertures using the Sparrow criterion.
GRUBER and THOMPSON [28] have discussed the effect of apodisation in coherent imaging systems. SURENDAR et al. [29] have used Lanczos filters and studied the resolution of unequally bright points in partially coherent illumination. THOMPSON [30] has investigated diffraction by annular apertures with semi-transparent central regions that add a uniform phase, and found an improved two-point resolution. NAYYAR [31] has discussed two-point resolution employing both the Rayleigh and Sparrow criteria for semi-transparent π-phase annular apertures and for the annulus in partially coherent illumination. NAYYAR and VERMA [26] have investigated the effect of non-uniform and non-symmetric illumination on the two-point resolution of a microscope using a semi-transparent π-phase annular aperture.
There have been a few studies, NAYYAR [31] and Mc KECHNIE [32], on the two-point resolution of two anti-phase coherent point objects, with a theoretical prediction of an infinite degree of resolution which has been exploited in holographic spectroscopy. MILLS and THOMPSON [11] have combined apodisation and aberration and examined the Sparrow limit for spherical aberration, coma and defocus, both with and without apodisation. They employed Gaussian apodisers (Mc KECHNIE [32]). In this case, the value of the coherence factor $\gamma(Z_0)$ is neither 0 (incoherent illumination) nor 1 (perfectly coherent illumination); as $\gamma(Z_0)$ can assume any value in the range $0 \le \gamma(Z_0) \le 1$, the equation (V-c) will remain unchanged:
$$I(Z) = G^2(Z+B) + \alpha\,G^2(Z-B) + 2\sqrt{\alpha}\;\gamma(Z_0)\,G(Z+B)\,G(Z-B). \qquad \text{……………(4)}$$
Obviously, values of $\gamma(Z_0)$ close to 0 will behave more like an incoherent situation, whereas values of $\gamma(Z_0)$ close to 1 will behave more like a coherent situation.
A close examination of the computed intensity values shows that when the composite filter is used in the super-resolving region, the Rayleigh and Sparrow limits of resolution of this filter are found to be less than those for the apodised or the diffraction-limited systems. For this configuration of the composite filter, the limits of resolution are found to show a non-linear variation with the coherence parameter. Further, contrary to apodised and diffraction-limited systems, the influence of defocusing on resolution has been analyzed employing the modified Rayleigh criterion, and an increase in resolution with defocus was found in incoherent illumination. The influence of partially coherent illumination and spherical aberration on microscopic resolution has been studied by SOM [33].
It has been observed by ASAKURA [4] that in two-point resolution studies there are only two measurable quantities: the separation between the two peaks and the intensities of the peaks in the resultant image intensity distribution of a two-point object. For a perfect imaging system, these two quantities are normally expected to give the corresponding actual quantities of the object points. However, this is not found to be true (GRIMES and THOMPSON [16], GRUBER and THOMPSON [28], MILLS and THOMPSON [11]). The difference between the actual and measured separation (peak-to-peak distance in the image plane) of the two object points has been called the "mensuration error" (MILLS and THOMPSON [11]). They have also found that apodisation decreases this error but, at the same time, degrades the resolution limit. They also performed an experiment which confirmed the theoretical results.
VI. Formulation of Two Point Resolution
Our derivation will be based on the method by HOPKINS and BARHAM [34]. According to them,
coherence between the two points in the object plane must be exactly the same when the condensers have the
same numerical aperture for the two possible kinds of illumination viz., Kohler and critical illumination. In
critical illumination, any two points in the object plane are illuminated cophasally but with different amplitudes by a single element of the illuminant. On the other hand, in Kohler illumination, the coherent light from an element of the source illuminates the two points in the object plane with a different phase, but with the same amplitude. HOPKINS and BARHAM [34] had shown that precisely the same results are obtained for both
critical and Kohler illumination. This implies that the coherence between the two points in the object plane in
the two types of illumination must be the same as long as the ratio of the numerical aperture of the condenser to
that of the objective is the same in both cases, as has been mentioned earlier. This conclusion was in
agreement with that predicted by ZERNIKE [35]. Consequently, the expression for the total intensity
distributions at an arbitrary point in the image plane remains the same irrespective of the type of illumination
that is considered.
Figure 1 gives a schematic representation of the optical system for the partially coherent resolution of two unequally bright object points. The relative positions of the source plane, the object plane and the image plane, and also the positions of the condenser and the objective, are indicated in the figure. Let us consider two pinholes in the object plane which are illuminated by a condenser with a circular aperture. The image-forming objective likewise has a circular aperture.
It has to be mentioned that, hereafter, we necessarily introduce some variables whose symbols are the same as those used in Ref. [36], while the same variables may carry different symbols elsewhere. This has been necessitated by the large number of variables used in the present work and by the wish to retain the familiar symbols used in most of the standard reference books and the literature. Fortunately, no confusion will arise, so the differing symbols used for these variables in other works can safely be ignored.
Fig. 1. Optical system for the resolution of two unequally bright points.
Assuming critical illumination of the object plane, the resolvable separation between the two pin-holes
can be written as
$$d = \frac{K\,\lambda}{n\sin\alpha} \qquad \text{…………… (1)}$$
where $\alpha$ is the semi-angular aperture of the objective and $K$ is a constant which is a function of $\gamma$, the ratio of the numerical apertures of the condenser and the objective, i.e.

$$\gamma = \frac{N.A._c}{N.A._0}. \qquad \text{……………(2)}$$
When $\gamma \to 0$ the illumination corresponds to complete coherence, and when $\gamma \to \infty$ it corresponds to complete incoherence of the object plane. A finite value of $\gamma$ indicates that the illuminating beam is partially coherent. The magnification $m$ of the optical system can be expressed by

$$m = \frac{d_1}{d} \qquad \text{……………(3)}$$
where $d_1$ is a length in the image space, whose refractive index is $n_1$, and $d$ is the corresponding length in the object space, whose refractive index is $n$. If the ray makes angles $\alpha$ and $\alpha_1$ with the axis in the object and image spaces respectively, the magnification becomes
$$m = \frac{n\sin\alpha}{n_1\sin\alpha_1}. \qquad \text{…………… (4)}$$
Further, the variable appearing in the amplitude distributions of the diffraction pattern in the image plane is the co-ordinate distance $Z$, given by

$$Z = \frac{2\pi}{\lambda}\,d\,n\sin\alpha = \frac{2\pi}{\lambda}\,d_1\,n_1\sin\alpha_1 \qquad \text{……………(5)}$$
and the theoretical separation between the two pinholes $P_1$ and $P_2$ is given by

$$Z_0 = \frac{2\pi}{\lambda}\,L_0\,N.A._0 \qquad \text{……………(6)}$$

where $P_1$ and $P_2$ are the two apertures ($\overline{P_1P_2} = L_0$) whose images are $P_1'$ and $P_2'$. Assuming an element $ds$ of the source $S$, it is imaged by the condenser $C$ at $P(Y, Z)$; the amplitude in the image of $ds$ will be similar to that in the Airy disc with its centre at $P$. Let $\overline{PP_1} = L_1$ and $\overline{PP_2} = L_2$; thus the amplitude at $P_1$ due to $ds$ may be written as
$$A_1 = \frac{2\,J_1\!\left(\dfrac{2\pi}{\lambda}L_1\,N.A._c\right)}{\dfrac{2\pi}{\lambda}L_1\,N.A._c} = \frac{2\,J_1(\gamma r_1)}{\gamma r_1} \qquad \text{……………( 7)}$$

where

$$r_1 = \frac{2\pi}{\lambda}\,L_1\,N.A._0 \qquad \text{and} \qquad \gamma = \frac{N.A._c}{N.A._0},$$
and the amplitude at $P_2$ is given by

$$A_2 = \frac{2\,J_1(\gamma r_2)}{\gamma r_2} \qquad \text{……………(8)}$$

where

$$r_2 = \frac{2\pi}{\lambda}\,L_2\,N.A._0.$$
If $\alpha$ ($0 \le \alpha \le 1$) is the ratio between the intensities of the two point objects $P_1$ and $P_2$, then the amplitude transmitted at $P_2$ carries the additional factor $\sqrt{\alpha}$.
The diffraction images of the point objects $P_1$ and $P_2$ will be formed at $P_1'$ and $P_2'$, whose optical distances are $Z_1$ and $Z_2$ from the observation point $P'$ respectively; hence the resultant amplitude at $P'$ will be given by

$$A = A_1\,\frac{2J_1(Z_1)}{Z_1} + \sqrt{\alpha}\,A_2\,\frac{2J_1(Z_2)}{Z_2}. \qquad \text{…………… (9)}$$
But when the objective is apodised by a pupil filter, each point gives rise to a diffraction image whose normalized amplitude response to unit amplitude at the object point is given by

$$G(Z) = 2\int_0^1 f(r)\,J_0(Zr)\,r\,dr. \qquad \text{……………( 10)}$$
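As a quick numerical sanity check (our addition), $G(Z)$ can be evaluated by quadrature; for a clear pupil, $f(r) = 1$, it must reduce to the familiar Airy amplitude $2J_1(Z)/Z$. The Gaussian apodiser shown at the end is a hypothetical example.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

def G(Z, f=lambda r: 1.0):
    """Amplitude impulse response of eqn (10):
    G(Z) = 2 * integral_0^1 f(r) J0(Z r) r dr, for pupil filter f."""
    val, _ = quad(lambda r: f(r) * j0(Z * r) * r, 0.0, 1.0)
    return 2.0 * val

# Clear pupil: the quadrature should match the Airy amplitude 2 J1(Z)/Z.
for Z in (0.5, 2.0, 3.832):
    print(f"Z = {Z}:  quadrature {G(Z):+.6f}   Airy {2.0 * j1(Z) / Z:+.6f}")

# Hypothetical Gaussian apodiser f(r) = exp(-2 r^2):
print(G(2.0, f=lambda r: np.exp(-2.0 * r * r)))
```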
Obviously, the intensity at $P'$ due to the entire source $S$ is derived by integrating $|A|^2$ over the domain of $S$ and is given by

$$I(Z_1, Z_2) = \int_S \left[A_1\,G(Z_1) + \sqrt{\alpha}\,A_2\,G(Z_2)\right]^2 ds$$
$$= G^2(Z_1)\int_S A_1^2\,ds + \alpha\,G^2(Z_2)\int_S A_2^2\,ds + 2\sqrt{\alpha}\,G(Z_1)\,G(Z_2)\int_S A_1 A_2\,ds. \qquad \text{……………(11)}$$
Since the domain of S is of infinite extent, the geometrical image of the source will be large compared
to the distances of 1P and 2P .
Assuming the co-ordinates of the points $P_1$ and $P_2$ to be $(Y, 0)$ and $(-Y, 0)$ respectively, with $2Y = Z_0$, and $(Z, \varphi)$ as the polar co-ordinates of $P$, we can write

$$r_1^2 = Y^2 + Z^2 + 2YZ\cos\varphi \qquad \text{……………(12)}$$

and

$$r_2^2 = Y^2 + Z^2 - 2YZ\cos\varphi. \qquad \text{……………(13)}$$
Therefore,
$$\int_S A_1^2\,ds = \int_0^\infty\!\!\int_0^{2\pi} \frac{\left[2\,J_1\!\left(\gamma\sqrt{Y^2+Z^2+2YZ\cos\varphi}\right)\right]^2}{\gamma^2\left(Y^2+Z^2+2YZ\cos\varphi\right)}\; Z\,dZ\,d\varphi. \qquad \text{……………(14)}$$
When the origin is displaced to the point $(-Y, 0)$, the integration remains unchanged, as the domain of the integration extends to infinity. Therefore,

$$\int_S A_1^2\,ds = \int_0^\infty \frac{4\,J_1^2(\gamma Z)}{\gamma^2 Z^2}\,Z\,dZ \cdot \int_0^{2\pi} d\varphi. \qquad \text{……………( 15)}$$
Since

$$\int_0^\infty \frac{J_1^2(x)}{x}\,dx = \frac{1}{2},$$

eqn. (15) gives $\displaystyle\int_S A_1^2\,ds = \frac{4\pi}{\gamma^2}$; on similar lines, since $A_2$ has the same form as $A_1$, we can write

$$\int_S A_2^2\,ds = \frac{4\pi}{\gamma^2}. \qquad \text{……………(16)}$$
$$\int_S A_1 A_2\,ds = \int_0^\infty\!\!\int_0^{2\pi} \frac{2\,J_1\!\left(\gamma\sqrt{Y^2+Z^2+2YZ\cos\varphi}\right)}{\gamma\sqrt{Y^2+Z^2+2YZ\cos\varphi}} \cdot \frac{2\,J_1\!\left(\gamma\sqrt{Y^2+Z^2-2YZ\cos\varphi}\right)}{\gamma\sqrt{Y^2+Z^2-2YZ\cos\varphi}}\; Z\,dZ\,d\varphi \qquad \text{……………(17)}$$
which reduces, when the origin is displaced to the point $(-Y, 0)$ and $2Y$ is replaced by $Z_0$, to

$$\int_S A_1 A_2\,ds = \int_0^\infty\!\!\int_0^{2\pi} \frac{2\,J_1(\gamma Z)}{\gamma Z} \cdot \frac{2\,J_1\!\left(\gamma\sqrt{Z^2+Z_0^2-2ZZ_0\cos\varphi}\right)}{\gamma\sqrt{Z^2+Z_0^2-2ZZ_0\cos\varphi}}\; Z\,dZ\,d\varphi. \qquad \text{…………..( 18)}$$
Using Neumann's addition theorem for Bessel functions, the integrand of eqn. (18) can be expanded through

$$J_0(Q) = J_0(a)\,J_0(b) + 2\sum_{p=1}^{\infty} J_p(a)\,J_p(b)\cos p\varphi \qquad \text{……………(19)}$$

where

$$Q^2 = a^2 + b^2 - 2ab\cos\varphi.$$
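The expansion (19) is easy to verify numerically; the sketch below (our addition, with arbitrarily chosen arguments) compares the closed form against the truncated series.

```python
import numpy as np
from scipy.special import j0, jv

def lhs(a, b, phi):
    """J0(Q) with Q^2 = a^2 + b^2 - 2 a b cos(phi), as in eqn (19)."""
    Q = np.sqrt(a * a + b * b - 2.0 * a * b * np.cos(phi))
    return j0(Q)

def rhs(a, b, phi, pmax=40):
    """Truncated Neumann series J0(a)J0(b) + 2 sum_p Jp(a)Jp(b) cos(p phi)."""
    total = j0(a) * j0(b)
    for p in range(1, pmax + 1):
        total += 2.0 * jv(p, a) * jv(p, b) * np.cos(p * phi)
    return total

a, b, phi = 2.3, 1.7, 0.9
print(lhs(a, b, phi), rhs(a, b, phi))   # the two values should agree closely
```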
Differentiating eqn. (19) with respect to $\varphi$, and noting that $Q\,\partial Q/\partial\varphi = ab\sin\varphi$, we obtain

$$\frac{J_1(Q)}{Q}\,ab\sin\varphi = 2\sum_{p=1}^{\infty} p\,J_p(a)\,J_p(b)\sin p\varphi. \qquad \text{……………( 20)}$$
Thus,

$$\frac{J_1(Q)}{Q}\,b = \frac{2}{a}\sum_{p=1}^{\infty} \frac{p\sin p\varphi}{\sin\varphi}\,J_p(a)\,J_p(b).$$
Putting $a = \gamma Z$ and $b = \gamma Z_0$ and using eqn. (20), eqn. (18) can be written as

$$\int_S A_1 A_2\,ds = \frac{8}{\gamma^3 Z_0}\int_0^\infty \frac{J_1(\gamma Z)}{Z} \sum_{p=1}^{\infty} p\,J_p(\gamma Z)\,J_p(\gamma Z_0)\,\epsilon_p\,dZ \qquad \text{……………(21)}$$

where $\epsilon_p = \displaystyle\int_0^{2\pi}\frac{\sin p\varphi}{\sin\varphi}\,d\varphi$ takes the value $2\pi$ when $p$ is odd and the value zero when $p$ is even.
Thus, eqn. (21) becomes

$$\int_S A_1 A_2\,ds = \frac{16\pi}{\gamma^3 Z_0}\sum_{p=1,3,5,\dots} p\,J_p(\gamma Z_0)\int_0^\infty \frac{J_1(\gamma Z)\,J_p(\gamma Z)}{Z}\,dZ. \qquad \text{……………(22)}$$
The integral in eqn. (22) is one of Lommel's integrals. Its value is $\tfrac{1}{2}$ for $p = 1$ and zero for the other odd integral values of $p$.
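For completeness (our addition), these values follow from the Weber-Schafheitlin discontinuous integral, in the form we believe is being invoked here:

$$\int_0^\infty \frac{J_\mu(t)\,J_\nu(t)}{t}\,dt = \frac{2\sin\!\left[(\mu-\nu)\pi/2\right]}{\pi\,(\mu^2-\nu^2)} \quad (\mu \neq \nu), \qquad \int_0^\infty \frac{J_\nu^2(t)}{t}\,dt = \frac{1}{2\nu}.$$

With $\nu = 1$ and $\mu = p$, the case $p = 1$ gives $\tfrac{1}{2}$, while every other odd $p$ makes $\sin[(p-1)\pi/2]$ vanish.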
Thus,

$$\int_S A_1 A_2\,ds = \frac{4\pi}{\gamma^2}\cdot\frac{2\,J_1(\gamma Z_0)}{\gamma Z_0}. \qquad \text{……………(23)}$$
The total intensity at $P'$ in the image plane is obtained by substituting the values from eqns. (15), (16) and (23) into eqn. (11):

$$I(Z_1, Z_2) = \frac{4\pi}{\gamma^2}\left[G^2(Z_1) + \alpha\,G^2(Z_2) + 2\sqrt{\alpha}\,\frac{2J_1(\gamma Z_0)}{\gamma Z_0}\,G(Z_1)\,G(Z_2)\right]$$

and is written, after ignoring the constant factor $4\pi/\gamma^2$, as

$$I(Z_1, Z_2) = G^2(Z_1) + \alpha\,G^2(Z_2) + 2\sqrt{\alpha}\,\frac{2J_1(\gamma Z_0)}{\gamma Z_0}\,G(Z_1)\,G(Z_2) \qquad \text{……………(24)}$$
$$= G^2(Z_1) + \alpha\,G^2(Z_2) + 2\sqrt{\alpha}\,\gamma(Z_0)\,G(Z_1)\,G(Z_2) \qquad \text{……………(25)}$$

where

$$\gamma(Z_0) = \frac{2\,J_1(\gamma Z_0)}{\gamma Z_0}$$

is the coherence factor between the two object points.
When $\gamma(Z_0) = 0$, $J_1(\gamma Z_0) = 0$; then $\gamma Z_0$ is a root of $J_1$. This condition holds good when $P_1$ and $P_2$ are incoherently illuminated. Therefore, eqn. (25) reduces to

$$I(Z_1, Z_2) = G^2(Z_1) + \alpha\,G^2(Z_2), \qquad \text{……………(26)}$$
which corresponds, in particular, to incoherent illumination if $\gamma = 1$; $\gamma Z_0$ is then a non-zero root of $J_1(\gamma Z_0) = 0$, that is, the distance between the two point objects $P_1$ and $P_2$ is equal to the radius of one of the dark rings of the Airy pattern. The resolution of these point objects, incoherently illuminated when $N.A._c = N.A._0$ ($\gamma = 1$), is given by

$$d = \frac{0.61\,\lambda}{n\sin\alpha}. \qquad \text{……………(27)}$$
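The familiar factor 0.61 can be traced (a step we add for clarity) to the first non-zero root of $J_1$, $j_{1,1} \approx 3.832$: setting $Z_0 = \frac{2\pi}{\lambda}\,d\,n\sin\alpha = j_{1,1}$ in accordance with eqn. (5) gives

$$d = \frac{j_{1,1}}{2\pi}\cdot\frac{\lambda}{n\sin\alpha} \approx \frac{3.832}{6.283}\cdot\frac{\lambda}{n\sin\alpha} \approx \frac{0.61\,\lambda}{n\sin\alpha}.$$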
This is possible only when $2Y = Z_0$ and the two narrow apertures are illuminated incoherently. When the aperture of the condenser is made limitingly narrow, i.e. $\gamma \to 0$ and

$$\frac{2\,J_1(\gamma Z_0)}{\gamma Z_0} \to 1,$$

we get

$$I(Z_1, Z_2) = \left[G(Z_1) + \sqrt{\alpha}\,G(Z_2)\right]^2 \qquad \text{……………(28)}$$
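The passage from eqn. (25) to eqn. (28) is just a completed square (a step we spell out): with $\gamma(Z_0) = 1$,

$$G^2(Z_1) + \alpha\,G^2(Z_2) + 2\sqrt{\alpha}\,G(Z_1)\,G(Z_2) = \left[G(Z_1) + \sqrt{\alpha}\,G(Z_2)\right]^2,$$

i.e. in the fully coherent limit the amplitudes, rather than the intensities, add.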
Equation (28) implies that, though the distance between the two points is independent of the aperture of the condenser, both points are illuminated with coherent light; this case is obtained when $\gamma(Z_0) = 1$. Measuring the co-ordinate $x$ from the mid-point, the total resultant intensity along the line joining $P_1$ and $P_2$ is formulated as

$$I(x) = I(x+y) + \alpha\,I(x-y) + 2\sqrt{\alpha}\,\gamma(Z_0)\,G(x+y)\,G(x-y) \qquad \text{……………(29)}$$

where

$$I(x+y) = G^2(x+y); \qquad I(x-y) = G^2(x-y); \qquad y = Z_0/2. \qquad \text{……………( 30)}$$
It is found that the expression (29) can also be derived using the Kohler illumination method. The total intensity in the case of equally bright object points, assuming different coherence factors between them, has also been obtained by GRIMES and THOMPSON [16] and HOPKINS and BARHAM [34]. With $B = Z_0/2$, where $Z_0$ is the separation between the object points $P_1$ and $P_2$, the expression (29) obtained above is in agreement with the expression obtained by ASAKURA [4] for the total intensity in the image of two unequally bright points in partially coherent illumination for the Airy case. Asakura's expression is given by
$$I(X) = \left[\frac{2J_1(X+B)}{X+B}\right]^2 + \alpha\left[\frac{2J_1(X-B)}{X-B}\right]^2 + 2\sqrt{\alpha}\,\gamma\,\frac{2J_1(X+B)}{X+B}\cdot\frac{2J_1(X-B)}{X-B}. \qquad \text{… (31)}$$
Equation (29) is the same as equation (31) with the following notation:

$$G(Z_1) = \frac{2J_1(X+B)}{X+B}; \qquad G(Z_2) = \frac{2J_1(X-B)}{X-B}; \qquad 2B = Z_0; \qquad \gamma = \gamma(Z_0).$$
Equation (31), with a slight change of notation, may be rewritten as

$$I(Z) = G^2(Z+B) + \alpha\,G^2(Z-B) + 2\sqrt{\alpha}\,\gamma(Z_0)\,G(Z+B)\,G(Z-B) \qquad \text{……………(32)}$$
where $2B = Z_0$ is the actual separation between the object points, $\alpha$ is the ratio of the intensities of the object points, and $\gamma(Z_0)$ is the real part of the complex degree of coherence of illumination of the object points. $G(Z+B)$ and $G(Z-B)$ are the amplitude impulse response functions of the optical imaging system corresponding to the object points, each of which is situated at a distance of $Z_0/2$ on either side of the optical axis. The amplitude impulse response functions $G(Z \pm B)$ are given by
$$G(Z \pm B) = 2\int_0^1 f(r)\,J_0\!\left[(Z \pm B)\,r\right] r\,dr. \qquad \text{……………(33)}$$
The above expression gives the amplitude response function at the Gaussian focal plane. If $f(r) = 1$, it gives the amplitude impulse response function for the Airy pupil:
$$G(Z \pm B) = 2\int_0^1 J_0\!\left[(Z \pm B)\,r\right] r\,dr. \qquad \text{……………(34)}$$
The amplitude response function at a defocused plane specified by $Y$ is given by

$$G(Y, Z \pm B) = 2\int_0^1 f(r)\,\exp\!\left(\tfrac{1}{2}\,i\,Y r^2\right) J_0\!\left[(Z \pm B)\,r\right] r\,dr \qquad \text{……………(35)}$$
where $f(r)$ is the pupil filter of the optical system; it specifies the non-uniformity of amplitude transmission over the exit pupil.
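To tie the formulation together, the following sketch (our addition) evaluates eqns. (32)-(35) numerically. The parameter values, the default clear pupil, and the use of the real part of the cross term for the complex defocused response are illustrative assumptions, not the authors' prescriptions.

```python
import numpy as np
from scipy.special import j0, j1
from scipy.integrate import quad

def G(u, f=lambda r: 1.0, Y=0.0):
    """Amplitude response of eqns (33)-(35):
    G(Y, u) = 2 * integral_0^1 f(r) exp(i Y r^2 / 2) J0(u r) r dr.
    Y = 0 gives the in-focus response (33); f(r) = 1 gives the Airy case (34)."""
    re, _ = quad(lambda r: f(r) * np.cos(0.5 * Y * r * r) * j0(u * r) * r, 0, 1)
    im, _ = quad(lambda r: f(r) * np.sin(0.5 * Y * r * r) * j0(u * r) * r, 0, 1)
    return 2.0 * (re + 1j * im)

def intensity(Z, Z0, alpha, na_ratio, f=lambda r: 1.0, Y=0.0):
    """Total image intensity of two unequally bright points, eqn (32),
    with B = Z0/2 and coherence factor gamma(Z0) = 2 J1(g Z0)/(g Z0),
    g being the condenser-to-objective N.A. ratio."""
    B = 0.5 * Z0
    g = na_ratio * Z0
    coh = 1.0 if g == 0.0 else 2.0 * j1(g) / g
    Gp, Gm = G(Z + B, f, Y), G(Z - B, f, Y)
    return (abs(Gp) ** 2 + alpha * abs(Gm) ** 2
            + 2.0 * np.sqrt(alpha) * coh * (Gp * Gm.conjugate()).real)

# Illustrative in-focus scan across the image line: intensity ratio 0.5,
# separation Z0 = 4 dimensionless units, N.A. ratio 1 (gamma = 1).
for Z in np.linspace(-4.0, 4.0, 9):
    print(f"Z = {Z:+.1f}   I = {intensity(Z, Z0=4.0, alpha=0.5, na_ratio=1.0):.4f}")
```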
Acknowledgement
The authors are grateful to Prof. P.K. Mondal, Director, Mondal Institute of Optics (MIO), Hyderabad, Andhra Pradesh, India, for developing our interest in this topic.
References
[1] BARAKAT, R., Opt. Acta, vol. 17, 1969.
[2] RONCHI, V., et al., Atti. Fond. G. Ronchi, vol. 35, 1980.
[3] LORD RAYLEIGH, Collected Papers (Cambridge Univ. Press, Cambridge), vol. 3, 1902.
[4] ASAKURA, T., Nouv. Rev. Opt., vol. 5, 1974.
[5] BHATNAGAR, G.S., SIROHI, R.S. and SHARMA, S.K., Opt. Commun., vol. 3, 1971.
[6] TOLANSKY, S., "High Resolution Spectroscopy" (Methuen & Co., London), 1947.
[7] CHATURVEDI, K.C. and SODHA, M.S., Indian J. Phys., vol. 30, 1956.
[8] LORD RAYLEIGH, Collected Papers (Cambridge Univ. Press, Cambridge), vol. 3, 1902.
[9] CESINI, G., et al., J. Optics (Paris), vol. 10, 1979.
[10] BARAKAT, R., J. Opt. Soc. Am., vol. 52, 1962.
[11] MILLS, J.P. and THOMPSON, B.J., J. Opt. Soc. Am. A, vol. 3, 1986.
[12] JAISWAL, A.K. and BHOGRA, R.K., Optica Acta, vol. 21, 1974.
[13] CARSWELL, A.I. and RICHARD, C., Appl. Opt., vol. 4, 1965.
[14] ROJAK, F., M.S. Thesis, "Two Point Resolution with Partially Coherent Light", Lowell Technological Institute, Lowell, Mass., 1961.
[15] NYYSSONEN, D. and THOMPSON, B.J., J. Opt. Soc. Am., vol. 57, 1967.
[16] GRIMES, D.N. and THOMPSON, B.J., J. Opt. Soc. Am., vol. 57, 1967.
[17] GUPTA, B.N., SIROHI, R.S. and NAYYAR, V.P., Phys. Letters, vol. 33A, 1970.
[18] BASURAY, A., J. Opt. India, vol. 1, 1972.
[19] BHATNAGAR, G.S. and SIROHI, R.S., Optica Acta, vol. 18, 1971.
[20] MEHTA, B.L., Appl. Opt., vol. 13, 1974.
[21] MEHTA, B.L., VIRDI, S.P.S. and NAYYAR, V.P., Atti. Fond. G. Ronchi, vol. 26, 1971.
[22] SODHA, M.S. and AGARWAL, A.K., Optik, vol. 24, 1967.
[23] BARAKAT, R. and LEVIN, E., J. Opt. Soc. Am., vol. 53, 1963.
[24] ASAKURA, T. and UENO, T., J. Opt. (Paris), vol. 8, 1977.
[25] SHANKARAIAH, M., et al., Atti. Fond. G. Ronchi, vol. 37, 1982.
[26] NAYYAR, V.P. and VERMA, N.K., Appl. Opt., vol. 17, 1978.
[27] MAGIERA, A. and MAGIERA, L., Optica Applicata, vol. 14, 1984.
[28] GRUBER, L.S. and THOMPSON, B.G., Opt. Eng., vol. 13, 1974.
[29] SURENDAR, K., et al., Opt. India, vol. 22, 1993.
[30] THOMPSON, B.J., "Image Assessment and Specification" (Ed. D. Dutton, Proc. SPIE, California), vol. 46, 1974.
[31] NAYYAR, V.P., Nouv. Rev. Opt., vol. 5, 1974.
[32] Mc KECHNIE, T.S., Optica Acta, vol. 20, 1973.
[33] SOM, S.C., Opt. Acta, vol. 18, 1971.
[34] HOPKINS, H.H. and BARHAM, P.M., Proc. Phys. Soc., 1950.
[35] ZERNIKE, F., Physica, vol. 5, 1938.
[36] LIPSON, A., LIPSON, S.G. and LIPSON, H., "Optical Physics", 4th Ed., Cambridge University Press, Cambridge, 2011.