"Efficient time-domain back-projection focusing core for the image formation of very high resolution and highly squinted SAR spotlight data on scenes with strong topography variation" - Author(s): Francesco Tataranni, Giuseppe Disimino, Antonella Gallipoli, INNOVA Consorzio per l’Informatica e la Telematica (Italy); Paolo Inversi, Telespazio S.p.A. (Italy)
This document provides information on several remote sensing projects from IEEE 2015. It lists the titles, languages, and abstracts for 8 projects related to classification and analysis of hyperspectral and multispectral images. The projects focus on techniques such as sparse representation in tangent space, Gabor feature-based collaborative representation, level set evolutions for object extraction, and dimension reduction using spatial and spectral regularization.
The document discusses and compares various motion estimation methods used in video compression standards, including translational and affine motion models. It describes pixel-domain block matching and frequency-domain matching techniques, and details block-matching parameters such as search area size and sub-pixel precision, along with hierarchical and early-termination techniques to improve efficiency.
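The block-matching search described above can be sketched in a few lines. This is a toy full-search implementation with a sum-of-absolute-differences (SAD) cost; the function name and parameters are invented for this illustration, not taken from any codec standard:

```python
import numpy as np

def block_match_sad(ref, cur, block=8, search=4):
    """Full-search block matching: for each block of the current frame,
    find the motion vector into the reference frame minimizing SAD."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    sad = np.abs(blk - ref[y:y + block,
                                           x:x + block].astype(np.int32)).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

The hierarchical and early-termination techniques the document mentions prune exactly this exhaustive loop, e.g. by searching a coarse level first or stopping once the SAD falls below a threshold.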
This paper proposes an improved Semi-Global Matching (SGM) algorithm for stereo vision that introduces a "branch cost propagation" concept. This allows each path to actively search for and collect feature information, boosting meaningful signal energy and helping overcome noise. The authors implemented this "branch SGM" on an FPGA, finding it used 10% more resources but reduced error rates by 10-30% compared to standard SGM. Standard SGM is a widely used real-time stereo matching method that aggregates costs along multiple scanline paths, but can be noisy. The proposed method aims to enhance SGM's noise resistance for applications like autonomous vehicles.
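For reference, standard SGM's per-path cost aggregation, the step the proposed branch propagation modifies, can be sketched for a single left-to-right scanline. This is a toy NumPy illustration, not the paper's FPGA design:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=4.0):
    """Aggregate a matching-cost slice along one scanline direction.
    cost: (width, ndisp) array for a single image row. Implements
    L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                          min_k L(p-1,k)+P2) - min_k L(p-1,k)."""
    w, nd = cost.shape
    L = np.empty((w, nd), dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        m = prev.min()
        shift_m1 = np.concatenate(([np.inf], prev[:-1])) + P1  # from d-1
        shift_p1 = np.concatenate((prev[1:], [np.inf])) + P1   # from d+1
        L[x] = cost[x] + np.minimum.reduce(
            [prev, shift_m1, shift_p1, np.full(nd, m + P2)]) - m
    return L
```

In full SGM the costs of several such paths (typically 8 or 16 directions) are summed before the winner-take-all disparity selection; the noise sensitivity the paper targets comes from each path seeing only a single scanline of evidence.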
Building and road detection from large aerial imagery - Shunta Saito
This document presents a convolutional neural network approach for simultaneously detecting buildings and roads from 3-channel aerial imagery. The CNN is trained on image patches from a dataset of 147 aerial images and corresponding 3-channel label maps containing building, road, and other labels. Several CNN architectures are tested on 10 held-out images, with the basic architecture achieving the best precision of 0.8905 and 0.9241 for roads and buildings, respectively, outperforming a previous approach. The proposed method requires no pre-processing or hand-designed image features, as the CNN learns good feature extractors automatically through training.
The document discusses a maximum likelihood algorithm for accurately estimating the Doppler centroid from SAR data using natural point targets. It begins by motivating the need for an accurate estimation method without using expensive transponders. It then describes (1) using persistent point scatterers as targets, (2) a spotlight azimuth focusing technique to extract target spectra, and (3) maximizing the likelihood function to estimate the Doppler centroid. Results on simulated and real data show the estimate achieves the Cramer-Rao lower bound and improves SAR image quality when applied.
[unofficial] Pyramid Scene Parsing Network (CVPR 2017) - Shunta Saito
Pyramid Scene Parsing Network introduces the Pyramid Pooling Module to improve semantic segmentation. The module captures context at different regions and scales by performing average pooling at different pyramid levels on the final convolutional feature map. Experiments on ADE20K and PASCAL VOC datasets show the Pyramid Pooling Module improves mean Intersection-over-Union by over 4% compared to global average pooling, achieving state-of-the-art performance.
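The module's pooling step can be sketched with plain NumPy average pooling at the four pyramid levels used in the paper (1, 2, 3, 6). The upsampling and 1x1 convolutions that follow in the real module are omitted, and the function name is illustrative:

```python
import numpy as np

def pyramid_pooling(feature_map, levels=(1, 2, 3, 6)):
    """Average-pool a (C, H, W) feature map into n-by-n bins at each
    pyramid level, as in PSPNet's Pyramid Pooling Module (the subsequent
    upsampling and 1x1 convolutions are omitted in this sketch)."""
    C, H, W = feature_map.shape
    pooled = []
    for n in levels:
        out = np.empty((C, n, n))
        ys = np.linspace(0, H, n + 1).astype(int)
        xs = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                out[:, i, j] = feature_map[:, ys[i]:ys[i + 1],
                                           xs[j]:xs[j + 1]].mean(axis=(1, 2))
        pooled.append(out)
    return pooled
```

The 1x1 level is exactly global average pooling; the finer levels are what add the region-wise context that accounts for the reported mIoU gain.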
IRJET- Structured Compression Sensing Method for Massive MIMO-OFDM Systems - IRJET Journal
The document presents a structured compression sensing method for massive MIMO-OFDM systems called the priori-information-assisted adaptive structured subspace pursuit (PA-ASSP) algorithm. PA-ASSP aims to improve channel estimation accuracy with reduced complexity compared to existing algorithms such as adaptive structured subspace pursuit (ASSP). It initializes channel estimation using prior information and exploits the common sparsity of MIMO channels across OFDM symbols. Simulation results show PA-ASSP achieves better bit error rate and normalized mean square error performance than ASSP and other algorithms under different SNR levels.
The title alone makes this look like an interesting topic. The paper presented at today's deep learning paper reading group is DEAR: Deep Reinforcement Learning for Online Advertising Impression in Recommender Systems, an online advertising and recommendation system built on reinforcement learning. A few of the details are not publicly disclosed, but the ideas alone make it well worth hearing. From the basic concepts of reinforcement learning
to a detailed, in-depth review of the paper,
the presentation was prepared by 김창연 (Kim Chang-yeon) of the Fundamentals team!
As always, thank you in advance for your interest!
In addition, the deep learning paper reading group runs an open KakaoTalk chat room for listeners. Due to a recent increase in malicious spam-bot accounts, the room is now password-protected.
Please also check out the deep learning listeners' room!
Listeners' room link: https://open.kakao.com/o/gp6GHMMc
Listeners' room password: 0501
This document compares geometry-based Doppler ambiguity resolution methods for squint synthetic aperture radar (SAR) and presents an indirect scheme for estimating Doppler rate in low-contrast scenes. It introduces squint SAR geometry and the effects of incorrect Doppler parameters. It then describes conventional, iterative, and improved Radon transform geometry-based methods for resolving Doppler ambiguity, noting the improved schemes are faster. Finally, it presents a method to indirectly estimate Doppler rate in low-contrast scenes by first estimating it in high-contrast areas and using the inverse relationship between Doppler rate and range.
Focal Loss for Dense Object Detection proposes a novel focal loss function to address the extreme foreground-background class imbalance encountered in training dense object detectors. The focal loss focuses training on hard examples and prevents easy negatives from overwhelming the detector. RetinaNet, a simple dense detector designed with a ResNet-FPN backbone and focal loss, achieves state-of-the-art accuracy while running faster than existing two-stage detectors. Extensive experiments demonstrate the focal loss enables training highly accurate dense detectors on datasets with vast numbers of background examples like COCO.
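The focal loss itself is a one-line modification of cross-entropy, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t). A minimal NumPy version (illustrative, not RetinaNet's training code):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    p: predicted foreground probability; y: label in {0, 1}."""
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; as gamma grows, well-classified examples (p_t near 1) contribute almost nothing, which is what keeps the huge pool of easy negatives from overwhelming training.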
The Implementation of the Improved OMP for AIC Reconstruction Based on Parall... - Nxfee Innovation
This document presents a hardware implementation of an improved orthogonal matching pursuit (OMP) algorithm for signal reconstruction in analog-to-information converters based on compressive sensing. The proposed architecture reduces computational complexity and the number of iterations compared to the original OMP algorithm. It achieves a higher recovery signal-to-noise ratio of 31.04 dB. The design includes parallel complex multiplication, matrix inversion using the Goldschmidt algorithm, and signal estimation units. Implementation on a Xilinx Virtex6 FPGA shows the architecture uses only a small percentage of resources at 135.4 MHz, with a reconstruction time of 170 μs, faster than existing designs.
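The baseline OMP loop that such architectures accelerate is short in software form. This sketch (plain NumPy, real-valued, with none of the parallel multipliers or Goldschmidt inversion from the paper) shows the greedy select-and-refit structure:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-solve least squares on the
    selected support. Aims to recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef  # orthogonal to chosen columns
    x[support] = coef
    return x
```

The per-iteration least-squares refit is the expensive step; it is exactly the matrix inversion that the paper replaces with a Goldschmidt-based unit.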
Volume ray casting algorithms benefit greatly from the recent increase in GPU capability and power. In this paper, we present a novel memory-efficient ray casting algorithm for unstructured grids, implemented entirely on the GPU using a recent off-the-shelf nVidia graphics card. Our approach builds on a recent CPU ray casting algorithm, called VF-Ray, that considerably reduces the memory footprint while maintaining good performance. In addition to implementing VF-Ray in graphics hardware, we also propose a restructuring of its data structures. As a result, our algorithm is much faster than the original software version while using significantly less memory: it needed only half of the previous memory usage. Compared to other hardware-based ray casting algorithms, our approach used three to ten times less memory. These results made it possible for our GPU implementation to handle larger datasets than previous approaches.
We provide solutions in seismic data processing. Our focus is on research and development, applying state-of-the-art signal processing techniques and computational intelligence in areas such as multiple attenuation, velocity model building, and tomography.
A brief study of two papers on hue preservation and color reproduction.
[MMTH09] Color Correction for Tone Mapping
[KLLH11] Hue Preservation using Enhanced Integrated Multi-scale Retinex for Improved Color Correction
Multiuser MIMO Vector Perturbation Precoding - adeelrazi
This paper proposes methods for sum rate optimization in multi-user MIMO systems using vector perturbation precoding. It derives an expression for sum rate in terms of the average transmitted vector energy. It then uses this to obtain a high-SNR upper bound on sum rate and proposes an extension of vector perturbation that allocates different rates to different users. It also proposes a low-complexity user scheduling algorithm as a method for rate allocation.
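The core of vector perturbation can be sketched as a brute-force search over integer offset vectors that minimizes the transmit energy after channel inversion. A real system would use sphere encoding rather than exhaustive enumeration, and the modulus `tau` and search radius here are illustrative choices, not the paper's:

```python
import numpy as np
from itertools import product

def vector_perturbation(H, s, tau=4.0, radius=1):
    """Vector perturbation precoding (exhaustive toy search): choose an
    integer offset vector l minimizing ||H^{-1} (s + tau * l)||^2, the
    energy of the zero-forcing precoded transmit vector."""
    Hinv = np.linalg.inv(H)
    best_x, best_e = None, np.inf
    K = len(s)
    for l in product(range(-radius, radius + 1), repeat=K):
        x = Hinv @ (s + tau * np.array(l, dtype=float))
        e = np.real(np.vdot(x, x))  # transmit energy for this offset
        if e < best_e:
            best_e, best_x = e, x
    return best_x, best_e
```

Each receiver removes the perturbation with a modulo-tau operation, so the offset costs nothing at the receiver while taming the energy blow-up of plain channel inversion; the paper's sum-rate expression is driven by exactly this average transmitted vector energy.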
Like other fields of computer vision, image retrieval has been revolutionized by deep learning in recent years. Convolutional neural networks are now the tool of choice for computing feature representations of images. Many successful architectures employ global pooling layers to aggregate feature maps into a compact image representation. Using the standard training procedure based on backpropagation and gradient descent, the global pooling operation can itself be learned from the training data.
We review existing approaches to learned pooling and propose two new layers: A learnable, extended variant of LSE pooling and the generalized max pooling layer based on an aggregation function from classical computer vision.
Our experiments show that learned global pooling can improve performance of image retrieval networks compared to the average pooling baseline for both tasks. For writer identification, our generalized max pooling layer outperforms all other tested pooling layers. Our learnable LSE pooling performs better than global average pooling and yields the best rank-1 score in our experiments on the Market-1501 dataset.
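LSE (log-sum-exp) pooling, the operation the learnable layer extends, can be written in a few lines. Here the sharpness `r` is a fixed scalar, whereas the learnable variant described above trains it by backpropagation; this is a sketch, not the authors' code:

```python
import numpy as np

def lse_pool(x, r=1.0):
    """Log-sum-exp pooling over a flattened feature map:
    (1/r) * log( (1/N) * sum_i exp(r * x_i) ).
    Interpolates between average pooling (r -> 0) and max pooling
    (r -> infinity)."""
    x = np.asarray(x, dtype=float)
    m = (r * x).max()  # subtract the max for numerical stability
    return (m + np.log(np.exp(r * x - m).mean())) / r
```

Because the pooled value moves smoothly from the mean to the max as `r` grows, letting the network learn `r` amounts to learning where on that spectrum each feature channel should sit.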
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ... - IRJET Journal
This document proposes a novel blind super resolution method to improve the spatial resolution of real-life video sequences. The key aspects of the proposed method are:
1) It estimates blur without knowing the point spread function or noise statistics using a non-uniform interpolation super resolution method and multi-scale processing.
2) It uses a cost function with fidelity and regularization terms of a Huber-Markov random field to preserve edges and fine details in the reconstructed high resolution frames.
3) It performs masking to suppress artifacts from inaccurate motions, adaptively weighting the fidelity term at each iteration for faster convergence.
The method is tested on real-life videos with complex motions, objects, and brightness changes.
The document discusses a method for reducing peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems using compandors. It proposes using a new class of compandors with an approximation of the nonlinear optimal compression function using first-degree spline functions to reduce complexity. Simulation results showed that using the suggested compandor model in an OFDM system improves performance by reducing PAPR.
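PAPR, the quantity the compander reduces, is straightforward to measure on a simulated OFDM symbol. This sketch (NumPy, with the standard frequency-domain zero-padding trick for oversampling) is illustrative and not the paper's system model:

```python
import numpy as np

def papr_db(subcarrier_symbols, oversample=4):
    """PAPR of one OFDM symbol: oversampled IFFT of the frequency-domain
    symbols, then peak power over mean power in dB."""
    N = len(subcarrier_symbols)
    # Insert zeros in the middle of the spectrum to oversample in time
    padded = np.concatenate([subcarrier_symbols[:N // 2],
                             np.zeros((oversample - 1) * N, dtype=complex),
                             subcarrier_symbols[N // 2:]])
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())
```

A single active subcarrier has a constant envelope (0 dB PAPR), while N coherently aligned subcarriers reach the worst case of 10·log10(N) dB; a compander compresses the rare high peaks of this distribution before the power amplifier.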
This document discusses video compression techniques. It begins by explaining the large file sizes required for uncompressed video frames and motivates the need for compression. It then describes some key techniques for video compression including spatial and temporal correlation, temporal modeling using predicted and residual frames, block-based motion estimation, and motion compensation. Specific algorithms discussed are mean absolute difference, mean squared error for cost functions, and adaptive rood pattern search for motion estimation. Examples of motion between video frames are provided. The techniques aim to reduce file sizes by removing spatial and temporal redundancy between frames.
This paper presents a 3D pilot-aided multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) channel estimation (CE) scheme for Digital Video Broadcasting-T2 (DVB-T2), evaluated for five proposed block and comb pilot pattern models on different antenna configurations. The effects of multi-transceiver antennas on channel estimation are addressed with different pilot positions in frequency, time, and the vertical (spatial) direction of the frame. The paper first focuses on the design of the five proposed spatially correlated pilot pattern models with optimized pilot overhead. It then compares the performance of Least Squares (LS) and Linear Minimum Mean Square Error (LMMSE), two linear channel estimators, for the 3D pilot-aided patterns on different antenna configurations in terms of bit error rate. Simulation results are shown for Rayleigh fading channel environments, and the 3x4 MIMO configuration is recommended as the most suitable in these environments.
DETERMINATION OF SPATIAL RESOLUTION IN COMPUTED RADIOGRAPHY (CR) BY COMPARING... - AM Publications
The QC (quality control) testing of spatial resolution in CR (computed radiography) using the ESF-PSF and IP-PSF methods has been investigated. The object used in this study is a copper phantom, 15 cm in both length and width and 1 mm in thickness. The phantom was exposed at several tube voltages for the CR system: 50 kV, 60 kV, 70 kV, and 80 kV. Four current settings were used for each voltage: 1.6 mAs, 4 mAs, 16 mAs, and 32 mAs. The digital image data acquired are in DICOM format. The spatial resolution of each image was measured by calculating the FWHM value as an indicator of good or poor spatial resolution. FWHM was measured using the MATLAB R2015b and CorelDRAW X7 programs, obtained from a Gaussian function that provides complete information on the blurring effects occurring in the images. The results show that the best spatial resolution for the ESF-PSF method is 2.50 lp/mm and the worst is 2.36 lp/mm, while for the IP-PSF method the best is 2.85 lp/mm and the worst is 1.01 lp/mm. The spatial resolution is proportional to the tube voltage: a higher voltage yields a higher spatial resolution. However, the spatial resolution decreased with increasing current of the mobile X-ray tube.
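The FWHM measurement used in the study can be reproduced numerically: for a Gaussian profile FWHM = 2*sqrt(2 ln 2)*sigma, and on sampled data it can be read off at the half-maximum crossings. Both helpers below are illustrative, not the study's MATLAB scripts:

```python
import numpy as np

def fwhm_from_sigma(sigma):
    """FWHM of a Gaussian profile: 2*sqrt(2*ln 2)*sigma (about 2.355*sigma)."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def fwhm_from_profile(x, y):
    """Estimate FWHM from a sampled line profile by linearly interpolating
    the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(a, b):  # x where the segment between samples a and b hits half
        return x[a] + (half - y[a]) * (x[b] - x[a]) / (y[b] - y[a])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(y) - 1 else cross(i1, i1 + 1)
    return right - left
```

A wider Gaussian PSF (larger FWHM) means a blurrier image, which is why the study reports FWHM as its indicator of good or poor spatial resolution.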
IRJET- Design and Analysis of Passive Multi-Static Radar System - IRJET Journal
This document presents a new algorithm for passive multi-static radar detection called the Range-Doppler Transformation. The algorithm relies on large networks of inexpensive radar receivers to detect targets. It transforms target detections in the Range-Doppler domain at each receiver into ellipses in the spatial domain; these ellipses are intersected, and the point of highest consensus is identified as the target location. The algorithm is more accurate, more robust to synchronization errors, and has better time complexity than current passive radar detection methods. It was tested in simulations with moderate success, locating targets within the error bounds of the simulation.
IRJET- Performance Analysis of IP Over Optical CDMA System based on RD Code - IRJET Journal
This document presents a performance analysis of an IP over optical CDMA network system based on a random diagonal (RD) code. It proposes using spectral amplitude coding OCDMA to directly connect the IP layer to the optical layer, eliminating intermediate layers and reducing overhead. The system architecture, design steps, and simulation setup are described. Simulation results using OptiSystem show that bit error rate increases with the number of simultaneous users and data transmission capacity decreases with transmission distance as expected. The RD code OCDMA system provides a potential solution for next-generation networks by enabling intelligent functions and advanced services at the optical layer.
A Review on Airlight Estimation Haze Removal Algorithms - IRJET Journal
This document reviews algorithms for estimating airlight to remove haze from images. It discusses how haze degrades image quality by attenuating light reflected from objects and adding atmospheric light. Common haze removal techniques rely on an atmospheric scattering model. The dark channel prior method estimates atmospheric light using the fact that, in haze-free images, at least one color channel has some pixels with very low intensities. Bilateral, trilateral, and CLAHE filters can then be used as post-processing steps to improve results. The document aims to develop new airlight estimation methods with lower computational complexity.
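The dark channel prior step described above is compact enough to sketch directly. The patch size and top-fraction defaults are common choices from the literature, and this brute-force minimum filter is illustrative rather than an optimized implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over the color channels,
    followed by a local minimum filter over a patch neighborhood."""
    mins = img.min(axis=2)
    H, W = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    dark = np.empty_like(mins)
    for y in range(H):
        for x in range(W):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def estimate_airlight(img, dark, top=0.001):
    """Airlight estimate: among the brightest 0.1% of dark-channel pixels
    (the haziest regions), take the most intense color in the input image."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]
```

In a haze-free image the dark channel is close to zero almost everywhere, so bright dark-channel regions flag dense haze; that is why the brightest dark-channel pixels are a robust place to read off the atmospheric light.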
The document discusses a maximum likelihood algorithm for accurately estimating the Doppler centroid from SAR data using natural point targets. It begins by motivating the need for an accurate estimation method without using expensive transponders. It then describes (1) using persistent point scatterers as targets, (2) a spotlight azimuth focusing technique to extract target spectra, and (3) maximizing the likelihood function to estimate the Doppler centroid. Results on simulated and real data show the estimate achieves the Cramer-Rao lower bound and improves SAR image quality when applied.
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)Shunta Saito
Pyramid Scene Parsing Network introduces the Pyramid Pooling Module to improve semantic segmentation. The module captures context at different regions and scales by performing average pooling at different pyramid levels on the final convolutional feature map. Experiments on ADE20K and PASCAL VOC datasets show the Pyramid Pooling Module improves mean Intersection-over-Union by over 4% compared to global average pooling, achieving state-of-the-art performance.
IRJET- Structured Compression Sensing Method for Massive MIMO-OFDM SystemsIRJET Journal
The document presents a structured compression sensing method for massive MIMO-OFDM systems called Priori information-assisted adaptive structured subspace pursuit (PA-ASSP) algorithm. PA-ASSP aims to improve channel estimation accuracy with reduced complexity compared to existing algorithms like adaptive structured subspace pursuit (ASSP). It initializes channel estimation using prior information and exploits common sparsity of MIMO channels across OFDM symbols. Simulation results show PA-ASSP achieves better bit error rate and normalized mean square error performance than ASSP and other algorithms under different SNR levels.
논문 제목부터 재미있어 보이는 주제 입니다. 오늘 딥러닝 논문읽기 모임에서 소개드릴 논문은 DEAR: Deep Reinforcement Learning for Online Advertising Impression in Recommender Systems, 강화학습을 이용한 온라인 추천 시스템 입니다. 비공개 된 정보들이 몇가지가 있지만, 아이디어면에서 여러분들이 충분히 재밌게 들으실수 있습니다. 강화학습의 기본적인 개념부터,
논문에 대한 디테일하고 깊이 있는 리뷰를
펀디멘탈팀 김창연 님이 도와주셨습니다!
오늘도 많은 관심 미리 감사드립니다!
추가로 .. 딥러닝 논문읽기 모임은 청강방 오픈채팅 방을 운영하고 있습니다. 최근 악성 홍보 봇 계정이 늘어나 방을 비밀번호를 걸어두게 되었습니다
딥러닝 청강방도 많은 관심 부탁드립니다!
청강방 링크 : https://open.kakao.com/o/gp6GHMMc
청강방 비밀번호 : 0501
This document compares geometry-based Doppler ambiguity resolution methods for squint synthetic aperture radar (SAR) and presents an indirect scheme for estimating Doppler rate in low-contrast scenes. It introduces squint SAR geometry and the effects of incorrect Doppler parameters. It then describes conventional, iterative, and improved Radon transform geometry-based methods for resolving Doppler ambiguity, noting the improved schemes are faster. Finally, it presents a method to indirectly estimate Doppler rate in low-contrast scenes by first estimating it in high-contrast areas and using the inverse relationship between Doppler rate and range.
Focal Loss for Dense Object Detection proposes a novel focal loss function to address the extreme foreground-background class imbalance encountered in training dense object detectors. The focal loss focuses training on hard examples and prevents easy negatives from overwhelming the detector. RetinaNet, a simple dense detector designed with a ResNet-FPN backbone and focal loss, achieves state-of-the-art accuracy while running faster than existing two-stage detectors. Extensive experiments demonstrate the focal loss enables training highly accurate dense detectors on datasets with vast numbers of background examples like COCO.
The implementation of the improved omp for aic reconstruction based on parall...Nxfee Innovation
This document presents a hardware implementation of an improved orthogonal matching pursuit (OMP) algorithm for signal reconstruction in analog-to-information converters based on compressive sensing. The proposed architecture reduces computational complexity and the number of iterations compared to the original OMP algorithm. It achieves a higher recovery signal-to-noise ratio of 31.04 dB. The design includes parallel complex multiplication, matrix inversion using the Goldschmidt algorithm, and signal estimation units. Implementation on a Xilinx Virtex6 FPGA shows the architecture uses a few percentage of resources at 135.4 MHz with a reconstruction time of 170 μs, faster than existing designs.
Volume ray casting algorithms benefit greatly with recent increase of GPU capabilities and power. In this paper,
we present a novel memory efficient ray casting algorithm for unstructured grids completely implemented on GPU
using a recent off-the-shelf nVidia graphics card. Our approach is built upon a recent CPU ray casting algorithm,
called VF-Ray, that considerably reduces the memory footprint while keeping good performance. In addition to
the implementation of VF-Ray in the graphics hardware, we also propose a restructuring in its data structures. As
a result, our algorithm is much faster than the original software version, while using significantly less memory, it
needed only one-half of its previous memory usage. Comparing our GPU implementation to other hardware-based
ray casting algorithms, our approach used between three to ten times less memory. These results made it possible
for our GPU implementation to handle larger datasets on GPU than previous approaches.
We provide solutions in seismic data processing. Our focus is on research and development by applying state of the art signal processing techniques and computational intelligence in areas such as multiple attenuation, velocity model building and tomography.
We provide solutions in seismic data processing. Our focus is on research and development by applying state of the art signal processing techniques and computational intelligence in areas such as multiple attenuation, velocity model building and tomography.
A brief study of 2 papers in hue preservation and color reproduction.
[MMTH09] Color Correction for Tone Mapping
[KLLH11] Hue Preservation using Enhanced Integrated Multi-scale Retinex for Improved Color Correction
Multiuser MIMO Vector Perturbation Precodingadeelrazi
This paper proposes methods for sum rate optimization in multi-user MIMO systems using vector perturbation precoding. It derives an expression for sum rate in terms of the average transmitted vector energy. It then uses this to obtain a high-SNR upper bound on sum rate and proposes an extension of vector perturbation that allocates different rates to different users. It also proposes a low-complexity user scheduling algorithm as a method for rate allocation.
Like other fields of computer vision, image retrieval has been
revolutionized by deep learning in recent years. Convolutional neural networks are now the tool of choice for computing feature representations of images. Many successful architectures employ global pooling layers to aggregate feature maps to a compact image representation. Using the neural network training procedure based on backpropagation and gradient descent methods, we can learn the global pooling operation from the training data.
We review existing approaches to learned pooling and propose two new layers: A learnable, extended variant of LSE pooling and the generalized max pooling layer based on an aggregation function from classical computer vision.
Our experiments show that learned global pooling can improve performance of image retrieval networks compared to the average pooling baseline for both tasks. For writer identification, our generalized max pooling layer outperforms all other tested pooling layers. Our learnable LSE pooling performs better than global average pooling and yields the best rank-1 score in our experiments on the Market-1501 dataset.
A Novel Blind SR Method to Improve the Spatial Resolution of Real Life Video ...IRJET Journal
This document proposes a novel blind super resolution method to improve the spatial resolution of real-life video sequences. The key aspects of the proposed method are:
1) It estimates blur without knowing the point spread function or noise statistics using a non-uniform interpolation super resolution method and multi-scale processing.
2) It uses a cost function with fidelity and regularization terms of a Huber-Markov random field to preserve edges and fine details in the reconstructed high resolution frames.
3) It performs masking to suppress artifacts from inaccurate motions, adaptively weighting the fidelity term at each iteration for faster convergence.
The method is tested on real-life videos with complex motions, objects, and brightness changes, showing
The document discusses a method for reducing peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems using compandors. It proposes using a new class of compandors with an approximation of the nonlinear optimal compression function using first-degree spline functions to reduce complexity. Simulation results showed that using the suggested compandor model in an OFDM system improves performance by reducing PAPR.
This document discusses video compression techniques. It begins by explaining the large file sizes required for uncompressed video frames and motivates the need for compression. It then describes some key techniques for video compression including spatial and temporal correlation, temporal modeling using predicted and residual frames, block-based motion estimation, and motion compensation. Specific algorithms discussed are mean absolute difference, mean squared error for cost functions, and adaptive rood pattern search for motion estimation. Examples of motion between video frames are provided. The techniques aim to reduce file sizes by removing spatial and temporal redundancy between frames.
This paper aims, a 3D-Pilot Aided Multi-Input Multi-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Channel Estimation (CE) for Digital Video Broadcasting -T2 (DVB-T2) for the 5 different proposed block and comb pilot patterns model and performed on different antenna configuration. The effects of multi-transceiver antenna on channel estimation are addressed with different pilot position in frequency, time and the vertical direction of spatial domain framing. This paper first focus on designing of 5-different proposed spatial correlated pilot pattern model with optimization of pilot overhead. Then it demonstrates the performance comparison of Least Square (LS) & Linear Minimum Mean Square Error (LMMSE), two linear channel estimators for 3D-Pilot Aided patterns on different antenna configurations in terms of Bit Error Rate. The simulation results are shown for Rayleigh fading noise channel environments. Also, 3x4 MIMO configuration is recommended as the most suitable configuration in this noise channel environments.
DETERMINATION OF SPATIAL RESOLUTION IN COMPUTED RADIOGRAPHY (CR) BY COMPARING...AM Publications
The QC (Quality Control) testing of spatial resolution in CR (Computed Radiography) using ESF-PSF and IP-PSF methods has been investigated. The object used in this study is a phantom made of copper with 15 cm both in lenght and widht, and 1 mm in thickness. The exposure to phantom was occured with some variation of voltage, i.e. 50 kV, 60 kV, 70 kV and 80 kV for CR system. Current variation wass performed by four times for each voltage, i.e. 1.6 mAs; 4 mAs; 16 mAs and 32 mAs. Digital image data used for the acquisition is in the DICOM format. Measurement of image's spatial resolution wass performed by calculate the value of FWHM as an indicator of good or poor spatial resolution of images. Measurement of FWHM value has performed by using MATLAB R2015b and Corel Draw X7 programs. The FWHM value was obtained from gaussian function which provides a complete information on opaqueness effects that occur in images. The results showed that the best value of spatial resolution for the ESF-PSF methode is 2.50 lp/mm and the worst value is 2.36 lp/mm, while for the best resolution using IP-PSF is 2.85 lp/mm and worst is 1.01 lp/mm. The value of spatial resolution is proportional to the voltage of the tube, where the higher voltage provides the higher value of spatial resolution. But the value of spatial resolution has decreased with the current variation due to the higher current of mobile X-ray's tube.
IRJET- Design and Analysis of Passive Multi-Static Radar SystemIRJET Journal
This document presents a new algorithm for passive multi-static radar detection called Range-Doppler Transformation. The algorithm relies on large networks of inexpensive radar receivers to detect targets. It transforms target detections in the Range-Doppler domain at each receiver into ellipses in the spatial domain. These ellipses are intersected, and the point of highest consensus is identified as the target location. The algorithm is more accurate, more robust to synchronization errors, and has better time complexity than current passive radar detection methods. It was tested in simulations with moderate success, locating targets within the error bounds of the simulation.
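The ellipse-consensus idea can be sketched on a grid (a toy geometry with one transmitter, three receivers, and a made-up target position; the grid spacing and tolerance are assumptions, not the paper's parameters):

```python
import numpy as np

# Hypothetical geometry: one transmitter, three receivers, one target.
tx = np.array([0.0, 0.0])
rxs = [np.array([100.0, 0.0]), np.array([0.0, 80.0]), np.array([90.0, 90.0])]
target = np.array([40.0, 30.0])

# Each receiver measures the bistatic range sum |target-tx| + |target-rx|,
# which constrains the target to an ellipse with foci at tx and rx.
range_sums = [np.linalg.norm(target - tx) + np.linalg.norm(target - rx)
              for rx in rxs]

# Accumulate consensus on a spatial grid: each cell gets one vote per
# receiver whose ellipse passes (within a tolerance) through it.
xs = np.linspace(0, 120, 241)
ys = np.linspace(0, 120, 241)
X, Y = np.meshgrid(xs, ys)
votes = np.zeros_like(X)
d_tx = np.hypot(X - tx[0], Y - tx[1])
for rx, rs in zip(rxs, range_sums):
    d_rx = np.hypot(X - rx[0], Y - rx[1])
    votes += (np.abs(d_tx + d_rx - rs) < 0.5).astype(float)

# The point of highest consensus is the estimated target location.
iy, ix = np.unravel_index(np.argmax(votes), votes.shape)
estimate = np.array([xs[ix], ys[iy]])
```

With noisy or mis-synchronized range sums the vote peak broadens but its location degrades gracefully, which is the robustness property the abstract claims.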
IRJET- Performance Analysis of IP Over Optical CDMA System based on RD CodeIRJET Journal
This document presents a performance analysis of an IP over optical CDMA network system based on a random diagonal (RD) code. It proposes using spectral amplitude coding OCDMA to directly connect the IP layer to the optical layer, eliminating intermediate layers and reducing overhead. The system architecture, design steps, and simulation setup are described. Simulation results using OptiSystem show that bit error rate increases with the number of simultaneous users and data transmission capacity decreases with transmission distance as expected. The RD code OCDMA system provides a potential solution for next-generation networks by enabling intelligent functions and advanced services at the optical layer.
A Review on Airlight Estimation Haze Removal AlgorithmsIRJET Journal
This document reviews algorithms for estimating airlight to remove haze from images. It discusses how haze degrades image quality by attenuating light reflected from objects and adding atmospheric light. Common haze removal techniques rely on an atmospheric scattering model. The dark channel prior method estimates atmospheric light using the fact that, in haze-free images, at least one color channel will have some pixels with very low intensities. Bilateral, trilateral, and CLAHE filters can then be used as post-processing steps to improve results. The document aims to develop new airlight estimation methods with lower computational complexity.
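The dark channel prior airlight estimate can be sketched directly (a minimal illustration on a synthetic image; the patch size and brightest-0.1% rule follow the common formulation rather than any one paper in this review):

```python
import numpy as np

def dark_channel(img, patch=5):
    """Per-pixel minimum over the color channels, followed by a min filter
    over a patch x patch neighborhood (edge-padded)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_airlight(img, patch=5, frac=0.001):
    """Average the image over the brightest `frac` of dark-channel pixels."""
    dc = dark_channel(img, patch).ravel()
    k = max(1, int(frac * dc.size))
    idx = np.argsort(dc)[-k:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

# Synthetic scene: a dim haze-free region (one channel near zero) plus a
# bright "sky" block whose color plays the role of the atmospheric light.
img = np.full((40, 40, 3), 0.3)
img[..., 0] = 0.05                    # haze-free pixels have a dark channel
img[:20, :20] = (0.9, 0.9, 0.95)      # sky block ~ airlight
airlight = estimate_airlight(img)
```

Only sky-like pixels survive the channel-and-patch minimum, so the average lands on the sky color, which is the behaviour the prior exploits.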
Video Stitching using Improved RANSAC and SIFTIRJET Journal
1. The document discusses techniques for stitching multiple video frames into a panoramic video using Scale-Invariant Feature Transform (SIFT) and an improved RANSAC algorithm.
2. Key points and feature descriptors are extracted from frames using SIFT to find correspondences between frames. The improved RANSAC algorithm is used to estimate homography matrices between frames and filter outlier matches.
3. Frames are blended together to compensate for exposure differences and misalignments before being mapped to a reference plane to create the panoramic video mosaic. The algorithm aims to produce a high quality panoramic video in real-time.
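The improved-RANSAC details are specific to the paper, but the baseline loop it builds on — sample four correspondences, fit a homography by DLT, keep the largest inlier set — can be sketched as follows (synthetic matches stand in for SIFT keypoints):

```python
import numpy as np

rng = np.random.default_rng(0)

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 correspondences (basic DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=2.0):
    """Plain RANSAC: sample 4 matches, fit, keep the best inlier count."""
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# Synthetic matches: 40 inliers under a known homography plus 10 outliers.
H_true = np.array([[1.1, 0.05, 10.0], [-0.03, 0.95, -5.0], [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 300, (50, 2))
dst = project(H_true, src)
dst[40:] += rng.uniform(50, 100, (10, 2))    # corrupt 10 matches
H_est, n_inliers = ransac_homography(src, dst)
```

The paper's improvement concerns how candidate samples are chosen and filtered; the recovered homography is then used to warp frames onto the reference plane before blending.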
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th...IRJET Journal
This document provides a summary and analysis of various single-image defogging techniques. It begins with an abstract outlining how different fog removal algorithms detect and remove fog to improve image visibility. It then reviews several fog removal techniques from the research literature, including using fog density perception to estimate transmission maps, enhancing contrast using dark channel priors, combining dark channel priors with fuzzy logic for efficiency, and combining dark channel priors with guided filters to extract transmission maps and enhance images. The document aims to analyze and compare different techniques for efficiently removing fog from digital images.
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry [IROS2021]KenjiKoide1
Adaptive Hyper-Parameter Tuning for Black-box LiDAR Odometry
Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko Banno
Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2021), pp. 7708-7714, Prague, Czech Republic, Sep., 2021
https://staff.aist.go.jp/k.koide/
IRJET- A Digital Down Converter on Zynq SoCIRJET Journal
This document describes the design and implementation of a digital down converter (DDC) on a Zynq System on Chip (SoC). Key points:
- The DDC is designed for airborne radar receivers to downconvert high sample rate digitized signals to a lower frequency for easier processing.
- The DDC implementation includes a direct digital synthesizer to generate input signals, complex multiplication for mixing, and a two-stage decimation and filtering process.
- The design is implemented on a Zynq SoC which provides the flexibility of a processor and programmability of an FPGA.
- Results show the DDC design achieves significant improvements in resource utilization compared to a full
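The mixing and decimation chain in the bullets above can be sketched in a few lines (a numerical illustration with made-up sample rates, not the Zynq implementation; a single moving-average stage stands in for the real two-stage decimation filters):

```python
import numpy as np

fs = 200e6        # input sample rate (hypothetical)
f_if = 60e6       # intermediate frequency of the digitized radar signal
decim = 10        # overall decimation factor (e.g. two stages of 5 and 2)

t = np.arange(4096) / fs
signal = np.cos(2 * np.pi * f_if * t)        # real IF input

# 1) Direct digital synthesizer: a complex local oscillator at the IF.
lo = np.exp(-2j * np.pi * f_if * t)

# 2) Complex multiplication (mixing) shifts the band of interest to 0 Hz.
mixed = signal * lo

# 3) Low-pass filter (moving average as a crude stand-in for the real
#    decimation filters), then downsample to the lower output rate.
taps = np.ones(32) / 32
baseband = np.convolve(mixed, taps, mode="same")[::decim]
```

After mixing, the wanted component sits at DC (value 0.5 here) while the image at twice the IF is attenuated by the low-pass stage before the sample rate is reduced.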
Adaptive Neuro-Fuzzy Inference System (ANFIS) for segmentation of image ROI a...IRJET Journal
The document discusses a proposed system for concurrently performing image segmentation and image retrieval from segmented regions of interest (ROIs). It uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) for segmenting ROIs from images and a probabilistic generative model for retrieving similar images based on keypoints detected within ROIs using the MP-KDD algorithm. The system is able to perform retrieval using features from multiple ROIs within a query image.
IRJET- MAC Unit by Efficient Grouping of Partial Products along with Circular...IRJET Journal
This document describes a proposed MAC (multiply and accumulate) unit design that aims to improve performance and reduce resource usage compared to conventional pipeline MAC unit designs. The proposed design uses Booth encoding to reduce the number of partial products, groups the partial products into blocks that are added using multi-operand adders, and implements circular convolution by rearranging the partial products. Simulation results show that the proposed design achieves higher performance and lower resource usage than conventional pipeline and redundant carry-save MAC unit designs. The design is synthesized on an Altera Stratix III FPGA to take advantage of fast carry chains.
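Circular convolution itself — the operation the rearranged partial products implement — is easy to state in software (a reference model, not the hardware design):

```python
def circular_convolve(a, b):
    """N-point circular convolution: indices wrap modulo N, which is what
    the MAC unit's rearranged partial products realize in hardware."""
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

result = circular_convolve([1, 2, 3], [4, 5, 6])
```

Each output term is a multiply-accumulate over one wrapped diagonal, which is why grouping partial products into multi-operand adder blocks maps naturally onto it.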
IRJET- Efficient Design of Radix Booth MultiplierIRJET Journal
The document proposes a method to optimize binary radix-16 Booth multipliers by reducing the maximum height of the partial product columns from (n + 1)/4 to n/4 for n-bit operands. This is achieved by performing a short carry-propagate addition in parallel to the regular partial product generation, which reduces the maximum height by one row. The method allows further optimizations in the partial product array reduction stage in terms of area, delay, and power. It can also allow additional partial products to be included without increasing delay. The method is generally applicable but provides the most benefit for 64-bit radix-16 Booth multipliers.
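Radix-16 Booth recoding, the starting point of the proposed optimization, can be modelled in software (an illustrative reference model of the recoding and the per-digit partial products, not the paper's height-reduction circuit):

```python
def booth_radix16_digits(value, n_bits):
    """Recode an n-bit two's-complement integer into radix-16 Booth digits
    in [-8, 8] from overlapping 5-bit groups; each digit selects one
    partial-product row (n_bits assumed to be a multiple of 4 here)."""
    assert n_bits % 4 == 0
    bits = [(value >> i) & 1 for i in range(n_bits)]
    digits = []
    prev = 0                          # implicit bit b_{-1} = 0
    for i in range(0, n_bits, 4):
        d = (prev + bits[i] + 2 * bits[i + 1]
             + 4 * bits[i + 2] - 8 * bits[i + 3])
        digits.append(d)
        prev = bits[i + 3]
    return digits

def reconstruct(digits):
    """The digits are a radix-16 representation of the original operand."""
    return sum(d * 16 ** i for i, d in enumerate(digits))

def booth_multiply(a, b, n_bits=16):
    """Multiply via one shifted partial product per Booth digit."""
    return sum(d * (b << (4 * i))
               for i, d in enumerate(booth_radix16_digits(a, n_bits)))
```

The paper's contribution is in how the rows generated from these digits are arranged: a short carry-propagate addition run in parallel removes one row from the tallest partial-product column.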
1) Researchers at JPL developed a compact digital radar receiver to be used in a Ka-band radar interferometer for ice surface topography mapping.
2) The receiver is designed to be flexible and compact to meet the needs of a 16-element digital beamforming system while also being adaptable to other applications.
3) It can sample RF inputs up to 3.3 GHz at 10 bits and extract data via a front-panel interface, with components selected for potential spaceborne use.
The document describes an optimization module product that helps design efficient indoor wireless networks. The module uses 3D modeling to predict indoor coverage of macro signals and optimize distributed antenna system (DAS) design. Key features include mapping indoor macro signal interpolation, data throughput and signal quality coverage, soft handoff zones, and optimal antenna placement to minimize equipment needs while achieving coverage goals. The module helps network designers rightsize indoor wireless solutions and troubleshoot network problems using coverage mapping outputs.
IRJET- High Speed Multi-Rate Approach based Adaptive Filter using Multiplier-...IRJET Journal
This document presents a high-speed multi-rate approach for an adaptive filter using a multiplier-less technique. The proposed approach uses decimator and interpolator structures in VHDL to design a narrow band filter. Each structure is simulated using an FPGA and compared to existing structures. The resulting structure is more hardware efficient and uses fewer logic slices than existing structures. Key aspects of multi-rate signal processing and the proposed narrow band filter design using decimation and interpolation are discussed. Simulation results show the proposed approach reduces hardware complexity and resource usage compared to direct-form implementation of the filter.
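The hardware saving of decimator structures comes from the polyphase identity: filtering then downsampling equals summing M sub-filtered branches that all run at the low output rate. A numerical sketch (generic random data; the actual design uses multiplier-less filters):

```python
import numpy as np

def decimate_direct(x, h, M):
    """Reference form: full-rate FIR filtering, then keep every M-th sample."""
    return np.convolve(x, h)[::M]

def decimate_polyphase(x, h, M):
    """Polyphase form: split h into M sub-filters so every multiplication
    runs at the low output rate (the hardware-efficient arrangement)."""
    n_out = (len(x) + len(h) - 1 + M - 1) // M
    y = np.zeros(n_out)
    for p in range(M):
        ep = h[p::M]                  # p-th polyphase component of h
        if len(ep) == 0:
            continue
        if p == 0:
            xp = x[0::M]
        else:
            xp = np.concatenate([[0.0], x[M - p::M]])
        yp = np.convolve(xp, ep)[:n_out]
        y[:len(yp)] += yp
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
h = rng.standard_normal(16)           # prototype low-pass stand-in
same = np.allclose(decimate_direct(x, h, 4), decimate_polyphase(x, h, 4))
```

Both forms produce identical outputs, but the polyphase form performs 1/M of the multiplications per input sample, which is the resource saving reported against the direct-form filter.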
Motivation and results coverage enhancement for 3GPP NR Rel.17 Eiko Seidel
In this paper we would like to emphasize once again the need to look at large coverage scenarios for 5G NR and express our support for the creation of a Rel.17 work item. Furthermore, we provide first system-level simulation results to further motivate work on coverage enhancements and prove our commitment to contribute to a study item in the working groups in Rel.17 with independent performance evaluation.
IRJET-Spectrum Allocation Policies for Flex Grid Network with Data Rate Limit...IRJET Journal
This document discusses spectrum allocation policies for flex grid networks with data rate limited transmission. It begins with an abstract that outlines the tradeoff between data rate, allocated frequency slots, and modulation format that must be considered for spectrum allocation. It then discusses the objectives of identifying optical paths, path lengths, selecting modulation schemes, and finding optimal routes. The methodology section covers factors considered for spectrum allocation such as modulation formats, noise, crosstalk, and transmission distances limited by noise and crosstalk for different modulation formats and fiber core counts. The implementation section details algorithms for network setup, finding shortest paths, candidate path selection, and spectrum allocation. Results show fragmentation increases with demand but is constant for some core fibers. Higher core counts provide advantages and lower request
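As a concrete example of the kind of allocation policy such comparisons include, a first-fit search over contiguous frequency slots can be sketched as follows (the boolean-grid representation is an assumption for illustration, not the paper's data structure):

```python
def first_fit(slots, demand):
    """Return the first index where `demand` contiguous free slots start
    (True = free), or None if the path cannot carry the request."""
    run = 0
    for i, free in enumerate(slots):
        run = run + 1 if free else 0
        if run == demand:
            return i - demand + 1
    return None

# Example spectrum occupancy along one candidate path.
grid = [True, False, True, True, True, False, True]
```

Higher-order modulation formats shrink `demand` for the same data rate but shorten the reachable distance, which is exactly the tradeoff the abstract describes.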
Comparative Analysis of DP QPSK and DP 16-QAM Optical Coherent Receiver, with...IRJET Journal
This document compares DP QPSK and DP 16-QAM optical coherent receivers in terms of average bit error rate (BER) when analyzing phase noise. It simulates a 112 Gbps DP 16-QAM and DP QPSK coherent receiver system with digital signal processing (DSP) using Optisystem and MATLAB. The analysis introduces noise before the receiver by varying the optical signal-to-noise ratio (OSNR) and measures average BER. Graphs of average BER versus OSNR are produced for different digital filters and filter orders to determine the filter with minimum phase noise. The DP 16-QAM system shows better power spectrum confinement and is analyzed in more detail.
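For context, textbook AWGN error-rate approximations already show why 16-QAM needs a higher OSNR than QPSK at the same BER (standard formulas for Gray-coded constellations, not the paper's simulated coherent-receiver results):

```python
import math

def ber_qpsk(ebn0_db):
    """Textbook AWGN bit error rate for Gray-coded QPSK."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_16qam(ebn0_db):
    """Standard approximation for Gray-coded square 16-QAM in AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    return (3 / 8) * math.erfc(math.sqrt(0.4 * ebn0))
```

At any given Eb/N0, 16-QAM's denser constellation yields a higher BER; the paper's DSP filters aim to recover part of that penalty by suppressing phase noise.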
Fpga implementation of truncated multiplier for array multiplicationFinalyear Projects
The document discusses designing a truncated multiplier for array multiplication on an FPGA. It proposes two improvements: 1) accumulating partial product bits in a carry-save format to reduce area and improve speed compared to other truncated array multipliers, and 2) a new pseudo-carry compensated truncation scheme with an adaptive compensation circuit and fixed bias to minimize truncation error for unsigned integer multiplication. The proposed truncated multiplier is expected to consume less power and area while improving truncation error efficiency compared to existing designs.
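The truncation idea can be modelled in software: discard the partial-product bits in the least-significant columns and add a fixed compensation bias (an illustrative model using an average-case bias for n = 8, not the paper's pseudo-carry compensation circuit):

```python
def truncated_multiply(a, b, n=8, bias=448):
    """Software model of an unsigned n x n truncated array multiplier:
    partial-product bits in the n least-significant columns are discarded
    and a fixed bias (here the average discarded weight for n = 8, an
    assumption of this sketch) compensates the truncation on average."""
    acc = 0
    for i in range(n):
        if not (a >> i) & 1:
            continue
        for j in range(n):
            if (b >> j) & 1 and i + j >= n:   # keep only the upper columns
                acc += 1 << (i + j)
    return acc + bias

exact = 255 * 255
approx = truncated_multiply(255, 255)
```

Dropping the lower columns removes roughly half the partial-product hardware; the design challenge the paper addresses is making the compensation adaptive so the residual error stays small across operand values.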
IRJET- FPGA Implementation of Low Power Configurable Adder for Approximate Co...IRJET Journal
The document proposes a configurable and low-power approximate adder for approximate computing applications. Existing adders have drawbacks like increased area overhead and power wastage to achieve accuracy configurability. The proposed adder is based on a carry look-ahead adder structure with carry propagation masked at runtime to produce approximate sums. Experimental results on a 16-bit implementation show the proposed adder achieves significant power savings and speedup compared to a conventional carry look-ahead adder, while maintaining a small area overhead. It also outperforms previously studied configurable adders in optimizing power and delay without sacrificing accuracy.
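The runtime carry-masking idea can be modelled bit-exactly in software (a behavioural sketch, not the proposed circuit; `mask_at` plays the role of the accuracy-configuration input):

```python
def approx_add(a, b, n_bits=16, mask_at=8):
    """Adder with the carry chain masked at a configurable bit position:
    the upper part never receives a carry-in from the lower part, which
    shortens the critical path at the cost of an occasional 2**mask_at
    error when the lower part would have generated a carry."""
    lo_mask = (1 << mask_at) - 1
    lo = (a & lo_mask) + (b & lo_mask)        # its carry-out is dropped
    hi = ((a >> mask_at) + (b >> mask_at)) << mask_at
    return (hi | (lo & lo_mask)) & ((1 << n_bits) - 1)
```

Setting `mask_at = n_bits` restores the exact sum, which is the configurability property: accuracy can be traded for delay at runtime without changing the hardware.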
PLNOG 22 - Aleksandra Chećko, Robert Cieloch - 5G: wydatek czy oszczędność?PROIDEA
The document discusses the costs of 5G network deployments and compares macro cells, distributed antenna systems (DAS), small cells, and software-defined radio access networks (SD-RAN). SD-RAN offers 1.5-1.9x higher capacity than other options for the same coverage area and has total cost of ownership (TCO) that is 1.3-2.7x lower. SD-RAN provides scalable capacity expansion and more cost-efficient coverage compared to traditional macro cell networks.
This is a technical presentation which I delivered in the later half of Senior Year during my Undergraduate studies. I spoke about the implementation of a DSP algorithm on an FPGA board.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
A review on techniques and modelling methodologies used for checking electrom...
INNOVA - SPIE Remote Sensing 2019
1. Strasbourg, France 9 - 12 September 2019
Efficient Time-Domain Back-Projection focusing core for the image formation of very high resolution and highly squinted SAR Spotlight data on scenes with strong topography variation
2. Strasbourg, France 9 - 12 September 2019
Presentation Index:
- Introduction and Motivation
- Algorithms Description
- Architecture Overview
- Dataset Description
- Performances Analysis and Benchmarks
- Conclusions
- Acknowledgments
3. Strasbourg, France 9 - 12 September 2019
Challenging scenario of new upcoming satellite SAR missions:
- Ultra-high resolution Spotlight acquisitions performed exploiting long integration time
- Very high satellite agility, to acquire wider areas with finer resolutions, even in near-contiguous frames within a reduced time period, which implies the need for highly squinted Spotlight acquisition geometries
- The need to perform acquisitions globally, including areas with strong topography variations.
➔ These characteristics can result in a significant degradation of the SAR data focusing
performance, mostly for algorithms working in the Fourier domain
➔ A new, accurate and efficient Spotlight image formation algorithm is proposed, using a focusing core
implemented in the time domain
Introduction and Motivation
4. Strasbourg, France 9 - 12 September 2019
Frequency-domain focusing concept: the input RAW data f_in(t_az, t_rg) is transformed by an FFT into F(X, Y), processed through frequency-domain operations into F'(X, Y), and brought back by an inverse FFT to the output image f_out(t_row, t_col).

Time-domain focusing concept: the output image is obtained directly, pixel by pixel, by correlating the input RAW data with a pixel-dependent reference function:

f_out(t_row, t_col) = f(t_az, t_rg) * h_row,col(t_az, t_rg)

Introduction and Motivation
5. Strasbourg, France 9 - 12 September 2019 Introduction and Motivation
Pictorial representation of a Spotlight acquisition over an area with a single point target:
(a) The target falls into the antenna beam for each acquired echo line.
(b) The grey area represents all the samples of the input RAW data over which the energy of a single pixel of the output image is spread.
(c) After range compression, the target energy must be back-projected onto its correct position; this energy is present in all the image lines.
! A great advantage of the TDBP algorithm is the possibility to adapt the characteristics of the focusing transfer function on a pixel basis, to obtain optimal focusing performance over the whole scene, regardless of local height or local squint angle.
! The main drawback of the TDBP algorithm is its great computational burden, due to the need to use almost all the input data samples to focus each pixel of the output image.
6. Strasbourg, France 9 - 12 September 2019 Algorithms Description
The mathematical model behind the Time-Domain Back-Projection (TDBP) focusing method, one of the first focusing algorithms described in the literature, is very simple. Considering that the proposed solution starts from range-compressed data, the formula used to reconstruct each pixel of the output image is a coherent sum, over all the input echo lines, of the range-compressed samples taken at the pixel's instantaneous range, compensated for the two-way propagation phase.
This reveals some important characteristics of this imaging approach: the focusing of each pixel requires samples from all the lines of the input RAW data, but it is also independent of the focusing of the other pixels of the output image.
The first characteristic is the reason for the high computational cost of the algorithm, while the second suggests that a high degree of parallelism can be reached with the TDBP processing approach.
Reducing the number of output pixels to be generated, or reducing the number of input echo lines used to focus each output pixel, is a good strategy to reduce the overall computational complexity of the back-projection algorithm.
In order to use time-domain focusing in an operational scenario, it is necessary to introduce algorithm optimizations that reduce the overall computational cost without introducing significant degradation in the focusing quality.
The proposed solution is based on a sub-aperture processing approach.
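The per-pixel reconstruction just described can be sketched end-to-end (a toy simulation: the straight-line trajectory, airborne-scale geometry and radar parameters are made up for illustration and are not the processor described here):

```python
import numpy as np

# Toy geometry and radar parameters (assumed for illustration only).
c, fc = 3e8, 9.6e9
lam = c / fc                        # ~3.1 cm wavelength
fs, prf = 150e6, 3000.0             # range sampling rate, pulse rep. frequency
n_az, n_rg = 256, 512
dr = c / (2 * fs)                   # range bin size: 1 m
r0 = 5700.0                         # range of the first sample

# Straight-line trajectory (start/stop approximation), airborne-like scale.
v = 7500.0
t = (np.arange(n_az) - n_az / 2) / prf
sensor = np.column_stack([v * t, np.zeros(n_az), np.full(n_az, 5000.0)])
target = np.array([0.0, 3000.0, 0.0])

# Simulate range-compressed (RC) data for one point target: per echo line,
# a sinc at the target range carrying the two-way propagation phase.
rng_axis = r0 + np.arange(n_rg) * dr
rc = np.zeros((n_az, n_rg), dtype=complex)
for k in range(n_az):
    R = np.linalg.norm(sensor[k] - target)
    rc[k] = np.sinc((rng_axis - R) / dr) * np.exp(-4j * np.pi * R / lam)

def focus_pixel(p):
    """Back-project one output pixel: for every echo line, interpolate the
    RC line at the pixel's range and remove the propagation phase."""
    acc = 0j
    for k in range(n_az):
        R = np.linalg.norm(sensor[k] - p)
        idx = (R - r0) / dr
        i0 = int(idx)
        if 0 <= i0 < n_rg - 1:               # linear interpolation
            w = idx - i0
            s = (1 - w) * rc[k, i0] + w * rc[k, i0 + 1]
            acc += s * np.exp(4j * np.pi * R / lam)
    return acc

peak = abs(focus_pixel(target))
offset = abs(focus_pixel(target + np.array([5.0, 0.0, 0.0])))  # 5 m in azimuth
```

Because the range R is recomputed per pixel from its 3D position, a DEM height or a squint change enters the loop transparently; the cost is the full pass over all echo lines for every pixel, which the sub-aperture optimization attacks.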
7. Strasbourg, France 9 - 12 September 2019 Algorithms Description
For the standard processing chain, N_BP_STANDARD represents the number of operations required to obtain all the pixels of the output product; when the sub-aperture technique is used, this number is N_BP_SUBA, and it is reduced by a computational gain factor D with respect to the standard case. D is strongly related to the number of sub-apertures used.
• Focusing each of the N_SUBA sub-apertures with the TDBP algorithm means using a reduced number of input echo lines for each sub-aperture.
• The lower resolution of each focused sub-aperture allows for larger pixel spacing, and thus fewer output pixels for the same coverage.
• Each focused sub-aperture must be accumulated in the final output product grid after oversampling along the row direction.
8. Strasbourg, France 9 - 12 September 2019 Algorithms Description
The gain factor D, although closely related to the number of sub-apertures, is not exactly equal to it, due to the effects introduced by the steering and squint of the signal at very high resolution, as shown in the next figures.
From this consideration it is clear that the maximum obtainable computational gain is not unbounded but tends to saturate as the number of sub-apertures increases. The choice of the optimum number of sub-apertures depends on the characteristics of the data to be focused.
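The saturation behaviour can be captured by a simple model (an assumed toy formula for intuition only, not the paper's expression for D):

```python
def subaperture_gain(n_suba, excess_ratio):
    """Toy model of why the gain D saturates: each of the n_suba
    sub-apertures still has to process the steering/squint-induced excess
    bandwidth, expressed here as a fixed fraction `excess_ratio` of the
    target bandwidth (an assumption of this sketch)."""
    return n_suba / (1 + n_suba * excess_ratio)
```

In this model the gain grows with the number of sub-apertures but can never exceed 1/excess_ratio, so past some point adding sub-apertures buys almost nothing, matching the qualitative saturation shown in the figures.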
9. Strasbourg, France 9 - 12 September 2019 Algorithms Description
For squinted acquisitions, the presence of the squint and the rotation of the target spectrum increase the difference between the total bandwidth and the target bandwidth; this minimizes the gain obtainable through the sub-aperture processing optimization technique.
The solution to this limitation is to process the entire band of the target divided into a set of non-overlapping range-looks; this allows the value of this difference to be greatly reduced and, consequently, the computational gain obtainable with sub-aperture processing to be strongly increased.
The figure on the right shows that, after merging the contributions of all the sub-apertures, the portions of the spectrum for each range-look must also be recomposed to obtain the complete spectrum of the target.
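The split-and-recompose step is, at its core, a partition of the spectrum into non-overlapping bands; a minimal sketch showing that recombining the range-looks restores the full-band signal:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)  # stand-in signal

n_looks = 4
X = np.fft.fft(x)
edges = np.linspace(0, len(x), n_looks + 1).astype(int)

looks = []
for i in range(n_looks):
    Xi = np.zeros_like(X)
    Xi[edges[i]:edges[i + 1]] = X[edges[i]:edges[i + 1]]  # one spectral band
    looks.append(np.fft.ifft(Xi))   # each look: a reduced-bandwidth signal

recombined = np.sum(looks, axis=0)  # recomposing the bands restores the signal
```

Each look carries only a fraction of the bandwidth, so it tolerates coarser sampling during focusing; because the bands do not overlap, summing the focused looks loses nothing.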
10. Strasbourg, France 9 - 12 September 2019 Algorithms Description
In the case of very high resolution squinted acquisitions, a different optimization solution, based on sub-apertures applied to separate range-looks, needs to be implemented.
N_BP_SUBA&RGLOOKS is reduced by a computational gain factor D with respect to the standard case of TDBP processing without any optimization, and multiplied by the ratio. The computational gain D is much greater than the one obtainable without the range-looks optimization for squinted acquisitions. The ratio is about 1 for low N_RGLOOKS values and assumes values much greater than 1 as the number of range-looks increases.
• A certain number of non-overlapping range-looks have to be extracted from the input RGC image and processed using the TDBP algorithm with the sub-aperture optimization.
• Each focused range-look must be accumulated in the output product grid after oversampling in the column direction.
11. Strasbourg, France 9 - 12 September 2019 Architecture Overview
Each of the processing steps is performed by cloning the algorithm into different instances that operate in parallel, each generating a well-defined portion of the output (data-parallel or embarrassingly parallel computation), either intermediate (RGC, IS1) or final (SCS).
Intermediate images are kept in memory and stored on disk only if necessary.
The range pre-focusing phase prepares the range-compressed image by structuring it in such a way that it can be accessed efficiently by the dozens or hundreds of parallel processes (threads) used for focusing through TD Back-Projection.
1. Range Compression
2. Doppler Parameter Estimation
3. Range Pre-Focusing
4. Focusing with TD Back-Projection
STANDARD processing chain with Time Domain Back projection focusing without optimizations
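The data-parallel pattern described above can be sketched with a pool of workers over disjoint output tiles (the per-tile computation here is a placeholder, not TDBP):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def focus_tile(args):
    """Stand-in for TDBP focusing of one output tile: every tile reads the
    same (read-only) range-compressed data and writes a disjoint part of
    the output, so the tiles are embarrassingly parallel."""
    rc, cols = args
    return cols, np.abs(rc[:, cols]).sum(axis=0)   # placeholder computation

rc = np.arange(12.0).reshape(3, 4)                 # mock range-compressed image
tiles = [(rc, slice(0, 2)), (rc, slice(2, 4))]

out = np.zeros(4)
with ThreadPoolExecutor(max_workers=2) as ex:
    for cols, vals in ex.map(focus_tile, tiles):
        out[cols] = vals

serial = np.abs(rc).sum(axis=0)                    # same result, one worker
```

Because no tile writes another tile's output, the result is identical to the serial computation, which is the property that lets the prototype scale to hundreds of threads.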
12. Strasbourg, France 9 - 12 September 2019 Architecture Overview
FAST processing chain with Time Domain Back projection focusing with sub-apertures optimization
1. Range Compression
2. Doppler Parameter Estimation
3. Range Pre-Focusing
4. Focusing with TD Back-Projection with
sub-apertures approach.
13. Strasbourg, France 9 - 12 September 2019 Architecture Overview
FAST processing chain with Time Domain Back projection focusing with sub-apertures and range-looks optimizations
1. Range Compression
2. Doppler Parameter Estimation
3. Range Pre-Focusing
4. For i=1 to NRGLOOKS do
a) Range Look i-th: Extraction and Pre-Focusing
b) Range Look i-th: Focusing with TDBP with sub-apertures
c) Range Look i-th: oversampling and
accumulation in the output SCS grid.
14. Strasbourg, France 9 - 12 September 2019
Main characteristics of the raw data simulator:
- Complete and accurate radar parameters control (Carrier Frequency, PRF, Sampling Rate, Chirp Bandwidth, Chirp Duration)
- Accurate earth model
- Accurate geometric constraints definition (look side, orbit direction, incidence angle)
- Coherent orbital and attitude parameters
- Accurate definition of electronic steering model, range and azimuth coverages, azimuth resolution.
- Accurate antenna beam features definition.
- Non-validity of the start/stop approximation
- Complete scene control options (number of targets and their arrangement, backscattering coefficient, scene topography)
Dataset Description
[1] “COSMO-SkyMed di Seconda Generazione SAR image focusing of spotlight data for civilian users” - SPIE Image and Signal Processing for Remote Sensing 2018
Test Dataset
15. Strasbourg, France 9 - 12 September 2019
Main characteristics of the simulation process:
The simulation process includes orbital data generation, attitude and electronic pointing management,
two-dimensional beam radiation diagram forming, target arrangement into the observed scene and finally
the signal simulation, performed in the time domain target by target, on the basis of the superposition effects
into the output complex raster layer, including signal deramping in the case of Spotlight mode.
Finally, the product is formatted with the requested output data quantization and metadata evaluation.
The layout used for all the data present in the dataset includes nine point targets arranged on a 3 x 3 grid;
it is suitable for supporting most of the algorithmic analysis and image quality assessment phases.
Dataset Description
Dataset Details: dimensions and sizes of input RAW data, output of range compression and output SCS files
This information has been considered during
the design of the software prototype and the
choice of the HW test platform to ensure that
the processing steps, operating in data
parallel mode, could efficiently handle the
huge amount of image data without
encountering problems in memory usage.
16. Strasbourg, France 9 - 12 September 2019 Performances Analysis and Benchmarks
IRF and processing-time performances obtained with the optimized TDBP algorithm, considering very challenging data in which the following are simultaneously present:
- very and ultra high resolutions;
- high squint angles;
- large coverages;
- strong variations of the target heights in the scene.
(a) S2A-Plus, 12° squint, 8 × 3.4 km, height 0–2000 m
Total processing time, measured on a reference high-performance hardware platform, typical of an operational environment, with the following features:
4× Intel® Xeon® Platinum 8176 CPUs (28C/56T, 38.5 MB L3, 2.10 GHz base / 3.80 GHz turbo), 384 GB RAM.
All the processes were carried out using a support DEM.
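The core operation can be sketched as below; this is a plain nearest-neighbour back-projection for illustration, not the authors' optimized core, and a real system would use a better range interpolator. The DEM enters through the z coordinate of each output pixel, which is what makes the method robust to strong topography:

```python
import numpy as np

def tdbp_focus(raw, platform, t_fast, fc, pixels_xyz):
    """Minimal time-domain back-projection sketch.

    raw        : (n_pulses, n_range) range-compressed complex data
    platform   : (n_pulses, 3) antenna phase-center positions per pulse [m]
    t_fast     : (n_range,) fast-time axis of the range samples [s]
    fc         : carrier frequency [Hz]
    pixels_xyz : (n_pix, 3) output-pixel positions; the z coordinate is
                 taken from the support DEM.
    """
    c = 299_792_458.0
    img = np.zeros(len(pixels_xyz), dtype=complex)
    dt = t_fast[1] - t_fast[0]
    for p, pos in enumerate(platform):               # each pixel integrates
        r = np.linalg.norm(pixels_xyz - pos, axis=1) # over every pulse
        tau = 2.0 * r / c                            # two-way delay
        # nearest-neighbour range interpolation (simplification)
        idx = np.clip(np.round((tau - t_fast[0]) / dt).astype(int),
                      0, len(t_fast) - 1)
        img += raw[p, idx] * np.exp(4j * np.pi * fc * r / c)  # phase compensation
    return img
```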
(u) UUHR, 12° squint, 3 × 3 km, height 0–1500 m
(b) S2B-Plus, 12° squint, 10 × 10 km, height 0–3000 m
Comparison of IRF performance between the TDBP algorithm and a standard frequency-domain focusing algorithm based on an Omega-K core, for high-resolution, highly squinted data with large coverage on the ground.
Comparison of IRF performance between the TDBP algorithm and a standard frequency-domain focusing algorithm based on an Omega-K core, for ultra-high-resolution data.
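IRF quality figures such as the ones compared here are typically measured from a 1-D cut through a focused point target. The sketch below (our illustration; the actual measurement tool and thresholds are not described in the slides) estimates the -3 dB resolution and the peak-to-side-lobe ratio (PSLR) of such a cut:

```python
import numpy as np

def irf_metrics(cut, spacing):
    """Return (-3 dB width, PSLR in dB) of a 1-D impulse-response cut."""
    mag = np.abs(np.asarray(cut, dtype=float))
    peak = int(mag.argmax())
    p = mag[peak]
    # -3 dB resolution: contiguous samples around the peak above p / sqrt(2)
    above = mag >= p / np.sqrt(2.0)
    left = peak
    while left > 0 and above[left - 1]:
        left -= 1
    right = peak
    while right < len(mag) - 1 and above[right + 1]:
        right += 1
    res = (right - left + 1) * spacing
    # PSLR: highest lobe outside the main lobe (bounded by the first nulls)
    lnull = left
    while lnull > 0 and mag[lnull - 1] <= mag[lnull]:
        lnull -= 1
    rnull = right
    while rnull < len(mag) - 1 and mag[rnull + 1] <= mag[rnull]:
        rnull += 1
    side = max(np.max(mag[:lnull], initial=0.0),
               np.max(mag[rnull + 1:], initial=0.0))
    pslr_db = 20.0 * np.log10(side / p) if side > 0 else float("-inf")
    return res, pslr_db
```

For an ideal (unweighted) sinc response this returns a -3 dB width of about 0.886 sample units and a PSLR of about -13.3 dB, the textbook values.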
(CSK-S2) COSMO-SkyMed Enhanced Spotlight acquisition over the Rosamond Lake area:
Rosamond Corner Reflector Array (RCRA), Rosamond Dry Lake Bed, California, USA (uavsar.jpl.nasa.gov)
Results obtained by focusing a high-resolution real image with the TDBP processing algorithm.
The proposed algorithm reduces the total processing time for this data from more than 4 hours with standard processing to nearly 12 minutes when the optimizations are enabled (a speedup of more than 20×).
21. Conclusions
• The use of a time-domain algorithm achieves excellent focusing results even for acquisitions with very challenging characteristics.
• This focusing technique is independent of the radar wavelength and of the scene size, and it is particularly suitable for future SAR missions (e.g. COSMO-SkyMed Seconda Generazione).
• The algorithmic optimizations proposed and tested through software prototypes prove efficient and effective, substantially reducing the computational burden while keeping the quality of the resulting image almost unchanged with respect to the non-optimized standard TDBP processing.
• The robustness of the implementation of the algorithm, both optimized and non-optimized, has been verified on a real image, demonstrating its accuracy even under non-ideal conditions.
• The computational architecture of the software prototype has been designed to achieve a very high degree of parallelization, taking advantage of recent multiprocessor/multicore HW architectures.
• The proposed TDBP algorithm, in both standard and optimized formulations, can focus small selected areas of the output image in proportionally reduced time, enabling local improvements in image quality with reduced hardware requirements; this would make it suitable for use on a data-analysis workstation.
• Because each pixel is focused independently, the output image can be produced directly in the desired projection (e.g. ground-projected or geocoded), reducing the total execution time of the complete processing chain.
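The per-pixel independence noted above is what lets the workload be split across cores with no synchronization between blocks. A minimal sketch (ours, not the prototype's code; a production implementation would run the heavy numeric kernels in processes or native threads):

```python
from concurrent.futures import ThreadPoolExecutor

def focus_block(bounds):
    """Placeholder for back-projecting one block of output pixels.

    Here it just returns the block's pixel count; in a real system it
    would run the TDBP kernel over the pixels in `bounds`.
    """
    r0, r1, c0, c1 = bounds
    return (r1 - r0) * (c1 - c0)

def focus_image_parallel(blocks, workers=8):
    """Focus every block independently; no shared state between workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(focus_block, blocks))

# Toy usage: a 512 x 512 image split into four 256 x 256 blocks.
blocks = [(0, 256, 0, 256), (0, 256, 256, 512),
          (256, 512, 0, 256), (256, 512, 256, 512)]
results = focus_image_parallel(blocks)
```

Because no block reads another block's output, restricting `blocks` to a small selected area yields a proportionally smaller run time, which is the property exploited for workstation-scale, local refocusing.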
Roadmap:
• Re-engineering the proposed implementation to meet the requirements of GPU-accelerated platforms is under evaluation.
• For ultra-high-resolution squinted data, the computational gain of the proposed optimizations tends to saturate, reducing the benefit beyond a certain point. A new version of the algorithm is under investigation to overcome this limitation and to increase the efficiency in focusing this type of data, bringing it close to daily use in an operational environment for the upcoming missions.
22. Acknowledgments
We thank the Italian Space Agency (ASI) for the opportunity to use COSMO-SkyMed mission data for the development, testing and verification activities described above.
We also thank Dell EMC for making the Dell PowerEdge® (PE) R840 server available for benchmarking the prototype software, and for the technical support provided for the optimal HW/SW configuration of the platform.