- Two convolutional neural network architectures are presented to reduce noise in low-dose CT images. The first network is inspired by dictionary learning methods. A second, more computationally efficient network is also presented.
- Important parameters of each network are investigated to determine the best-performing configuration. The models are tested and the results are compared to state-of-the-art methods, showing superior performance.
- Future work could explore advanced deep learning methods like deep residual networks, generative adversarial networks, or improving contrast in DICOM images.
Image Denoising Based On Wavelet for Satellite Imagery: A Review (IJMER)
This paper reviews the use of wavelets and wavelet families for denoising images. Satellite images
are extensively used in remote sensing (RS) and GIS for land assessment, mapping, planning, and
decision support. Many satellite images share a common problem, noise, which introduces unwanted
information into an image. Different types of noise call for different denoising techniques for
remotely sensed images. Identifying and removing noise in remote sensing images is a major
challenge for researchers. We therefore review wavelet methods for denoising remote sensing
images. Wavelet methods can yield much higher-quality denoised images. However,
they are usually computationally demanding. In order to reduce the
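The wavelet-denoising principle such reviews survey (transform, shrink small detail coefficients, invert) can be sketched with a single-level Haar transform. This is an illustrative toy, not any specific method from the reviewed literature:

```python
import math

def haar_1d(x):
    # Single-level Haar DWT: approximation (a) and detail (d) coefficients.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def ihaar_1d(a, d):
    # Inverse single-level Haar DWT.
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, t):
    # Shrink coefficients toward zero; small (noise-dominated) ones vanish,
    # large (edge-carrying) ones are merely reduced by t.
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in coeffs]

def denoise(signal, threshold):
    a, d = haar_1d(signal)
    return ihaar_1d(a, soft_threshold(d, threshold))
```

Soft thresholding of the detail band is what gives wavelet denoising its edge-preserving character: detail coefficients dominated by noise are small and get zeroed, while the approximation band is left intact.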
COMm.PAT- Printed Antenna on a Non-Conventional Substrate for Communication (IOSR Journals)
Abstract: Antennas for communication have recently been developed using direct printing methods, which offer light weight, ease of fabrication, and readily available flexible substrates in the field of printed electronics. Whether flexible non-conventional substrates can provide the same results as conventional substrates has been a current research issue. In this paper, we propose a low-cost printed antenna on a non-conventional substrate that provides a higher read range (using the ISM band) for real-time location systems (RTLS), e.g., wireless sensing. The antenna design involves selecting an environment-friendly RF substrate that can be supplied by local networking companies at low cost. We further evaluate the substrate characteristics and electromagnetic properties using the HFSS (High Frequency Structure Simulator) EM simulator, followed by fabrication of the antenna both on a conventional PCB (FR4) substrate and on a non-conventional substrate (Kodak photo paper) with different material properties, using direct-write printing technology. The results include the resonant frequency and the input impedance matching of the feed measured on a vector network analyzer. Keywords: low-cost eco-friendly antenna, nano-silver particle ink, non-conventional substrate, printed antenna design parameters
A STRUCTURED DEEP NEURAL NETWORK FOR DATA-DRIVEN LOCALIZATION IN HIGH FREQUEN... (IJCNCJournal)
Next-generation wireless networks such as 5G and 802.11ad networks will use millimeter waves operating
at 28 GHz, 38 GHz, or higher frequencies to deliver unprecedentedly high data rates, e.g., 10 gigabits per
second. However, millimeter waves must be used directionally with narrow beams in order to overcome the
large attenuation due to their higher frequency. To achieve high data rates in a mobile setting,
communicating nodes need to align their beams dynamically, quickly, and in high resolution. We propose a
data-driven, deep neural network (DNN) approach to provide robust localization for beam alignment,
using a lower frequency spectrum (e.g., 2.4 GHz). The proposed DNN-based localization methods use the
angle of arrival derived from phase differences in the signal received at multiple antenna arrays to infer the
location of a mobile node. Our methods differ from others that use DNNs as a black box in that the
structure of our neural network model is tailored to address difficulties associated with the domain, such as
collinearity of the mobile node with antenna arrays, fading and multipath. We show that training our
models requires a small number of sample locations, such as 30 or fewer, making the proposed methods
practical. Our specific contributions are: (1) a structured DNN approach where the neural network
topology reflects the placement of antenna arrays, (2) a simulation platform for generating training and
evaluation data sets under multiple noise models, and (3) demonstration that our structured DNN approach
improves localization under noise by up to 25% over traditional off-the-shelf DNNs, and can achieve sub-meter
accuracy in a real-world experiment.
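The angle-of-arrival input the abstract describes can be illustrated with the standard two-element narrowband model; the half-wavelength spacing and single antenna pair below are simplifying assumptions of this sketch, not the paper's multi-array geometry:

```python
import math

def angle_of_arrival(phase_diff, spacing, wavelength):
    """Estimate the arrival angle (radians) from the phase difference
    between two antenna elements separated by `spacing` metres.
    Narrowband model: phase_diff = 2*pi*spacing*sin(theta)/wavelength."""
    s = phase_diff * wavelength / (2 * math.pi * spacing)
    s = max(-1.0, min(1.0, s))  # clamp against noise-induced overflow
    return math.asin(s)
```

At 2.4 GHz (wavelength 0.125 m) with half-wavelength spacing (0.0625 m), a measured phase difference of pi/2 corresponds to a 30-degree arrival angle; a structured DNN can then map such angles from several arrays to a location.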
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains (CSCJournals)
Noise is one of the most widespread problems in nearly all imaging applications. In spite of the sophistication of recently proposed methods, most denoising algorithms have not yet attained a desirable level of applicability. This paper proposes a two-stage algorithm for speckle noise reduction jointly in the wavelet and spatial domains. In the first stage, the optimal parameter value of the spatial speckle-reduction filter is estimated based on edge-pixel statistics and noise variance. The optimized filter is then used in the second stage to additionally smooth the approximation image of the wavelet sub-band. A complexity-reduction algorithm for wavelet decomposition is also proposed. The obtained results are highly encouraging in terms of image quality, which paves the way toward using the proposed algorithm to enhance the performance of the Block Matching and 3D Filtering (BM3D) algorithm against multiplicative speckle noise.
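The abstract does not name the spatial filter; a Lee-style adaptive filter is a common choice for this kind of stage, and its local-statistics logic (strong smoothing in flat regions, little smoothing near edges) can be sketched in one dimension:

```python
def lee_filter(pixels, window, noise_var):
    """Lee-style adaptive speckle smoothing (1-D illustrative sketch).
    The gain k is near 0 in flat regions (output -> local mean) and
    near 1 where local variance is high (edges are preserved)."""
    half = window // 2
    out = []
    for i in range(len(pixels)):
        win = pixels[max(0, i - half): i + half + 1]
        mean = sum(win) / len(win)
        var = sum((v - mean) ** 2 for v in win) / len(win)
        k = var / (var + noise_var) if var + noise_var > 0 else 0.0
        out.append(mean + k * (pixels[i] - mean))
    return out
```

The `noise_var` parameter plays the role of the noise-variance estimate the paper uses to tune its spatial filter: raising it pushes the filter toward pure local averaging.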
Deep Learning-Based Universal Beamformer for Ultrasound Imaging (Shujaat Khan)
In ultrasound (US) imaging, individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented using a hardware- or software-based delay-and-sum (DAS) beamformer, the performance of DAS decreases rapidly in situations where data acquisition is not ideal. Herein, for the first time, we demonstrate that a single data-driven adaptive beamformer designed as a deep neural network can generate high quality images robustly for various detector channel configurations and subsampling rates. The proposed deep beamformer is evaluated for two distinct acquisition schemes: focused ultrasound imaging and plane-wave imaging. Experimental results showed that the proposed deep beamformer exhibits significant performance gains for both focused and plane-wave imaging schemes, in terms of contrast-to-noise ratio and structural similarity.
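The delay-and-sum baseline that the deep beamformer replaces can be sketched as follows; this is a toy fixed-delay version, whereas real DAS applies per-pixel dynamic focusing delays and apodization weights:

```python
def das_beamform(channel_data, delays, fs):
    """Delay-and-sum: shift each channel's RF trace by its geometric
    delay (seconds, converted to samples via sampling rate fs) and sum
    across channels, so echoes from the focal point add coherently."""
    n = len(channel_data[0])
    out = [0.0] * n
    for ch, delay in zip(channel_data, delays):
        shift = int(round(delay * fs))
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                out[i] += ch[j]
    return out
```

When the delays match the true arrival-time differences, the per-channel pulses align and the summed output peaks; mismatched delays (the non-ideal acquisition the abstract mentions) cause destructive summation, which is what the learned beamformer is trained to compensate for.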
Adaptive and compressive beamforming using deep learning for medical ultrasound (Shujaat Khan)
In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay and sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here, we propose a deep-learning-based beamformer to generate significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations so that it can generate high-quality US images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using the B-mode focused US confirm the efficacy of the proposed methods.
Unsupervised Deep Learning for Accelerated High Quality Echocardiography (Shujaat Khan)
Echocardiography is a pivotal imaging tool in emergency medicine. Unfortunately, it suffers from poor image quality due to the intrinsic limitations of sonography systems. Better quality can be achieved at the cost of a reduced frame rate, by increasing the number of transmit/receive events and using computationally expensive noise-suppression algorithms. However, this trade-off between visual quality and temporal resolution is a bottleneck for many echocardiography applications. Conventional acceleration methods, such as multi-line acquisition (MLA), work only for limited acceleration factors and produce blocking artifacts at high frame rates. Accordingly, various machine learning algorithms have been designed to reduce blocking artifacts in MLA. These algorithms require access to either high-quality raw RF data or time-delayed baseband IQ data. Unfortunately, in many lower-end commercial systems, such data are not accessible. Moreover, ultrasound images are badly affected by speckle noise, which significantly reduces image quality. We propose an image-domain unsupervised deep learning framework using the cycleGAN architecture for high-quality accelerated echocardiography that simultaneously reduces blocking artifacts and speckle noise. The method is evaluated on real in-vivo and phantom data and achieves a notable performance gain.
Universal plane wave compounding for high quality US imaging using deep learning (Shujaat Khan)
Plane-wave compounding sums several successive plane waves incident at different angles to form an image. By applying time reversal to the received signals, transmit focusing can be synthesized. Unfortunately, to improve the temporal resolution the number of plane waves must be reduced, which often degrades image quality. To address this problem, an image-domain learning method using neural networks has been proposed, but the network must be retrained whenever the number of plane waves changes. Herein, we propose, for the first time, a universal plane-wave compounding scheme using deep learning to directly process plane waves and RF data acquired at different view angles and sub-sampling rates to generate high-quality US images.
Extend Your Journey: Considering Signal Strength and Fluctuation in Location-... (Chih-Chuan Cheng)
Reducing communication energy is essential to facilitate the growth of emerging mobile applications. In this paper, we introduce signal strength into location-based applications to reduce the energy consumption of mobile devices for data reception. First, we model the problem of data fetch scheduling, with the objective of minimizing the energy required to fetch location-based information without adversely impacting the application's semantics. To solve the fundamental problem, we propose a dynamic programming algorithm and prove its optimality in terms of energy savings. Then, we perform post-optimal analysis to explore the tolerance of the algorithm to signal strength fluctuations. Finally, based on the algorithm, we consider implementation issues. We have also developed a virtual tour system integrated with existing web applications to validate the practicability of the proposed concept. The results of experiments conducted on real-world case studies are very encouraging and demonstrate the robustness of the proposed algorithm to signal strength fluctuations.
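The abstract does not spell out the paper's dynamic program; an in-order, one-fetch-per-slot simplification shows the shape such a DP can take. Here slot costs stand in for signal-strength-dependent reception energy, and the deadlines and fixed fetch order are assumptions of this sketch:

```python
def min_fetch_energy(costs, deadlines):
    """DP sketch: data items must be fetched in order, at most one per
    time slot, with item i fetched no later than slot deadlines[i]
    (1-based). costs[t] is the reception energy in slot t+1 (lower
    where signal is strong). Returns minimal total energy, or None
    if no feasible schedule exists."""
    INF = float("inf")
    n, T = len(deadlines), len(costs)
    # dp[i][t]: min energy to fetch the first i items within slots 1..t.
    dp = [[INF] * (T + 1) for _ in range(n + 1)]
    for t in range(T + 1):
        dp[0][t] = 0.0
    for i in range(1, n + 1):
        for t in range(1, T + 1):
            dp[i][t] = dp[i][t - 1]               # skip slot t
            if t <= deadlines[i - 1]:             # fetch item i in slot t
                dp[i][t] = min(dp[i][t], dp[i - 1][t - 1] + costs[t - 1])
    best = dp[n][T]
    return None if best == INF else best
```

The recurrence captures the core trade-off the paper studies: defer each fetch toward a cheap (strong-signal) slot, but never past the point where the application would need the data.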
Switchable and tunable deep beamformer using adaptive instance normalization ... (Shujaat Khan)
Recent proposals of deep learning-based beamformers for ultrasound (US) imaging have attracted significant attention as computationally efficient alternatives to adaptive and compressive beamformers. Moreover, deep beamformers are versatile in that image post-processing algorithms can be readily combined. Unfortunately, with the existing technology, a large number of beamformers need to be trained and stored for different probes, organs, depth ranges, operating frequencies, and desired target 'styles', demanding significant resources such as training data. To address this problem, here we propose a switchable and tunable deep beamformer that can switch between various types of outputs, such as DAS, MVBF, DMAS, and GCF, and also adjust noise removal levels at the inference phase, using a simple switch or tunable nozzle. This novel mechanism is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated using a single generator by merely changing the AdaIN codes. Experimental results using B-mode focused ultrasound confirm the flexibility and efficacy of the proposed method for various applications.
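The AdaIN mechanism named in the abstract normalizes a feature map and re-scales it with target (style) statistics, so a single generator can emit different output styles. A minimal per-channel sketch, with features flattened to a 1-D list and the style statistics supplied directly rather than produced by a style encoder:

```python
import math

def adain(features, style_mean, style_std, eps=1e-5):
    """Adaptive Instance Normalization: strip the input's own mean/std,
    then impose the target style's mean/std. Swapping the
    (style_mean, style_std) code switches the output style with no
    retraining of the surrounding network."""
    mu = sum(features) / len(features)
    var = sum((f - mu) ** 2 for f in features) / len(features)
    std = math.sqrt(var + eps)
    return [style_std * (f - mu) / std + style_mean for f in features]
```

In the paper's setting the "AdaIN code" selecting DAS-like versus despeckled output is exactly such a pair of statistics injected per channel; only the code changes between output types, not the generator weights.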
Volume ray casting algorithms benefit greatly from recent increases in GPU capabilities and power. In this paper,
we present a novel memory-efficient ray casting algorithm for unstructured grids, implemented entirely on the GPU
using a recent off-the-shelf nVidia graphics card. Our approach builds upon a recent CPU ray casting algorithm,
called VF-Ray, that considerably reduces the memory footprint while maintaining good performance. In addition to
implementing VF-Ray on graphics hardware, we also propose a restructuring of its data structures. As
a result, our algorithm is much faster than the original software version while using significantly less memory:
only one-half of the previous memory usage. Compared to other hardware-based
ray casting algorithms, our approach used between three and ten times less memory. These results make it possible
for our GPU implementation to handle larger datasets than previous approaches.
Switchable Deep Beamformer for Ultrasound Imaging Using AdaIN (Shujaat Khan)
In ultrasound (US) imaging, various adaptive beamforming methods have been proposed to improve the resolution and contrast-to-noise ratio of the delay and sum (DAS) beamformers. Unfortunately, they often require computationally expensive calculations, and their performance degrades when the underlying model is not sufficiently accurate. Moreover, ultrasound images usually require various types of post-filtering, such as deblurring and despeckling, which further increase the complexity of the system. Deep learning-based solutions provide a quick remedy to these issues; however, with current technology, a separate beamformer must be trained and stored for each application, demanding significant scanner resources. To address this problem, here we propose a switchable deep beamformer that can produce various types of output, such as DAS, speckle removal, and deconvolution, using a single network with a simple switch. In particular, the switch is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated by merely changing the AdaIN code. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods.
BalloonNet: A Deploying Method for a Three-Dimensional Wireless Network Surro... (Naoki Shibata)
Aiming at the fast establishment of a wireless network around a multi-level building in a disaster area, we propose an efficient method to determine the locations of network nodes in the air. Nodes are attached to balloons outside a building and deployed in the air so that the network can be accessed from anywhere in the building. In this paper, we introduce an original radio propagation model for predicting path loss from an outdoor position to a position inside a building. To address the three-dimensional deployment problem, the proposed method optimizes an objective function that satisfies two goals: (1) guarantee coverage: the target space must be covered above a certain percentage by wireless network nodes; and (2) minimize the number of network nodes. To solve this problem, we propose an algorithm based on a genetic algorithm. To evaluate the proposed method, we compared it with three benchmark methods, and the results show that it requires fewer nodes than the other methods.
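The two-goal optimization (meet a coverage constraint, minimize node count) can be sketched with a toy genetic algorithm over a set of candidate positions. This 1-D version, with bitstring genomes, one-point crossover, and a coverage penalty, is only an illustration; the paper's 3-D formulation and indoor path-loss model are not reproduced here:

```python
import random

def coverage(nodes, targets, radius):
    # Fraction of target points within `radius` of some node.
    covered = sum(any(abs(t - n) <= radius for n in nodes) for t in targets)
    return covered / len(targets)

def fitness(genome, candidates, targets, radius, min_cov):
    nodes = [c for c, g in zip(candidates, genome) if g]
    cov = coverage(nodes, targets, radius)
    # Penalize layouts below the coverage requirement; among feasible
    # layouts, fewer nodes is better (higher fitness).
    penalty = 0.0 if cov >= min_cov else (min_cov - cov) * 1000.0
    return -(len(nodes) + penalty)

def evolve(candidates, targets, radius, min_cov, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(candidates)
    popl = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda g: fitness(g, candidates, targets, radius, min_cov),
                  reverse=True)
        elite = popl[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]      # one-point crossover
            child[rng.randrange(n)] ^= 1   # point mutation
            children.append(child)
        popl = elite + children
    best = max(popl, key=lambda g: fitness(g, candidates, targets, radius, min_cov))
    return [c for c, g in zip(candidates, best) if g]
```

The penalty term encodes goal (1) as a soft constraint while the node count encodes goal (2), mirroring the single objective function the paper optimizes.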
Transformer Architectures in Vision
[2018 ICML] Image Transformer
[2019 CVPR] Video Action Transformer Network
[2020 ECCV] End-to-End Object Detection with Transformers
[2021 ICLR] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Haze removal for a single remote sensing image based on deformed haze imaging... (LogicMindtech Nologies)
A new approach of edge detection in SAR images using region based active cont... (eSAT Journals)
Abstract: This paper presents a new methodology for edge detection in complex radar images. The approach consists of an edge-improvisation algorithm followed by edge detection. The highly heterogeneous nature of complex radar data necessitates an edge-enhancement step before edge detection; thus, the use of the discrete wavelet transform in the edge-improvisation algorithm is justified. A region-based active contour model is then used as the edge detection algorithm. The paper proposes a distribution-fitting energy with a level-set function, with neighborhood means and variances as variables. The performance is tested by applying the method to different images, and the results are analyzed. Keywords: edge detection, edge improvisation, synthetic aperture radar (SAR), wavelet transforms.
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in
edge detection tasks, but their large number of parameters often leads to high memory and energy
costs for implementation on lightweight devices. In this paper, we propose a new architecture, called
Efficient Deep-learning Gradients Extraction Network (EDGE-Net), which integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing
network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge
detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2
datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only
500k parameters, without relying on pre-trained weights.
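The parameter savings from the Depthwise Separable Convolutions that EDGE-Net builds on are easy to quantify; the 64-to-128-channel 3x3 layer below is standard factorization arithmetic, not EDGE-Net's actual layer sizes:

```python
def conv_params(in_ch, out_ch, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return in_ch * out_ch * k * k

def depthwise_separable_params(in_ch, out_ch, k):
    # Depthwise stage: one k x k spatial filter per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return in_ch * k * k + in_ch * out_ch
```

For a 3x3 layer mapping 64 channels to 128, the standard convolution needs 73,728 weights while the depthwise separable factorization needs 8,768, roughly an 8.4x reduction; savings of this kind are what let a 500k-parameter detector stay competitive.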
BalloonNet: A Deploying Method for a Three-Dimensional Wireless Network Surro...Naoki Shibata
Aiming at fast establishment of a wireless network around a multi-level building in a disaster area, we propose an efficient method to determine the locations of network nodes in the air. Nodes are attached to balloons outside a building and deployed in the air so that the network can be accessed from anywhere in the building. In this paper, we introduce an original radio propagation model for predicting path loss from an outdoor position to a position inside a building. In order to address the three-dimensional deployment problem, the proposed method optimizes an objective function for satisfying two goals: (1) guarantee the coverage: the target space needs to be covered by over a certain percentage by wireless network nodes, (2) minimize the number of network nodes. For solving this problem, we propose an algorithm based on a genetic algorithm. To evaluate the proposed method, we compared our method with three benchmark methods, and the results show that the proposed method requires fewer nodes than other methods.
Transformer Architectures in Vision
[2018 ICML] Image Transformer
[2019 CVPR] Video Action Transformer Network
[2020 ECCV] End-to-End Object Detection with Transformers
[2021 ICLR] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
WEBINAR ON FUNDAMENTALS OF DIGITAL IMAGE PROCESSING DURING COVID LOCK DOWN by by K.Vijay Anand , Associate Professor, Department of Electronics and Instrumentation Engineering , R.M.K Engineering College, Tamil Nadu , India
Haze removal for a single remote sensing image based on deformed haze imaging...LogicMindtech Nologies
IMAGE PROCESSING Projects for M. Tech, IMAGE PROCESSING Projects in Vijayanagar, IMAGE PROCESSING Projects in Bangalore, M. Tech Projects in Vijayanagar, M. Tech Projects in Bangalore, IMAGE PROCESSING IEEE projects in Bangalore, IEEE 2015 IMAGE PROCESSING Projects, MATLAB Image Processing Projects, MATLAB Image Processing Projects in Bangalore, MATLAB Image Processing Projects in Vijayangar
A new approach of edge detection in sar images using region based active cont...eSAT Journals
Abstract This paper presents a new methodology for the edge detection of complex radar images. The approach includes the edge improvisation algorithm and followed with edge detection. The nature of complex radar images made edge enhancement part before the edge detection as the data is highly heterogeneous in nature. Thus, the use of discrete wavelet transform in the edge improvisation algorithm is justified. Then region based active contour model is used as edge detection algorithm. The paper proposes the distribution fitting energy with a level set function and neighborhood means and variances as variables. The performance is tested by applying it on different images and the results are been analyzed. Keywords: Edge detection, Edge improvisation, Synthetic Aperture radar (SAR), wavelet transforms.
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in
edge detection tasks, but their large number of parameters often leads to high memory and energy
costs for implementation on lightweight devices. In this paper, we propose a new architecture, called
Efficient Deep-learning Gradients Extraction Network (EDGE-Net), that integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing
network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge
detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2
datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only
500k parameters, without relying on pre-trained weights.
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in
edge detection tasks, but their large number of parameters often leads to high memory and energy
costs for implementation on lightweight devices. In this paper, we propose a new architecture, called
Efficient Deep-learning Gradients Extraction Network (EDGE-Net), that integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing
network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge
detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2
datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only
500k parameters, without relying on pre-trained weights.
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times
that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to
be possible to perfectly reconstruct a signal from a series of measurements. Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if the signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly. The main idea is that with prior knowledge about constraints on the signal’s frequencies, fewer samples are needed to reconstruct the signal. Sparse sampling (also known as, compressive sampling, or compressed sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions tounder determined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem. There are two conditions under which recovery is possible.[1] The first one is sparsity which requires the signal to be sparse in some domain. The second one is incoherence which is applied through the isometric property which is sufficient for sparse signals Possibility
of compressed data acquisition protocols which directly acquire just the important information Sparse sampling (CS) is a fast growing area of research. It neglects the extravagant acquisition process by measuring lesser values to reconstruct the image or signal. Sparse sampling is adopted successfully in various fields of image processing and proved its efficiency. Some of the image processing applications like face recognition, video encoding, Image encryption and reconstruction are presented here.
Step by step process of uploading presentation videos Hoopeer Hoopeer
Deep neural network, compressive sensing, floating gate techniques can be efficiently employed to increase voltage swing and reduce supply voltage requirements of class AB regulated cascode current mirrors, implement extreme low power analog circuits with this process. /also have good references for subthreshold region.
[Extreme Low Power Differential Pair: An Experimental Evaluation, Super-Gain-Boosted Miller Op-Amp based on Nested Regulated Cascode Techniques , Step by Step process of uploading presentation videos, Dennis Ritchie The creator of the C programming language and co-creator of Unix
A Novel Technique for Multi User Multiple Access Spatial Modulation Using Ada...ijtsrd
The need for high peak data rates with the corresponding need for signi cantly increased spectral e ciencies, and the support for service speci c quality of service QoS requirements are the key elements that drive the research in the area of wireless communication access technologies Since space constellations and signal constellations are orthogonal, the well known digital signal modulation schemes can be used on top. The spatial multiplexing gain comes from the simultaneous transmission of spatially encoded bits. Analytical and numerical performance of SM in di erent channel conditions, including practical channel considerations, are studied and compared to existing MIMO techniques in this thesis. Results show that SM achieve low BER bit error ratio with a tremendous reduction in receiver complexity without sacri cing spectral e ciency. Anshul Sengar | Gaurav Morghare "A Novel Technique for Multi User Multiple Access Spatial Modulation Using Adaptive Coding and Modulation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1 , December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47856.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/47856/a-novel-technique-for-multi-user-multiple-access-spatial-modulation-using-adaptive-coding-and-modulation/anshul-sengar
Multivariate dimensionality reduction in cross-correlation analysis ivanokitov
In master event location, a matched-filter like technique based on cross-correlation with pre-defined waveform template, a crucial role plays a template design. Reduction of templates number for certain region under monitoring is extremely important both for interactive and real-time processing as it may dramatically reduce the time of resulting product delivery and may improve low magnitude event detection threshold and location.
A number of dimensionality reduction methods have been explored to minimize the number of master events needed for cross correlation based seismic event detection and location, including multidimensional data model approaches (hypercomplex and tensorial). The primary method considered is Principle Component Analysis (PCA), which is widely accepted as a superior method of matrix factorization or Singular Value Decomposition (SVD). For regional seismic events, Harris (2006) used this in designing a subspace detector for the cross correlation based event location. Other methods of dimensionality reduction explored either theoretically or analytically included Robust PCA, Kernel PCA, Incremental PCA (IPCA), Empirical Subspace Detector (SSD) (Barrett and Beroza, 2015) and Independent Component Analysis (ICA).
Distributed processing of probabilistic top k queries in wireless sensor netw...IEEEFINALYEARPROJECTS
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09849539085, 09966235788 or mail us - ieeefinalsemprojects@gmail.co¬m-Visit Our Website: www.finalyearprojects.org
JAVA 2013 IEEE DATAMINING PROJECT Distributed processing of probabilistic top...IEEEGLOBALSOFTTECHNOLOGIES
To Get any Project for CSE, IT ECE, EEE Contact Me @ 09849539085, 09966235788 or mail us - ieeefinalsemprojects@gmail.com-Visit Our Website: www.finalyearprojects.org
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
1. Thesis presentation by
Seyyedomid Badretale
Supervisor: Dr. J. Alirezaie
Design and Implementation of Convolutional Neural Networks for Low-Dose CT Image Noise Reduction
3. September-10-17
• Two CNN architectures are presented to remove noise from low-dose CT images.
• The first network is inspired by dictionary learning methods, and a function is assigned to each layer based on this correlation.
• An efficient network is presented by improving the first architecture in terms of speed and performance.
• Important parameters for each network are investigated to find the best performance.
• The models are tested and the results are compared to state-of-the-art methods.
Contributions
4. X-Ray Computed Tomography (CT) Imaging
• Widely used diagnostic device
• Can reveal bones as well as soft tissues.
• Cross-sectional images
• Produce 3D images
• X-rays are ionizing beams
• Harmful to tissues (DNA)
• Higher cancer risk
Fundamentals of CT imaging [1]
5.
• Low mAs (low dose) → few photons detected (photon starvation) → more noise (low SNR)
Radiation Dose and Image Quality
60 mA 440 mA
Computed tomography artifacts [2]
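The dose-noise relationship above can be illustrated with a quick Poisson photon-counting simulation (a sketch with an assumed photons-per-mAs scaling, not the thesis's acquisition model):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(mean_photons, n=100_000):
    """Relative std of Poisson photon counts, which behaves like 1/sqrt(mean)."""
    counts = rng.poisson(mean_photons, size=n)
    return counts.std() / counts.mean()

# Assumption: detected photons scale linearly with tube current-time product (mAs).
low_dose  = relative_noise(25 * 40)   # e.g. 25 mAs
high_dose = relative_noise(200 * 40)  # e.g. 200 mAs

print(f"low-dose relative noise:  {low_dose:.4f}")
print(f"high-dose relative noise: {high_dose:.4f}")
```

Fewer photons at low mAs means a larger relative fluctuation, i.e. lower SNR in the reconstructed image.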
6. Related Works
Literature Review
➢ Denoising applied to the sinogram (projection data) [3]
➢ Effective, but not readily available and accessible.
➢ Denoising applied to the reconstructed image:
➢ Wavelet [4]
➢ Total Variation [5]
➢ Dictionary Learning and Sparse Representation [6,7]
Deep learning
➢ Wikipedia: Machine learning […] gives “computers the ability to learn without being explicitly programmed.”
➢ Ingredients: smart algorithms, lots of examples (data), lots of computational power.
7. CNN in Medical applications
▪ DICOM images
▪ 16-bit images
▪ Image and info
▪ Wide range of intensity
▪ Medical noise vs. natural noise
▪ Statistical properties of low-dose CT noise: methods such as median filtering and Gaussian filtering
▪ Filter size
▪ Too small: less structural information
▪ Too large: huge computational complexity, smaller dataset size, over-smoothing
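The filter-size trade-off can be seen with a naive median filter on a synthetic piecewise-constant image (an illustrative sketch; the kernel sizes and noise level are assumed, not taken from the thesis):

```python
import numpy as np

def median_filter(img, k):
    """Naive k x k median filter (border pixels are left unchanged)."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # one simple structure
noisy = clean + rng.normal(0, 0.3, clean.shape)

for k in (3, 7):
    rmse = np.sqrt(np.mean((median_filter(noisy, k) - clean) ** 2))
    print(f"{k}x{k} median filter RMSE: {rmse:.3f}")
```

A small kernel suppresses less noise; a large kernel costs more computation and smooths structural detail, which is the trade-off the slide points out.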
10.
1) Extract low-dose patches and project them onto a low-dose dictionary.
2) Iteratively process the low-dose coefficients to create the normal-dose coefficients.
3) Project the normal-dose coefficients onto the normal-dose dictionary and average the overlapping patches.
from Dictionary learning to LDCNN
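The three steps above can be sketched in NumPy; the dictionaries here are random orthonormal matrices and the coefficient mapping is an identity placeholder, both standing in for the learned components of the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
p, H, W = 4, 16, 16                       # patch size and toy image size
image = rng.normal(size=(H, W))

# Hypothetical dictionaries (learned in the thesis; random orthonormal here).
D_low,  _ = np.linalg.qr(rng.normal(size=(p * p, p * p)))
D_norm, _ = np.linalg.qr(rng.normal(size=(p * p, p * p)))

recon = np.zeros((H, W)); weight = np.zeros((H, W))
for i in range(H - p + 1):
    for j in range(W - p + 1):
        patch = image[i:i + p, j:j + p].reshape(-1)
        c_low = D_low.T @ patch                  # 1) project onto low-dose dictionary
        c_norm = c_low                           # 2) placeholder low-to-normal mapping
        out = (D_norm @ c_norm).reshape(p, p)    # 3) reconstruct with normal-dose dictionary
        recon[i:i + p, j:j + p] += out
        weight[i:i + p, j:j + p] += 1.0
recon /= weight                                  # average the overlapping patches
print(recon.shape)
```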
15.
• End-to-end mapping between the low-dose and normal-dose CT images.
• Fully feed-forward network; no optimization problem needs to be solved at inference.
• Hierarchically structured feature maps are trained from low-level (blobs, edges, etc.) to high-level (more complex and detailed shapes): transfer learning.
• Concise structure, yet provides superior accuracy compared to state-of-the-art methods.
Properties of the LDCNN
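A minimal NumPy sketch of such a feed-forward mapping (three convolutional stages with assumed toy widths; a single forward pass produces the output, with no per-image optimization):

```python
import numpy as np

def conv2d(x, kernels):
    """Same-size 2-D convolution via zero padding; kernels: (n_out, n_in, k, k)."""
    n_out, n_in, k, _ = kernels.shape
    r = k // 2
    xp = np.pad(x, ((0, 0), (r, r), (r, r)))
    H, W = x.shape[1:]
    out = np.zeros((n_out, H, W))
    for o in range(n_out):
        for c in range(n_in):
            for i in range(H):
                for j in range(W):
                    out[o, i, j] += np.sum(xp[c, i:i + k, j:j + k] * kernels[o, c])
    return out

rng = np.random.default_rng(0)
low_dose = rng.normal(size=(1, 8, 8))        # toy "low-dose" input
f1 = rng.normal(size=(4, 1, 3, 3)) * 0.1     # feature extraction stage
f2 = rng.normal(size=(4, 4, 3, 3)) * 0.1     # non-linear mapping stage
f3 = rng.normal(size=(1, 4, 3, 3)) * 0.1     # reconstruction stage
h = np.maximum(conv2d(low_dose, f1), 0)      # ReLU
h = np.maximum(conv2d(h, f2), 0)
denoised = conv2d(h, f3)                     # one pass, no optimization loop
print(denoised.shape)
```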
23.
from the sparse-LDCNN to Deep-LDCNN
• Compressing layer: apply a layer to reduce the computational cost (pooling layer).
• Mapping layer: adopt a wider mapping layer with a lower size of feature maps; capture the non-linear property of the noise.
• Enlarging layer: the inverse process of the compressing layer.
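The motivation for the compressing/enlarging pair can be made concrete by counting convolution weights (the widths 64 and 16 below are assumed for illustration, not the thesis's actual values):

```python
def conv_params(n_in, n_out, k):
    """Parameter count of one conv layer: weights plus biases."""
    return n_in * n_out * k * k + n_out

# One wide 3x3 mapping layer at full width vs. compress -> map -> enlarge.
wide       = conv_params(64, 64, 3)
compressed = (conv_params(64, 16, 1)      # compressing layer (1x1)
              + conv_params(16, 16, 3)    # mapping at reduced width
              + conv_params(16, 64, 1))   # enlarging layer (1x1)
print(wide, compressed)   # 36928 vs 4448
```

Mapping at the reduced width needs far fewer parameters, which is where the speed improvement of the deep variant comes from.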
27.
Sensitive parameters:
• Number of feature maps in the feature extraction layer
• Number of mapping layers
• Number of feature maps in the compressing layer
Network parameters and Efficiency
Symmetric:
• Size of the feature maps in the mapping layers
• Compressing and Enlarging layers
29.
Implementation Details
• Normalize images, extract patches from the CT images using the stride and patch size, and create a 4-D Hierarchical Data Format (HDF) file.
• Caffe implementation: define the layer parameters, such as the number of layers, number of filters, kernel size, and loss layer, as well as the network parameters, such as learning rate, momentum, and number of iterations.
• Import the network structures from Caffe to Matlab and save the parameters.
• Apply test images and evaluation metrics.
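The patch-extraction step can be sketched as follows (patch size 6 and stride 4 are assumed here, not the thesis's actual values), producing the 4-D array that would be written to the HDF file:

```python
import numpy as np

def extract_patches(img, patch=6, stride=4):
    """Slide a window over a normalized CT slice; stack patches as N x 1 x p x p."""
    H, W = img.shape
    patches = [img[i:i + patch, j:j + patch]
               for i in range(0, H - patch + 1, stride)
               for j in range(0, W - patch + 1, stride)]
    return np.stack(patches)[:, None, :, :]   # 4-D: (N, channels, height, width)

rng = np.random.default_rng(0)
ct = rng.random((32, 32))
ct = (ct - ct.min()) / (ct.max() - ct.min())  # normalize intensities to [0, 1]
data = extract_patches(ct)
print(data.shape)   # (49, 1, 6, 6) for a 32x32 slice with these settings
```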
30.
CT Datasets
I. Anthropomorphic thoracic phantom
• 407 normal-dose (200 mAs) and corresponding low-dose (25 mAs) CT images.
• Focused on lung nodules with different characteristics (size, density, shape, location).
II. CATPHAN600 phantom
• 584 normal-dose (210 mAs) and corresponding low-dose (60 mAs) CT images.
• Involves line pairs of different spacing and spheres of varying contrast; used for evaluating spatial resolution.
III. Piglet dataset
• 906 normal-dose (300 mAs) and corresponding low-dose (73, 30, 15 mAs) CT images.
31.
CT Datasets
Dataset properties:
• DICOM images
• Randomly shuffled data
• The training, validation, and test sets comprise 50%, 25%, and 25% of each dataset, respectively.
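The shuffled 50/25/25 split can be sketched as follows (using the piglet dataset's 906 slices as the example size):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 906                                   # e.g. the piglet dataset
idx = rng.permutation(n)                  # randomly shuffle slice indices
n_train, n_val = n // 2, n // 4           # 50% / 25% / remaining 25%
train = idx[:n_train]
val   = idx[n_train:n_train + n_val]
test  = idx[n_train + n_val:]
print(len(train), len(val), len(test))    # 453 226 227
```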
37.
Quality Assessments
• Peak signal-to-noise ratio (PSNR)
• Root mean squared error (RMSE)
• Structural Similarity (SSIM)
• Multi-scale SSIM (MSSIM)
• Universal quality index (UQI)
• Weighted signal-to-noise ratio (WSNR)
• Visual information fidelity (VIF)
• Noise quality measure (NQM)
• Information fidelity criterion (IFC)
Evaluation Metrics
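Two of the listed metrics, RMSE and PSNR, are straightforward to compute directly (a sketch assuming images normalized to [0, 1]):

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error between two images."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB, for images scaled to [0, peak]."""
    return 20 * np.log10(peak / rmse(x, y))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
print(f"RMSE: {rmse(ref, noisy):.4f}  PSNR: {psnr(ref, noisy):.2f} dB")
```

Higher PSNR and lower RMSE indicate a denoised image closer to the normal-dose reference.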
38.
Evaluation Metrics
• UQI: combines luminance, contrast, and structural comparisons.
• WSNR (dB): contrast sensitivity functions (CSF) are used as weights.
• VIF: the amount of information shared between the source and the distorted image.
• NQM (dB): considers variation in contrast sensitivity with distance, image dimensions, and spatial frequency.
• IFC: the mutual information is derived for one sub-band and then generalized to multiple sub-bands.
46.
Summary and Conclusion
• Demand for low-dose CT image denoising.
• A CNN architecture inspired by dictionary methods was presented.
• An architecture improved in terms of speed and performance was presented.
• The parameters of each architecture were investigated to find the optimum results.
• The proposed methods outperformed other state-of-the-art methods.
48.
Future work
Hot research areas:
• Deep learning is growing fast and advanced methods are being proposed:
• Deep residual framework.
• Generative adversarial framework.
• Block-matching framework.
• Improving the contrast issue in DICOM images.
49.
➢ [1] A. C. Kak and M. Slaney, “Principles of Computerized Tomographic Imaging,” IEEE Press, 1988.
➢ [2] F. E. Boas and D. Fleischmann, “Computed tomography artifacts: Causes and reduction techniques,” Imaging in Medicine, vol. 4, no. 2, pp. 229–240, 2012.
➢ [3] P. J. La Rivière, “Penalized-likelihood sinogram smoothing for low-dose CT,” Med. Phys., vol. 32, no. 6, pp. 1676–1683, 2005.
➢ [4] S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and compression,” IEEE Trans. Image Process., vol. 9, no. 9, pp. 1532–1546, 2000.
➢ [5] L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D Nonlinear Phenom., vol. 60, no. 1–4, pp. 259–268, 1992.
➢ [6] S. Ghadrdan, J. Alirezaie, J. Dillenseger, and P. Babyn, “Low-dose computed tomography image denoising based on joint wavelet and sparse representation,” in Proc. 36th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), 2014, pp. 3325–3328.
➢ [7] P. Chatterjee and P. Milanfar, “Image denoising using locally learned dictionaries,” in Proc. SPIE 7246, 2009.
➢ [8] A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1715–1728, Apr. 2012.
➢ [9] W. Dong, G. Shi, Y. Ma, and X. Li, “Image restoration via simultaneous sparse coding: Where structured sparsity meets Gaussian scale mixture,” Int. J. Comput. Vis., vol. 114, no. 2, pp. 217–232, 2015.
➢ [10] A. Khodabandeh, J. Alirezaie, P. Babyn, and A. Ahmadian, “Computed tomography image denoising by learning to separate morphological diversity,” in Proc. 38th Int. Conf. Telecommun. Signal Process. (TSP), 2015, pp. 513–517.
➢ [11] H. Chen, Y. Zhang, W. Zhang, P. Liao, K. Li, and J. Zhou, “Low-dose CT denoising with convolutional neural network,” 2017.
References
50.
➢ S. Badretale, F. Shaker, J. Alirezaie, and P. Babyn, “Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction,” accepted and presented at the International Conference on Artificial Intelligence Applications and Technologies, 2017, USA.
➢ S. Badretale, F. Shaker, J. Alirezaie, and P. Babyn, “Deep Convolutional Approach for Low-Dose CT Image Noise Reduction,” submitted to ICBME 2017.
Publications