Deep Learning-Based Universal Beamformer for Ultrasound Imaging - Shujaat Khan
In ultrasound (US) imaging, individual channel RF measurements are back-propagated and accumulated to form an image after applying specific delays. While this time reversal is usually implemented using a hardware- or software-based delay-and-sum (DAS) beamformer, the performance of DAS degrades rapidly when data acquisition is not ideal. Herein, for the first time, we demonstrate that a single data-driven adaptive beamformer, designed as a deep neural network, can robustly generate high-quality images for various detector channel configurations and subsampling rates. The proposed deep beamformer is evaluated for two distinct acquisition schemes: focused ultrasound imaging and plane-wave imaging. Experimental results showed that the proposed deep beamformer exhibits significant performance gains for both focused and plane-wave imaging schemes in terms of contrast-to-noise ratio and structural similarity.
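The delay-and-sum principle described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the integer-sample delays and toy array shapes are assumptions made for clarity.

```python
import numpy as np

def das_beamform(rf, delays):
    """Delay-and-sum: shift each channel trace by its geometric delay so the
    echoes from one focal point line up, then sum across channels."""
    out = np.zeros(rf.shape[1])
    for ch, d in enumerate(delays):
        out += np.roll(rf[ch], -d)         # align this channel's echo arrival
    return out / rf.shape[0]

# toy data: the same echo arrives 2 samples later on channel 1 than channel 0
rf = np.zeros((2, 8))
rf[0, 3] = 1.0
rf[1, 5] = 1.0
aligned = das_beamform(rf, delays=[0, 2])  # both spikes now add up at sample 3
```

After delaying, the two channels reinforce at the true echo position, which is the coherent-summation effect that DAS relies on and that degrades when delays or channels are imperfect.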
Universal plane wave compounding for high quality US imaging using deep learning - Shujaat Khan
Plane-wave compounding sums several successive plane waves, incident at different angles, to form an image. By applying time reversal to the received signals, transmit focusing can be synthesized. Unfortunately, to improve the temporal resolution, the number of plane waves should be reduced, which often degrades the image quality. To address this problem, an image-domain learning method using neural networks has been proposed, but the network needs to be retrained when the number of plane waves changes. Herein, we propose, for the first time, a universal plane-wave compounding scheme using deep learning to directly process plane waves and RF data acquired at different view angles and sub-sampling rates to generate high-quality US images.
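As a toy illustration of why compounding helps: averaging per-angle images suppresses uncorrelated noise roughly as the square root of the number of transmits. The per-angle frames below are simulated with additive noise, an assumption standing in for actual beamformed plane-wave images.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.ones((32, 32))                  # idealized reflectivity map
n_angles = 16                              # number of plane-wave transmits
frames = [scene + 0.5 * rng.standard_normal(scene.shape)
          for _ in range(n_angles)]        # one noisy image per steering angle
compound = np.mean(frames, axis=0)         # compounding = per-pixel averaging
single_err = np.std(frames[0] - scene)     # noise level of one transmit
compound_err = np.std(compound - scene)    # ~single_err / sqrt(n_angles)
```

Cutting the number of angles for speed shrinks this averaging gain, which is exactly the quality/frame-rate trade-off the abstract describes.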
Switchable and tunable deep beamformer using adaptive instance normalization ... - Shujaat Khan
Recent proposals of deep learning-based beamformers for ultrasound (US) imaging have attracted significant attention as computationally efficient alternatives to adaptive and compressive beamformers. Moreover, deep beamformers are versatile in that image post-processing algorithms can be readily combined. Unfortunately, with the existing technology, a large number of beamformers need to be trained and stored for different probes, organs, depth ranges, operating frequencies, and desired target 'styles', demanding significant resources such as training data. To address this problem, here we propose a switchable and tunable deep beamformer that can switch between various types of output such as DAS, MVBF, DMAS, GCF, etc., and can also adjust the noise removal level at the inference phase, using a simple switch or tunable knob. This novel mechanism is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated by a single generator by merely changing the AdaIN codes. Experimental results using B-mode focused ultrasound confirm the flexibility and efficacy of the proposed method for various applications.
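The AdaIN mechanism itself is easy to state: normalize each feature map, then re-scale and re-shift it with a code-dependent pair (gamma, beta). A minimal numpy sketch follows; in the actual network the codes are learned per target style, whereas the values here are placeholders.

```python
import numpy as np

def adain(x, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization: per-channel normalization followed by
    a style-specific affine transform; (gamma, beta) is the 'AdaIN code'."""
    mu = x.mean(axis=(-2, -1), keepdims=True)
    sigma = x.std(axis=(-2, -1), keepdims=True)
    return gamma * (x - mu) / (sigma + eps) + beta

feat = np.random.default_rng(1).standard_normal((4, 16, 16))  # (C, H, W)
style_a = adain(feat, gamma=1.0, beta=0.0)   # one code -> one output style
style_b = adain(feat, gamma=2.0, beta=0.5)   # changing the code switches style
```

Because only (gamma, beta) change between styles, a single set of convolutional weights can serve every output type, which is the storage saving the abstract claims.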
Switchable Deep Beamformer for Ultrasound Imaging Using AdaIN - Shujaat Khan
In ultrasound (US) imaging, various adaptive beamforming methods have been proposed to improve the resolution and contrast-to-noise ratio of the delay-and-sum (DAS) beamformer. Unfortunately, they often require computationally expensive calculations, and their performance degrades when the underlying model is not sufficiently accurate. Moreover, ultrasound images usually require various types of post-filtering, such as deblurring and despeckling, which further increases the complexity of the system. Deep learning-based solutions provide a quick remedy to these issues; however, in the current technology, a separate beamformer must be trained and stored for each application, demanding significant scanner resources. To address this problem, here we propose a switchable deep beamformer that can produce various types of output, such as DAS, speckle removal, deconvolution, etc., using a single network with a simple switch. In particular, the switch is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated by merely changing the AdaIN code. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed method.
Adaptive and compressive beamforming using deep learning for medical ultrasound - Shujaat Khan
In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and the contrast-to-noise ratio of the delay-and-sum (DAS) beamformer. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here we propose a deep-learning-based beamformer that generates significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or subsampled radio frequency (RF) data acquired at various subsampling rates and detector configurations, so that it can generate high-quality US images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using B-mode focused US confirm the efficacy of the proposed method.
Modern medical imaging has been digitized using various technologies, which are described in this presentation. Presented in the Department of Radiology, B.Sc. Medical Imaging Technology, Institute of Medicine, Nepal.
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named the total variation regularized discrete algebraic reconstruction technique (TVR-DART), with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through the iterations. Extensive experiments on simulated data and on experimental μCT and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstructions than existing algorithms under noisy conditions, from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
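One core ingredient of DART-type methods, the soft segmentation that pulls the running reconstruction toward a few discrete gray values, can be sketched as a sum of logistic steps. The gray levels and sharpness k below are illustrative assumptions; the actual TVR-DART formulation also carries a total-variation penalty and estimates these parameters during the iterations.

```python
import numpy as np

def soft_segment(x, grays, k=25.0):
    """Smoothly push continuous reconstruction values toward a small set of
    discrete gray values, using a logistic step at each midpoint threshold."""
    grays = np.sort(np.asarray(grays, float))
    x = np.asarray(x, float)
    out = np.full_like(x, grays[0])
    for lo, hi in zip(grays[:-1], grays[1:]):
        t = (lo + hi) / 2.0                          # threshold between levels
        out += (hi - lo) / (1.0 + np.exp(-k * (x - t)))
    return out

x = np.array([0.02, 0.48, 0.52, 0.97])               # continuous estimates
seg = soft_segment(x, grays=[0.0, 1.0])              # pushed toward {0, 1}
```

Because the step is smooth rather than a hard threshold, it stays differentiable and can sit inside a gradient-based optimization loop.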
A ROBUST CHAOTIC AND FAST WALSH TRANSFORM ENCRYPTION FOR GRAY SCALE BIOMEDICA... - sipij
In this work, a new scheme of image encryption based on chaos and the Fast Walsh Transform (FWT) has been proposed. We used two chaotic logistic maps and applied combined chaotic encryption methods to the two-dimensional FWT of images. The encryption process involves two steps: first, chaotic sequences generated by the chaotic logistic maps are used to permute and mask the intermediate results, or array, of the FWT; the next step consists of changing the chaotic sequences, or the initial conditions of the chaotic logistic maps, between two intermediate results of the same row or column. Changing the encryption key several times on the same row or column makes the cipher more robust against attack. We tested our algorithms on many biomedical images. We also used images from databases to compare our algorithm to those in the literature. Statistical analysis and key sensitivity tests show that our proposed image encryption scheme provides an efficient and secure way for real-time encryption and transmission of biomedical images.
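The permutation stage driven by a logistic map can be sketched compactly. This is a toy illustration under assumed parameters (x0, r are made-up key values), and it shows only the chaos-derived permutation, not the paper's masking stage or its key-switching schedule.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Chaotic logistic map: x_{k+1} = r * x_k * (1 - x_k)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def permute(row, key_x0, r=3.99):
    """Permute one row of coefficients using a chaos-derived ordering."""
    order = np.argsort(logistic_sequence(key_x0, r, len(row)))
    return row[order], order

row = np.arange(8.0)                       # stand-in for a row of FWT values
scrambled, order = permute(row, key_x0=0.3141)
restored = np.empty_like(row)
restored[order] = scrambled                # inverse permutation recovers the row
```

The sensitivity of the map to x0 is what makes the scheme key-sensitive: a tiny change in the initial condition yields a completely different ordering.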
International Journal of Engineering Research and Applications (IJERA) is an open access online peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ... - Petteri Teikari, PhD
Shallow literature analysis of recent trends in computational ophthalmic imaging, with a focus on neurodegenerative disease imaging / oculomics.
Open-ended literature review on what you could be building next.
#1/2: Hardware
#2/2: Computational imaging
Alternative download link:
https://www.dropbox.com/scl/fi/d34pgi3xopfjbrcqj2lvi/retina_imaging_2024_computational.pdf?rlkey=xnt1dbe8rafyowocl9cbgjh3p&dl=0
Instructions you are to read the assigned paper and prep.docx - jaggernaoma
Instructions:
you are to read the assigned paper and prepare the following:
· Summary (2 pages, 11 font size, double space)
· Discuss space-division multiplexing
· What it is
· The feasibility and potential future direction
· Implications of Shannon’s theorem for achieving high bit rates
· What can be considered the best technology for higher bit rate in optical fiber?
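The Shannon bullet above comes down to the Shannon-Hartley formula, C = B log2(1 + SNR). A quick worked check, with illustrative numbers (the 50 GHz bandwidth and 20 dB SNR are assumptions, not values from the paper):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# illustrative numbers: 50 GHz of usable bandwidth at 20 dB SNR
snr = 10 ** (20 / 10)               # 20 dB -> linear SNR of 100
c = shannon_capacity(50e9, snr)     # capacity upper bound in bit/s
```

The formula makes the trade-off explicit: once SNR is capped (e.g., by fiber nonlinearity), the only ways left to raise the bit rate are more bandwidth or more parallel channels, which is what motivates space-division multiplexing.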
Communications of the ACM, October 2016, Vol. 59, No. 10 | news
[Image by Jamani Caillet/EPFL]

Optical Fibers Getting Full
Exploring ways to push more data through a fiber one-tenth the thickness of the average human hair.
Science | DOI:10.1145/2983268 | Don Monroe
http://dx.doi.org/10.1145/2983268

SINCE OPTICAL FIBERS were first deployed for communications in the 1970s, the number of bits per second a single fiber can carry has grown by the astonishing factor of 10 million, permitting an enormous increase in total data traffic, including cellular phone calls that spend most of their lives as bits traveling in fiber.

The exponential growth resembles Moore’s Law for integrated circuits. Technology journalist Jeff Hecht has proposed calling the fiber version “Keck’s Law” after Corning researcher Donald Keck, whose improvements in glass transparency in the early 1970s helped launch the revolution. The simplicity of these “laws,” however, obscures the repeated waves of innovation that sustain them, and both laws seem to be approaching fundamental limits. Fiber researchers have some cards to play, though. Moreover, if necessary the industry can install more fibers, similar to the way multiple processors took the pressure off saturating clock rates. However, the new solutions may not yield the same energy and cost savings that have helped finance the telecommunication explosion.

Optical fiber became practical when researchers learned how to purify materials and fabricate fibers with extraordinary transparency, by embedding a higher refractive-index core to trap the light deep within a much larger cladding. Subsequent improvements reduced losses to their current levels, about 0.2 dB/km for light wavelengths (infrared “colors”) near 1.55 μm. A laser beam that is turned on and off to encode bits can transmit voice or data tens of kilometers before it must be detected and retransmitted. In ensuing years the bit rate increased steadily, driven both by faster transmitters and receivers and by fiber designs that minimized the spread of the pulses.

As the pace of improvements began to slow, researchers realized they could send more information through fiber by combining light of slightly different wavelengths, each carrying its own …
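The 0.2 dB/km loss figure above translates directly into reach. A quick check (the 100 km span length is an illustrative choice):

```python
def remaining_power_fraction(loss_db_per_km, km):
    """Convert fiber attenuation in dB/km into a linear power fraction."""
    return 10 ** (-loss_db_per_km * km / 10)

frac = remaining_power_fraction(0.2, 100)  # 100 km at 0.2 dB/km = 20 dB loss
```

After 100 km only about 1% of the launched power remains, which is why the signal must be detected and retransmitted (or amplified) every few tens of kilometers, as the article notes.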
An optimized discrete wavelet transform compression technique for image trans... - IJECEIAES
Image transfer in wireless multimedia sensor networks (WMSNs) is developing fast in both research and applications. Nevertheless, this area of research faces many problems, such as the low quality of the received images after decompression, the limited number of reconstructed images at the base station, and the high energy consumption of the compression and decompression process. In order to fix these problems, we propose a compression method based on the classic discrete wavelet transform (DWT). Our method applies the wavelet compression technique multiple times on the same image. As a result, we found that the number of received images is higher than with the classic DWT. In addition, the quality of the received images is much higher compared to the standard DWT. Finally, the energy consumption is lower when using our technique. Therefore, our proposed compression technique is better adapted to the WMSN environment.
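The paper's exact multi-pass DWT pipeline is not reproduced here, but the compression principle is easy to show with a single-level 2D Haar transform, the simplest DWT: most of a smooth image's energy lands in the LL band, so the detail bands can be quantized or dropped cheaply. The flat test image is an assumption chosen to make that extreme.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar DWT: average (LL) and detail (LH, HL, HH) bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.full((4, 4), 5.0)        # a flat image: all energy ends up in LL
ll, lh, hl, hh = haar2d(img)      # detail bands are zero -> compress to nothing
```

Applying the transform again to the LL band, as the proposed method's repeated passes do, concentrates the energy further at each level.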
The aim of this paper is to determine the viability of an indoor optical wireless communication system. This paper introduces Visible Light Communication (VLC) along with its merits, demerits, and applications. Then the main characteristics of the VLC system, around which the project is designed, are described. The Multiple-Input Multiple-Output (MIMO) technique is used in the project in order to enhance the data rate of transmission. Instead of using a system of only one LED and one APD, which transmits only one bit at a time, a system of 4 LEDs and 4 APDs is introduced, which increases the data rate by 300% over the previous case. We observe the signal, noise, SNR, BER, etc. across the room dimensions. Finally, in the last chapter, we summarize our results on the basis of MATLAB simulations and propose some modifications to this model that can be implemented in the future.
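The 4-LED / 4-APD arrangement is four parallel spatial channels mixed by the optical propagation, which linear algebra can unmix. A toy sketch with zero-forcing detection follows; the channel-gain values are made up for illustration and are not from the paper's room model.

```python
import numpy as np

H = np.array([[1.0, 0.2, 0.1, 0.0],     # 4x4 optical channel gains:
              [0.2, 1.0, 0.2, 0.1],     # H[i, j] = APD i's view of LED j
              [0.1, 0.2, 1.0, 0.2],     # (illustrative values)
              [0.0, 0.1, 0.2, 1.0]])
bits = np.array([1.0, 0.0, 1.0, 1.0])   # 4 bits sent at once, one per LED
y = H @ bits                            # what the 4 APDs receive (noise-free)
recovered = np.linalg.solve(H, y)       # zero-forcing detection
```

Sending 4 bits per symbol instead of 1 is the 300% raw-rate increase the abstract quotes; in practice noise amplification by the inverse of H limits how much cross-talk the system can tolerate.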
Abstract— This paper demonstrates overcoming of the Abbe diffraction limit (ADL) on image resolution. Here, terahertz multispectral reconstructive imaging is described and used for analyzing nanometer-size metal lines fabricated on a silicon wafer. It is also demonstrated that while overcoming the ADL is a necessary condition, it is not sufficient to achieve sub-nanometer image resolution with longer wavelengths. A nanoscanning technology has been developed that exploits the modified Beer-Lambert law for creating a measured reflectance data matrix and utilizes the ‘inverse distance to power equation’ algorithm for achieving 3D, sub-nanometer image resolution. The nano-line images reported herein were compared to SEM images. The terahertz images of 70 nm lines agreed well with the TEM images. The 14 nm lines by SEM were determined to be ~15 nm. Thus, the wavelength-dependent Abbe diffraction limit on image resolution has been overcome. Layer-by-layer analysis has been demonstrated, where 3D images are analyzed on any of the three orthogonal planes. Images of grains on the metal lines have also been analyzed. Unlike electron microscopes, where the samples must be in a vacuum chamber and must be thin enough for electron-beam transparency, terahertz imaging is a non-destructive, non-contact technique without laborious sample preparation.
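Reading the quoted ‘inverse distance to power equation’ as standard inverse-distance weighting (an assumption on our part, not a detail confirmed by the abstract), the interpolation step can be sketched as follows, with made-up reflectance samples standing in for the measured data matrix.

```python
import numpy as np

def idw(sample_xy, sample_vals, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-to-a-power interpolation: nearer reflectance
    samples dominate the estimate at the query point."""
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    w = 1.0 / (d ** power + eps)             # weight = 1 / distance^power
    return float(np.sum(w * sample_vals) / np.sum(w))

pts  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # sample positions
vals = np.array([1.0, 0.0, 0.0])                        # measured reflectances
v = idw(pts, vals, query_xy=np.array([0.1, 0.1]))
```

Because the query point sits almost on the first sample, its weight dominates and the estimate lands close to 1; raising the power sharpens this locality, which is how a dense sub-grid of estimates can be built from sparse scan points.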
August 2022 - Top 5 Cited Articles in Microwave Engineering - jmicro
International Journal of Microwave Engineering (JMICRO) is a peer-reviewed, open access journal which invites high quality manuscripts that focus on engineering and theory associated with microwave / millimeter-wave technology, guided wave structures, electromagnetic theory and implementation. Authors are invited to submit original research works that stimulate the development of the latest technology in industry and academia. Good quality review papers and short communications are also acceptable.
JOINT IMAGE WATERMARKING, COMPRESSION AND ENCRYPTION BASED ON COMPRESSED SENS... - ijma
ABSTRACT
Image usage over the internet becomes more important each day. Over 3 billion images are shared daily over the internet, which raises concerns about how to protect image copyrights and how to improve the image-sharing experience. This paper proposes a new robust image watermarking algorithm based on compressed sensing (CS) and quantization index modulation (QIM) watermark embedding. The algorithm capitalizes on CS to compress and encrypt images jointly with entropy coding, the Arnold cat map, pseudo-random numbers, and the Advanced Encryption Standard (AES). Our proposed algorithm works under the JPEG standard umbrella. Watermark embedding is done in 3 different locations inside the image using QIM; those locations differ for each 8-by-8 image block. Choosing which combination of coefficients to use in QIM watermark embedding depends on selecting a combination from a combinations table, which is generated at the same time as the projection matrices using a 10-digit pseudo-random number secret key SK1. After the quantization phase, the algorithm shuffles image blocks using Arnold’s cat map with a 10-digit pseudo-random number secret key SK2, followed by a unique method for splitting every 8x8 block into two unequal parts. Part one acts as the host for two QIM watermarks and then goes through an encoding phase using run-length encoding (RLE) followed by Huffman encoding, while part two goes through sparse watermark embedding followed by a third QIM watermark embedding and a compression phase using CS; a Huffman encoder is then used to encode this part. The algorithm aims to combine image watermarking, compression, and encryption capabilities in one algorithm while balancing how those capabilities work with each other to achieve significant improvement in terms of image watermarking, compression, and encryption.
Fifteen images commonly used in image-processing benchmarking were used to test the algorithm’s capabilities, and experiments show that our proposed algorithm achieves robust watermarking jointly with encryption and compression under the JPEG standard framework.
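The QIM embedding at the heart of the scheme is compact enough to sketch: a coefficient is quantized onto one of two interleaved lattices, and the receiver reads the bit off whichever lattice the coefficient is closer to. The step size delta and the sample coefficient below are illustrative assumptions.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Quantization index modulation: quantize to an even multiple of delta
    for bit 0, to an odd multiple (offset by delta) for bit 1."""
    q = 2.0 * delta * np.round((coeff - bit * delta) / (2.0 * delta))
    return q + bit * delta

def qim_extract(coeff, delta=8.0):
    """Pick whichever quantizer lattice the received coefficient is closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

c = 37.3                      # a DCT-like host coefficient (made up)
w = qim_embed(c, 1)           # embed bit 1 -> snaps to the odd lattice (40.0)
bit = qim_extract(w + 1.5)    # the bit survives distortion below delta/2
```

The step size delta sets the robustness/distortion trade-off: larger delta tolerates more attack noise but perturbs the host coefficient more.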
Electronics Letters, 22nd May 2014, Vol. 50, No. 11, p. 781 | doi: 10.1049/el.2014.1684
[Figure: experimental setup, showing the target object, focused THz beam, the detector, DLP projector, ITO, Si, the ‘H’-shaped aperture, and the photo-induced coded aperture mask (16 × 16)]

Low-cost, low-complexity real-time THz imaging may be within reach as work in the US combines commercially available digital light processing technology and compressed sensing to enhance coded aperture imaging.
Double time
Recent developments in THz imaging have led to new approaches for studying electron density and temperature fluctuations inside high-temperature plasmas, including electron cyclotron emission imaging and microwave imaging reflectometry. In this application, events of interest occur in milliseconds, requiring a high frame rate to observe them. At the same time, real-time millimetre-wave to THz imaging has also become of interest for through-barrier imaging and aviation applications such as severe-weather operation. For example, THz imaging could allow pilots to see through atmospheric conditions like fog and sand. Here, the frame rate needed is lower, but it is critical that video frame rates can be produced in real-time.

Existing THz imaging systems generally fall into one of three categories: single-element imagers that obtain images by mechanical scanning; array imagers like focal-plane arrays (FPAs); and coded-aperture imaging (CAI).

In applications requiring very high frame rates, mechanical scanning is impractical. Array-based imagers like FPAs offer the highest imaging speed with high SNR, but tend to be complex and expensive, especially for large-scale arrays with high imaging resolution, as an antenna and low-noise amplifier/receiver is needed for each pixel.
Building a view
CAI-based systems use spatial encoding and modulation to eliminate the need for detector arrays. A single THz detector in combination with a series of N × N coded aperture masks is employed to reconstruct an N × N resolution image of a target from the THz waves passing through the different masks. The concept has been demonstrated using fixed masks fabricated on PCBs, with 2D scanning to obtain imaging data through the masks.

For high frame rates, aperture arrays electronically actuated by Schottky diodes and graphene modulators have been proposed. But these approaches have similar cost and complexity issues to FPAs, requiring a diode/modulator and biasing line per pixel, and pre-patterned circuits (e.g., antenna arrays) for operation.

In the work reported in this issue, the team from the University of Notre Dame’s THz circuits and systems research group report an approach to CAI THz imaging with the potential for real-time video performance but with low system cost and complexity.
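The single-detector reconstruction behind CAI is, at heart, a linear inverse problem. The toy sketch below uses a 4 × 4 scene, random 0/1 masks, and plain least squares; these are illustrative assumptions, and the compressed-sensing step in the actual work goes further by using fewer masks than pixels together with a sparsity prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # a 4x4 scene, flattened (toy scale)
scene = np.zeros(n)
scene[5] = 1.0                           # sparse target: one bright pixel
masks = rng.integers(0, 2, size=(2 * n, n)).astype(float)  # one 0/1 aperture per row
readings = masks @ scene                 # one detector value per projected mask
recon = np.linalg.lstsq(masks, readings, rcond=None)[0]    # invert the encoding
```

Each detector reading is a weighted sum of the pixels passed by one mask; with enough independent masks the system of equations is solvable, and sparsity lets CS get away with fewer masks and hence shorter acquisition.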
Projections
The approach has two key features. The first is their method for producing the coded apertures. Instead of using a series of physical masks, their system projects ‘apertures’ on a plain un-patterned, semi-insulating Si substrate, using a commercial digital light processing (DLP) projector. “We found in earlier work that THz waves can be easily modulated using photo-induced free carriers. When light illuminates a semiconductor, free carriers are generated as long as the photon energy of the light exceeds the band gap energy of the semiconductor. The charge carrier density, and hence the photoconductivity and THz transmission, can be controlled by the energy density of the incident photons. We apply this to spatially modulate the THz wavefront and generate reconfigurable aperture arrays directly on the unpatterned silicon wafer,” explained team member Md. Itrat Bin Shams. For this reason, they describe their system as photo-induced CAI (PI-CAI), and it avoids the need for complex microfabrication processes and precise alignment of masks.
[Top photo: Members of the THz Circuits and Systems group in the lab at the University of Notre Dame with the vector network analyser used in the work. (From left to right) J. Qayyum, S. Rahman, M. I. B. Shams, Z. Jiang, and Prof. L. Liu]

[Bottom photo: The experimental setup. In the PI-CAI system the coded aperture masks are created by projecting light patterns onto a silicon substrate]
The second key feature is the use of compressed sensing (CS), reducing the frame acquisition time by at least 40%. They report that for a 256-pixel image, the use of CS reduces acquisition time from 24 to 14 seconds. That is, of course, still an apparently long way from the 24 frames per second that could be considered a video frame rate, but the team are confident that their results show video rates are within reach, and for much higher pixel counts.

“This acquisition time is limited by the relatively slow (1.3 kHz) DMD control electronics in the DLP. The approach can be improved using existing high-speed DMD chipsets and optimised data-acquisition software. For a 32 × 32 image frame, approximately 1229 masks are needed, and with a 32 kHz DMD 26 full frames can be collected in a second. We expect to demonstrate a real-time PI-CAI system in one year,” said Shams. “A 1k pixel real-time THz imaging system would find a wide range of immediate applications. We would like to see these imagers applied in low-cost diagnostics (e.g. plasmas, quasi-optical systems) and rapid screening of skin disease and cancers.”
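The frame-rate claim in the quote is simple arithmetic, and it checks out:

```python
masks_per_frame = 1229      # masks quoted above for one 32 x 32 frame
dmd_rate_hz = 32_000        # pattern rate of a high-speed DMD chipset
fps = dmd_rate_hz / masks_per_frame
# just over 26 full frames per second, matching the team's estimate
```

The same arithmetic shows why the current 1.3 kHz electronics land at about one frame per second at this resolution, hence the projected order-of-magnitude gain from faster DMD chipsets.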
The group at Notre Dame are using elements of this work in other research. They have applied PI-CAI to characterise micromachined THz horn antennas, and are applying the THz wave spatial optical modulation mechanism to other tunable/reconfigurable quasi-optical THz devices, such as tuneable zone-plates, beam-steering antennas and, potentially, tuneable filters.
Page 801: ‘Approaching real-time terahertz imaging with photo-induced coded apertures and compressed sensing’, M.I.B. Shams, Z. Jiang, S. Rahman, J. Qayyum, L.-J. Cheng, H.G. Xing, P. Fay and L. Liu

Seeing through a mask: simpler, lower-cost THz imaging in real-time with digital light processing and compressed sensing