Concealed Weapon Detection
A Seminar Report
On
CONCEALED WEAPON DETECTION USING
DIGITAL IMAGE PROCESSING
Subject: Seminar-2
Paper Code: ECE 892
University Institute of Technology
The University of Burdwan
By
Srijan Kumar Jha
Roll no.-2016-2002
Regn. No.-A5032 of 16-17
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING
UNIVERSITY INSTITUTE OF TECHNOLOGY
THE UNIVERSITY OF BURDWAN
GOLAPBAGH (NORTH), BURDWAN-713104, WB
ABSTRACT:
In the present scenario, bomb blasts are rampant all around the world: bombs have gone off in buses and underground stations, killing many and leaving many more injured, and such blasts cannot be predicted beforehand. This report is about imaging for concealed weapon detection, a technology intended to detect suicide bombers and concealed weapons before they can be used; it covers the sensor improvements, how the imaging takes place, and the challenges involved. We also describe techniques for simultaneous noise suppression and object enhancement of video data and show some mathematical results.
The detection of weapons concealed underneath a person's clothing is important to improving the security of the general public as well as the safety of public assets such as airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure.
INTRODUCTION:
Until now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives in places such as airports, sensitive buildings, and famous landmarks. These manual procedures do not give satisfactory results: they screen a person only when that person is near the screening machine, and they sometimes raise false alarms. We therefore need a technology that detects the weapon by scanning alone, which can be achieved by imaging for concealed weapons.
The goal is the eventual deployment of automatic detection and recognition of concealed weapons. It is a technological challenge that requires innovative solutions in sensor technologies and image processing.
The problem also presents challenges in the legal arena. A number of sensors based on different phenomenology, as well as the image-processing support for them, are being developed to observe objects underneath people's clothing.
IMAGE PROCESSING:
The term digital image processing refers to both scientific and technical pursuits. Processing an image includes improving its appearance and representing it efficiently. Image processing is widely used today; interest in it stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of scene data for autonomous machine perception.
HOW IT IS DONE?
This involves converting the visual information into a discrete form suitable for computer processing. The computer then processes this discrete information using various mathematical tools. MORPHOLOGY is one of them: it offers a broad set of image-processing operations that process images based on shapes. Morphological operations apply a structuring element to an input image, creating an output image of the same size.
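To illustrate how a structuring element drives a morphological operation, here is a minimal pure-Python sketch of binary dilation and erosion (illustrative only; a real system would use an optimized image-processing library, and the 3×3 cross element below is my own example, not from the report):

```python
def dilate(image, selem):
    """Binary dilation: a pixel is set if the structuring element,
    centered there, overlaps any foreground pixel of the input."""
    h, w = len(image), len(image[0])
    sh, sw = len(selem), len(selem[0])
    cy, cx = sh // 2, sw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for i in range(sh):
                for j in range(sw):
                    if selem[i][j]:
                        yy, xx = y + i - cy, x + j - cx
                        if 0 <= yy < h and 0 <= xx < w and image[yy][xx]:
                            out[y][x] = 1
    return out

def erode(image, selem):
    """Binary erosion: a pixel survives only if every set position of
    the structuring element, centered there, lands on foreground."""
    h, w = len(image), len(image[0])
    sh, sw = len(selem), len(selem[0])
    cy, cx = sh // 2, sw // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ok = True
            for i in range(sh):
                for j in range(sw):
                    if selem[i][j]:
                        yy, xx = y + i - cy, x + j - cx
                        if not (0 <= yy < h and 0 <= xx < w) or not image[yy][xx]:
                            ok = False
            out[y][x] = 1 if ok else 0
    return out
```

Dilating a single pixel with a cross-shaped element stamps the cross onto the output; eroding that result with the same element recovers the original pixel.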
IMAGING SENSORS:
Imaging sensors developed for CWD applications can be classified by their portability, their proximity to the subject, and whether they use active or passive illumination. The main types of imaging sensors used for CWD are:
1.) INFRARED IMAGERS:
Infrared imagers use the temperature distribution of the target to form an image. They are normally used for a variety of night-vision applications, such as viewing vehicles and people. The underlying theory is that the infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. As a result, infrared radiation can be used to show the image of a concealed weapon only when the clothing is tight, thin, and stationary. For normal, loose clothing, the emitted infrared radiation is spread over a larger clothing area, which reduces the ability to image a weapon.
2.) PMW IMAGING SENSORS:
FIRST GENERATION:
Passive millimeter wave (PMW) sensors measure the apparent temperature of a scene through the energy that is emitted or reflected by sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is possible with MMW sensors because of the low emissivity and high reflectivity of objects such as metallic guns.
SECOND GENERATION:
Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Terex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. The system collects up to 30 frames/s of MMW data. The following figure shows the visible and second-generation MMW images of an individual hiding a gun underneath his jacket.
CWD THROUGH IMAGE FUSION:
By fusing passive MMW image data with its corresponding infrared (IR) or electro-optical (EO) image, more complete information can be obtained, which can then be used to facilitate concealed weapon detection.
Fusion of an IR image revealing a concealed weapon with its corresponding MMW image has been shown to facilitate extraction of the concealed weapon. This is illustrated in the example in the following figures: one shows an image taken from a regular CCD camera, and another shows the corresponding MMW image. If either of these two images alone is presented to a human operator, it is difficult to recognize the weapon concealed underneath the rightmost person's clothing. If the fused image is presented, a human operator can respond with higher accuracy. This demonstrates the benefit of image fusion for the CWD application: it integrates complementary information from multiple types of sensors.
IMAGE PROCESSING ARCHITECTURE:
[Block diagram: Input → PREPROCESSING (Enhancement and/or Denoising; Registration and Fusion) → either Output to a Human Operator, or Automatic Weapon Detection (Feature Extraction/Description; Weapon Recognition) → Output]
1.) IMAGE DENOISING & ENHANCEMENT THROUGH WAVELETS:
We describe a technique for simultaneous noise suppression and object enhancement of passive MMW video data. Denoising of the video sequences can be achieved temporally or spatially. First, temporal denoising is achieved by motion-compensated filtering, which estimates the motion trajectory of each pixel using various algorithms and then applies a 1-D filter along that trajectory. This reduces the blurring that occurs when temporal filtering is performed without regard to object motion between frames.
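A minimal sketch of this idea, under the simplifying assumptions that the per-frame motion shifts are already known (in practice they come from a motion-estimation algorithm) and that the 1-D filter along the trajectory is a plain average:

```python
def motion_compensated_average(frames, shifts):
    """Average each pixel along its (known) motion trajectory.
    frames: list of 2-D lists of intensities.
    shifts[k]: (dy, dx) offset of frame k relative to frame 0.
    Trajectory samples that fall outside a frame are skipped."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for frame, (dy, dx) in zip(frames, shifts):
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    vals.append(frame[yy][xx])
            out[y][x] = sum(vals) / len(vals)
    return out
```

Because each pixel is averaged only with its own past positions, a moving object is not smeared the way it would be by a naive frame average.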
2.) REGISTRATION OF MULTI-SENSOR IMAGES:
Use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is a precise alignment of images, i.e., image registration. In this registration approach, images taken at the same time from different but nearly collocated (adjacent, parallel) sensors are aligned based on the maximization of mutual information (MMI) criterion, which states that two images are registered when their mutual information (MI) reaches its maximum value. This approach was used for the registration of IR images and the corresponding first-generation MMW images. In the first stage, two human-silhouette extraction algorithms were developed, followed by a binary correlation to coarsely register the two images. This provides an initial search point close to the final solution for the second stage of the registration algorithm, which is based on the MMI criterion; in this manner, any local optimizer can be employed to maximize the MI measure.
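The MI measure itself can be sketched in a few lines of pure Python from joint and marginal histograms (a simplified version; real registration code quantizes intensities into bins and couples this measure with an optimizer over the alignment parameters):

```python
from collections import Counter
from math import log2

def mutual_information(a, b):
    """Histogram-based mutual information between two equal-size
    grayscale images (2-D lists). MI is maximal when the images
    are correctly aligned, which is the MMI registration criterion."""
    pairs = [(pa, pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    n = len(pairs)
    joint = Counter(pairs)               # joint histogram
    marg_a = Counter(p for p, _ in pairs)
    marg_b = Counter(q for _, q in pairs)
    mi = 0.0
    for (u, v), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((marg_a[u] / n) * (marg_b[v] / n)))
    return mi
```

Identical images give MI equal to the image entropy, while an image paired with an uninformative (constant) one gives zero, which is why a search over candidate alignments can simply maximize this value.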
3.) IMAGE DECOMPOSITION:
First, an image pyramid is constructed for each source image by applying the wavelet transform to the source images. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing the best fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is choosing the maximum of the two corresponding transform values; this allows details from two or more images to be integrated into one image. Finally, the composite image is obtained by taking an inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source images, and this type of method has been used to fuse IR and MMW images for the CWD application.
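The "choose maximum" selection rule is easy to state in code. This sketch applies it to a single array of transform coefficients; in a full pyramid scheme it would be applied subband by subband, with the wavelet transform and its inverse supplied by a library:

```python
def fuse_max(coeffs_a, coeffs_b):
    """Feature-selection fusion: for each pair of corresponding
    transform coefficients, keep the one with larger magnitude,
    so the stronger detail from either source survives."""
    return [[a if abs(a) >= abs(b) else b
             for a, b in zip(ra, rb)]
            for ra, rb in zip(coeffs_a, coeffs_b)]
```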
4.) AUTOMATIC WEAPON DETECTION:
After preprocessing, the images/video sequences can be displayed for operator-assisted weapon detection or fed into a weapon detection module for automated weapon detection. Toward this aim, several steps are required, including object extraction, shape description, and weapon recognition.
5.) SEGMENTATION FOR OBJECT EXTRACTION:
Object extraction is an important step toward automatic recognition of a weapon, regardless of whether the image fusion step is involved. Segmentation has been used successfully to extract the gun shape from the fused IR and MMW images, which could not be achieved using either original image alone. The segmented result from the fused IR and MMW image is shown in Figure 6.
Proposed Method:
In the proposed technique for CWD we consider two types of image: a visual image and an IR image. The visual image is an RGB image with three main colour components, red, green, and blue. Since the human visual system is very sensitive to colour, this image creates a natural perception of an object for human vision, but it does not help much in detecting a concealed weapon. For this reason, we take an IR image as the second input. It depends on the high thermal emissivity of the body: the infrared radiation emitted by the body is absorbed by the clothing, re-emitted by it, and then sensed by the infrared sensors. Because of the difference in thermal emissivity we can perceive the hidden object, but since the background is almost black this image alone cannot support CWD.
Resize the two input images:
Since the two input images are taken from two different sensing devices, they are of different sizes. We therefore first resize them, because image fusion and the other operations are not possible if the sizes differ.
Combine the two images:
We perform pixel-wise addition of the visual and IR images (visual + IR) to get the image IvIR, but this resultant image does not give enough information. We therefore complement the IR image (IIRc) to remove the background darkness: the IR intensities lie between 0 and 255, so complementing means subtracting every matrix element from 255, giving the reversed form of the IR image. We then add the visual image and the complemented IR image (visual + complemented IR) to get a resultant image denoted IvIRc.
Conversion of IR to HSV:
We then convert the IR image into the HSV colour model (IIR_HSV), because the components of the IR image are all correlated with the amount of light hitting the object, and therefore with each other, so image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue, saturation, and value (lightness) are often more relevant.
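The per-pixel conversion is available in Python's standard library as colorsys.rgb_to_hsv, which expects components in [0, 1]. Shown here on a tiny image; the wrapper name rgb_image_to_hsv is mine:

```python
import colorsys

def rgb_image_to_hsv(img):
    """Convert an RGB image (pixels as (r, g, b) floats in [0, 1])
    to HSV pixel by pixel using the standard library's colorsys."""
    return [[colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in row]
            for row in img]
```

Note how a pure-gray pixel maps to zero hue and zero saturation: the brightness information is isolated in the value channel, which is exactly the decorrelation the HSV step is after.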
Fuse the two images:
After the HSV conversion the image again has three components, so the fusion technique can be applied: the two images now have the same dimensions and size, and we use DWT fusion between the HSV colour image (IIR_HSV) and the combined image IvIRc.
Processing for showing the weapon clearly in the visual image: The fused image is then converted into a grayscale image, and we use Otsu's thresholding technique to binarize the fused grayscale image.
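Otsu's method picks the threshold that maximizes the between-class variance of the gray-level histogram. A minimal global version in pure Python (a sketch of the standard algorithm, not the report's exact implementation):

```python
def otsu_threshold(pixels):
    """Global Otsu threshold for a flat list of 8-bit gray values:
    return the threshold t maximizing the between-class variance
    of the classes {v <= t} and {v > t}."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w0 = 0            # background pixel count
    sum0 = 0          # background intensity sum
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                    # background mean
        m1 = (sum_all - sum0) / w1        # foreground mean
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram the chosen threshold falls between the two modes, so binarizing with v > t cleanly separates the weapon region from the background.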
Then we extract the weapon portion by computing all connected components and removing components that are too small or too large according to their area values.
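A sketch of this area-based filtering using 4-connected flood fill (pure Python; the area bounds would be tuned to the expected weapon size):

```python
def area_filter(binary, min_area, max_area):
    """Label 4-connected foreground components of a binary image and
    keep only those whose pixel count lies in [min_area, max_area]."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                # flood fill to collect one component
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if min_area <= len(comp) <= max_area:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```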
To show the weapon in the actual RGB visual image, we multiply the weapon's binary image with the three-dimensional RGB image; the multiplication is performed element-wise between the two matrices.
Contour detection is then used to detect the edges of the weapon from the weapon's binary image; we use a Canny edge detector for this. The binarized contour image is then replicated across the three colour components and multiplied as before, giving a contour over the visual RGB image in which the concealed weapon under the person's clothes can be seen very clearly.
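The report uses a Canny edge detector; on an already-binarized weapon mask, a much simpler boundary extraction conveys the same idea and is shown here as a stand-in: keep each foreground pixel that touches the background.

```python
def contour(binary):
    """Boundary of a binary mask: foreground pixels with at least one
    4-connected background (or out-of-image) neighbour."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or not binary[ny][nx]:
                        out[y][x] = 1
                        break
    return out
```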
The flow diagram of our proposed method is shown below.
ALGORITHM:
Step 1: Take a visual image (an RGB image) and an infrared (IR) image as input.
Step 2: Resize the two images so that they have the same size.
Step 3: Combine (i.e., add) the resized visual and IR images.
Step 4: Complement the IR image.
Step 5: Combine (i.e., add) the resized visual image and the complemented IR image.
Step 6: Convert the IR image to its HSV format (IIR_HSV).
Step 7: Perform DWT fusion on Step 5's combined image and Step 6's HSV image.
Step 8: Convert the fused image into its grayscale format.
Step 9: Binarize the fused image.
Step 10: Detect the weapon in that image.
Step 11: Combine the detected weapon with the visual image.
Step 12: To show the weapon clearly, find the contour of the weapon.
Step 13: Combine the contour of the weapon with the visual image.
Step 14: End.
Figure 7: HSV image. Figure 8: Fused image. Figure 9: Fused gray image.
Figure 10: Weapon in binary image. Figure 11: Weapon in visual image. Figure 12: Contour of the weapon image.
Figure 13: Contour with visual image.
ADVANTAGES:
It will help law enforcement detect and respond to weapons from a sufficient distance, allowing officers to make decisions and take actions that deal with the situation safely.
CHALLENGES:
Several challenges remain. One critical issue is performing detection at a distance with a high probability of detection and a low probability of false alarm. Another difficulty to be surmounted is building portable multi-sensor instruments. Also, detection systems go hand in hand with the subsequent response by the operator, and system development should take the overall context of deployment into account.
CONCLUSION:
Imaging techniques based on a combination of sensor technologies and processing
will potentially play a key role in addressing the concealed weapon detection
problem. Recent advances in MMW sensor technology have led to video-rate (30
frames/s) MMW cameras. However, MMW cameras alone cannot provide useful
information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR, or MMW and EO, cameras have been described. By integrating the complementary information from different sensors, a more effective CWD system is expected.
REFERENCES & BIBLIOGRAPHY:
N. G. Paulter, "Guide to the technologies of concealed weapon imaging and detection," NIJ Guide 602-00, 2001.
Article in IEEE Signal Processing Magazine, March 2005, pp. 52-61.
R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Prentice Hall, New Jersey, 2002.