EE 417 Senior Design

Airport Security Weapon Identification

Transcript

  • 1. Embry-Riddle Aeronautical University
EE 417 Senior Design
    Instructor:
    Dr. Brian Butka
  • 2. Student: Jerod Crouch
    Major: Electrical Engineering
    Concentration:
    Aerospace systems
  • 3. GOAL
    Enhance current airport security measures by providing a means to detect non-metallic weapons that airline passengers could carry aboard an aircraft.
  • 4. A Systems Approach to problem solving
    Identify the Need.
    Synthesize Solutions
    Analyze the Solutions
    Evaluate the results
    Make a Selection
    CONTINUE TO ITERATE THIS CYCLE
  • 5. GOAL
Enhance current airport security measures by providing a means to detect non-metallic weapons that airline passengers could carry aboard an aircraft.
    Need
  • 6. Need
    A system that will detect non-metallic weapons hidden by passengers boarding an aircraft.
    Requirements
  • 7. Requirements
    The System shall detect and identify a plastic gun hidden under a polyester shirt of an airline passenger traveling commercially in the United States.
    Synthesize Solutions
  • 8. Current Airport Security Measures
TSA (Transportation Security Administration)
http://www.tsa.gov/travelers/airtravel/prohibited/permitted-prohibited-items.shtm
Lists more than 50 items that are prohibited from being carried onto the plane.
  • 9. Current Airport Security Measures
  • 10. Current Airport Security
    Clearly there is a gap in the screening process and improved screening methods need to be developed. This reinforces our need to:
    Develop A system that will detect non-metallic weapons hidden by passengers boarding an aircraft.
  • 11. Research
    Concepts:
Use a combination of infrared light and a Charge Coupled Device (CCD) to create an X-ray vision effect to detect weapons that could be hidden by passengers attempting to board an aircraft.
    Use advanced image processing to automatically detect hidden objects.
    Synthesize
  • 12. Testing the Concept
The visible part of the spectrum falls between wavelengths of about 430 nm and 690 nm.
Infrared rays have much longer wavelengths and are divided into:
Near infrared rays (690 nm – 4000 nm)
Extreme infrared rays (over 4000 nm)
    Analyze
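The cutoffs above can be expressed as a tiny classifier. This is just an illustration using the slide's own boundaries; published band definitions vary between sources.

```python
# Classify a wavelength (in nm) into the bands named on this slide.
# Boundaries follow the slide's own figures, not a formal standard.

def classify_band(wavelength_nm: float) -> str:
    if wavelength_nm < 430:
        return "below visible (UV or shorter)"
    if wavelength_nm <= 690:
        return "visible"
    if wavelength_nm <= 4000:
        return "near infrared"
    return "extreme infrared"

print(classify_band(550))   # green light -> "visible"
print(classify_band(850))   # a typical NIR illuminator -> "near infrared"
```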
  • 13. Testing the Concept
    The Electro-Magnetic Spectrum
  • 14. Testing the Concept
Unlike ultraviolet and visible rays, infrared rays tend to penetrate many materials rather easily because of their longer wavelengths.
  • 15. Testing the Concept
When sunlight is shone through a prism, it is refracted at an angle according to its wavelength. The blue end of the visible spectrum has the shortest wavelength, so it is refracted the most. At the other end of the spectrum, beyond the red visible light, the infrared rays are barely refracted at all because of their long wavelength.
  • 16. Testing the Concept
In our application, the infrared light, given its long wavelength, would pass through the clothing material but would fail to penetrate the subject's body. The infrared light would then be reflected back to the CCD, and a digital image would be captured.
    Analyze
  • 17. System Diagram
Infrared Source
    Computer
    CCD
    Filter
    Person
  • 18. The Charge Coupled Device
  • 19. The CCD
    In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register.
    An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage.
    By repeating this process, the controlling circuit converts the entire semiconductor contents of the array to a sequence of voltages, which it samples, digitizes and stores in some form of memory.
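The shift-register readout described above can be sketched as a toy one-dimensional simulation. The gain value is hypothetical, and real CCDs clock charge in parallel hardware rather than in a loop; this only illustrates the bucket-brigade idea.

```python
# Toy simulation of CCD shift-register readout: each capacitor passes
# its charge to its neighbor, and the last one dumps into a charge
# amplifier that converts it to a voltage sample.

def read_out(charges, gain=0.5):
    """Shift charges out one at a time; return voltage samples."""
    row = list(charges)
    voltages = []
    for _ in range(len(row)):
        last = row.pop()          # last capacitor dumps its charge
        voltages.append(gain * last)
        row.insert(0, 0)          # remaining cells shift toward the output
    return voltages

# A 1-D line of accumulated charge (proportional to light intensity)
print(read_out([10, 20, 30]))    # -> [15.0, 10.0, 5.0]
```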
  • 20. The CCD
The most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero-lux (or near-zero-lux) video recording and photography. One other consequence of their sensitivity to infrared is that infrared light from remote controls will often appear on CCD-based digital cameras or camcorders if they do not have infrared blockers.
  • 21. Testing the Concept
    Experimentation:
    Sony Night Shot Camera
  • 22. Testing the Concept
The Sony Night Shot camera has very good sensitivity to near-infrared energy. The Night Shot mode removes the infrared filter and allows the full spectrum of light to enter the CCD, which lets you take video in very low-light conditions. We, however, want to take video and pictures in a high-intensity infrared environment.
  • 23. Testing the Concept
In order to allow the CCD in the Sony camera to work in a high-intensity near-infrared environment, filters had to be added to block out some of the visible light. Recall that in Night Shot mode the standard filters are removed from the CCD; in a bright-light environment the picture would be washed out and look completely white.
  • 24. Testing the Concept
Combinations of 3.5-inch floppy disk media and a developed piece of 35 mm film were used as filters for experimentation.
    Analyze
  • 25. Testing the Concept
    Insert result pictures
    Evaluate
  • 26. Testing the Concept
Matching the intensity of the infrared energy to the sensitivity of the CCD was found to be critical in tuning a transparent image.
  • 27. Integrating the System
    Finding the right CCD.
Wfine or EXview? Two different technologies from Sony.
    With the introduction of EXview HAD (Hole-Accumulation Diode) CCDs, Sony improved the quantum efficiency (QE) in the near-infrared (NIR) region. Since NIR photons are absorbed at deeper levels in the silicon, using thicker silicon in the chip increases the probability of photon-silicon interaction
    and thus further increases QE.
    Wfine
    Interline CCDs that are run in progressive-scan mode, have square pixels,
    and have a Bayer color filter array are labeled Wfine by Sony. These
    devices are used extensively in consumer markets and are perfect for
    photo-documentation applications.
  • 28. Integrating the System
Quantum Efficiency
The quantum efficiency (QE) of a sensor describes its response to different wavelengths of light. Standard front-illuminated sensors, for example, are more sensitive to green, red, and infrared wavelengths (in the 500 nm to 800 nm range) than they are to blue wavelengths (400 nm – 500 nm).
  • 29.
  • 30. Integrating the System
    The goal of the CCD is to digitize the image so we can use the digital image with image processing software.
Due to time and budget constraints, the Fire-i OEM FireWire board camera was chosen for this project. This camera is built around a Texas Instruments 1394 digital camera chipset and a Sony Wfine CCD sensor. Why?
  • 31. Testing the Concept
    Digital Image Processing
    Can specific threats be identified reliably?
    Synthesize
  • 32. Digital Image Processing
    What methods and tools to use to process our digital image?
    National Instruments Vision Assistant 8.6 was chosen.
    Flexibility
    Power
    Availability
One of the most powerful and widely available tools for image processing on the market.
    Analyze
  • 33. Digital Image Processing
    Methods?
    Particle Analysis or “Blob Analysis”
    Edge Detection
    Pattern Matching
    Geometric Matching
    Dimensional Measurements
    Golden Template Comparison
    Analyze
  • 34. Digital Image Processing
    Edge Detection
    You use the edge detection tools to identify and locate discontinuities in the pixel intensities of an image. The discontinuities are typically associated with abrupt changes in pixel intensity values that characterize the boundaries of objects in a scene.
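The idea of locating abrupt intensity changes can be sketched in a few lines. This is a minimal one-dimensional illustration of difference-based edge detection, not NI Vision's actual algorithm.

```python
# Minimal 1-D edge detector: flag positions where the intensity
# difference between neighboring pixels exceeds a threshold.
# The threshold of 50 gray levels is an arbitrary illustrative choice.

def detect_edges(row, threshold=50):
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

# A dark object (20) on a bright background (200): edges at its borders
scanline = [200, 200, 20, 20, 20, 200, 200]
print(detect_edges(scanline))   # -> [2, 5]
```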
  • 35. Digital Image Processing
    Pattern Matching
When using pattern matching, you create a template that represents the object for which you are searching. Your machine vision application then searches for instances of the template in each acquired image, calculating a score for each match. This score indicates how closely the template resembles the located match. Alignment is key!
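One common way to score a match is normalized cross-correlation, sketched below for a template and an equally sized image patch (1.0 means a perfect match). NI Vision's actual scoring is more sophisticated; this only illustrates the concept.

```python
import math

# Toy match score: normalized cross-correlation between a template and
# an equally sized patch of the image, both given as flat pixel lists.

def match_score(template, patch):
    mt = sum(template) / len(template)          # template mean
    mp = sum(patch) / len(patch)                # patch mean
    num = sum((t - mt) * (p - mp) for t, p in zip(template, patch))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((p - mp) ** 2 for p in patch))
    return num / den if den else 0.0

print(match_score([10, 50, 90], [10, 50, 90]))   # identical -> 1.0
```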
  • 36. Digital Image Processing
Particle Analysis, or "Blob Analysis," was chosen because of the wide range of tools available for reducing the image down to the parts we are most concerned about.
    Particle Analysis allows us to apply many custom filters to the image.
  • 37. Particle Analysis
    Particle analysis consists of a series of processing operations and analysis
    functions that produce specific information about the particles in an image. A particle is a contiguous region of nonzero pixels. You can extract particles from a grayscale image by thresholding the image into background and foreground states. Zero valued pixels are in the background state, and all nonzero valued pixels are in the foreground. In a binary image, the background pixels are zero, and every non-zero pixel is part of a binary object.
    You perform a particle analysis to detect connected regions or groupings of pixels in an image and then make selected measurements of those regions.
    Using particle analysis, you can detect and analyze any two-dimensional
    shape in an image. With this information, you can detect flaws on silicon
    wafers, detect soldering defects on electronic boards, or locate objects in
    motion control applications when there is significant variance in part shape or orientation.
  • 38. Conducting Particle Analysis
    Step 1 -- Acquiring the Image
Step 2 -- Histogramming to Identify the Threshold Values
    Step 3 -- Thresholding to Create a Binary Image
    Step 4 -- Filtering to Remove Noise and Particles on the Border of the Image
    Step 5 -- Particle (Blob) Analysis to Count Cells
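Steps 3 and 5 above can be sketched in plain Python: threshold a tiny grayscale image into a binary image, then count connected particles with a flood fill. Histogramming and border filtering are omitted, and 4-connectivity is assumed; this is only an illustration of the pipeline, not the NI Vision implementation.

```python
# Threshold a grayscale image (list of rows) into a binary image.
def threshold(img, lo, hi):
    return [[1 if lo <= p <= hi else 0 for p in row] for row in img]

# Count connected foreground regions (particles) via flood fill.
def count_particles(binary):
    rows, cols = len(binary), len(binary[0])
    seen, count = set(), 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and (r, c) not in seen:
                count += 1
                stack = [(r, c)]
                while stack:                 # flood-fill one particle
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols) \
                            or not binary[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

img = [[200, 200,  40, 200],
       [200, 200,  40, 200],
       [ 40, 200, 200, 200]]
binary = threshold(img, 0, 100)   # dark pixels become foreground
print(count_particles(binary))    # two separate dark particles -> 2
```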
  • 39. Testing the Concept
    Digital Image Processing
    Can specific threats be identified reliably?
  • 40. Testing the Concept
    Analyze
  • 41. Testing the Concept
  • 42.
  • 43.
  • 44.
  • 45.
  • 46. Evaluate
  • 47. Particle Analysis
Concerns:
Orientation?
Lighting and distance?
  • 48.
  • 49. Table of objects
  • 50.
  • 51.
  • 52.
  • 53.
  • 54. Integrating the System
    Now that we have initially tested our concept and met with some success, it was time to figure out how to integrate the parts of the system such that we could attempt to achieve our goal.
  • 55. Integrating the System
Infrared Source
    Computer
    CCD
    Filter
    Person
  • 56. Integrating the System
    Lighting
  • 57. Integrating the System
    Lighting
  • 58. Integrating the System
  • 59. Integrating the System
    Operating on the Image
  • 60.
  • 61.
  • 62. Thresholding
Thresholding segments an image into a particle region, which contains the objects under inspection, and a background region, based on the pixel intensities within the image. The resulting image is a binary image.
    We use thresholding to extract areas that correspond to significant structures in
    an image and to focus analysis on these areas. Thresholding an image is often the first step in machine vision applications that perform image analysis on binary images, such as particle analysis. Particles are characterized by an intensity range. They are composed of pixels with gray-level values belonging to a given threshold interval. Thresholding sets all pixels that belong to a range of pixel values, called the threshold interval, to 1 or a user-defined value, and it sets all other pixels in the image to 0. Pixels inside the threshold interval are considered part of a particle. Pixels outside the threshold interval are considered part of the background.
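The threshold-interval rule above (pixels inside the interval become 1 or a user-defined value, everything else 0) can be sketched for a single scanline:

```python
# Interval thresholding as described above: pixels inside [lo, hi]
# become `value` (default 1); all other pixels become 0.

def threshold_interval(row, lo, hi, value=1):
    return [value if lo <= p <= hi else 0 for p in row]

print(threshold_interval([12, 130, 200, 90, 255], 80, 180))  # -> [0, 1, 0, 1, 0]
```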
  • 63.
• 64. Morphological Operators
    Morphological operators that change the shape of particles process a pixel based on its number of neighbors and the values of those neighbors. A neighbor is a pixel whose value affects the values of nearby pixels during certain image processing functions. Morphological transformations use a 2D binary mask called a structuring element to define the size and effect of the neighborhood on each pixel, controlling the effect of the binary morphological functions on the shape and the boundary of a particle.
  • 65. Morphological Erosion
For a given pixel P0, the structuring element is centered on P0. The pixels masked by a coefficient of the structuring element equal to 1 are then referred to as Pi.
• If the value of any pixel Pi is equal to 0, then P0 is set to 0; otherwise P0 is set to 1.
• Equivalently: if AND(Pi) = 1, then P0 = 1; else P0 = 0.
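The erosion rule can be sketched for a 3×3 structuring element of all ones. Out-of-image pixels are treated as 0 here, which is one common border convention (implementations differ).

```python
# Binary erosion with a 3x3 structuring element of ones, per the rule
# above: P0 stays 1 only if every masked neighbor Pi is 1.

def erode(img):
    rows, cols = len(img), len(img[0])
    def pixel(r, c):
        # Pixels outside the image count as background (0).
        return img[r][c] if 0 <= r < rows and 0 <= c < cols else 0
    return [[1 if all(pixel(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)) else 0
             for c in range(cols)]
            for r in range(rows)]

square = [[1, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]
print(erode(square))   # only the center pixel survives
```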
  • 66.
  • 67.
  • 68.
  • 69.
  • 70. Digital Image Processing
Particle filtration:
Also called particle measurements, this provides another means to further discriminate among the objects in the binary image. There are 49 different particle filters that can be applied; in this case, I measured the area of each object in the binary image. When this filter is applied, the smaller objects drop out. It is critical that images like this one are taken at a fixed distance from the objects to the camera. It is also exceedingly helpful to have only what you are really concerned with in the image, with no superfluous background objects.
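The area filter described above can be sketched as follows. Representing each particle as a list of pixel coordinates is a simplification for illustration; NI Vision computes these measurements internally.

```python
# Particle filtering by area: keep only particles whose pixel count
# falls within a given range, so small noise specks drop out.
# A particle here is a list of (row, col) foreground pixels.

def filter_by_area(particles, min_area, max_area=float("inf")):
    return [p for p in particles if min_area <= len(p) <= max_area]

particles = [
    [(0, 0)],                                  # 1-pixel speck
    [(2, 2), (2, 3), (3, 2), (3, 3)],          # 4-pixel object
    [(5, 0), (5, 1)],                          # 2-pixel speck
]
kept = filter_by_area(particles, min_area=3)
print(len(kept))   # -> 1 (only the 4-pixel object survives)
```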
  • 71.
  • 72.
  • 73.
  • 74.
  • 75. Results/Conclusions
The research shows that it is possible to create an X-ray effect with near-infrared light. Using an appropriate CCD that is sensitive to near infrared, a grayscale image can be captured and manipulated with vision software. Although there are many tools in the NI suite that could be used for this application, the particle analysis tools appear to lend themselves better to this application than pattern matching.
  • 76. Results/Conclusions
It is possible that better results could be achieved by using a digital camera that incorporates the Sony EXview HAD CCD sensor. The Chameleon 1.3 MP CCD camera from Point Grey would be a great starting point for continuing the research effort.
In any vision application, being able to repeat your results is critical. Repeatability is a function of distance, background, and lighting effects; keeping these variables as constant as possible will yield more consistent results.