This document describes an image exploitation system developed for airborne surveillance using unmanned aerial vehicles (UAVs). The system processes real-time video and flight data acquired by the UAV, applying image processing algorithms such as enhancement, filtering, target tracking, and geo-referencing of targets. Mission parameters are displayed in real time on multiple displays. The system was evaluated and found suitable for surveillance applications, and sample results are presented.
Figure 1 Block diagram of imaging system
SYSTEM DEVELOPMENT
The hardware architecture of the image exploitation system core consists of a computing system (industrial PC) with multiple imaging boards, a frame grabber card, a graphics card to support multiple displays, and a network interface card, along with supporting peripherals. The Host PC is connected to three Embedded Vision Processor (EVP) cards (256 MB RAM) with add-on modules and a Frame Grabber (FG) card. The EVP and FG boards are interconnected through the on-board Auxiliary Bus so that data transfers are carried out independently of the PCI bus, as shown in Figure 2.
Figure 2 Host Interface
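As a back-of-the-envelope sanity check, the raw PAL video stream fits comfortably within both quoted bus rates. The pixel format is an assumption (the source does not state it); 2 bytes/pixel corresponds to a common YUV 4:2:2 digitization:

```python
# Bandwidth sanity check for the bus figures quoted in the text.
# ASSUMPTION: digitized PAL colour video at 2 bytes/pixel (e.g. YUV 4:2:2);
# the source does not state the pixel format.
WIDTH, HEIGHT = 768, 576      # PAL frame size quoted in the text
FPS = 25                      # PAL frame rate
BYTES_PER_PIXEL = 2           # assumed

video_rate_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e6  # MB/s required
pci_rate_mb = 120             # Frame Grabber -> Host PC via PCI (quoted)
ab_rate_mb = 200              # Frame Grabber -> EVP via Auxiliary Bus (quoted)

print(f"video stream: {video_rate_mb:.1f} MB/s")   # ~22.1 MB/s
assert video_rate_mb < pci_rate_mb < ab_rate_mb    # both paths have headroom
```

Even at this assumed pixel depth the stream needs only about 22 MB/s, so both the PCI path and the Auxiliary Bus path have ample headroom for one video channel.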
The Frame Grabber digitizes the incoming PAL analog video (colour or grayscale) at 25 frames/s (768x576 per frame) and transfers it to the Host PC via the PCI bus (120 MB/s) or to the embedded processors via a dedicated data bus called the Auxiliary Bus (AB). The AB carries video from the Frame Grabber to the embedded processors at very high speed (200 MB/s) and bandwidth without depending on or affecting the Host PC. The output of video capture and processing is viewed on the Host PC's monitor through the Host PC's graphics card (dual display, 256 MB DDR memory), while the output of each embedded processor is viewed on a separate monitor connected to, and controlled by, the graphics card on that embedded processor board.
Real-time imaging applications require high data bandwidth from the video source to the display output [4]. These systems also demand very low latency from the processing system, because a human operator on a moving platform (such as an aircraft) is badly affected by any apparent lag between the input image and the presented image. Processing delays therefore need to be minimized, ideally to within a frame time (40 ms for 25 frames/s video). In addition, complex image processing operations may need to be applied to each and every pixel, perhaps several times. Real-time video image processing is realized by using the EVPs with an optimized image processing library and by distributing the image processing functions across multiple EVPs. Image acquisition and processing are done in parallel, with the data going to the host as well as to the EVPs. In this framework, 80 ms is required to process one frame, so three embedded boards are used for fast processing of images. Further, the Auxiliary Bus's multicast mode allows an image to be sent for concurrent processing on two or more embedded processors. The software architecture of the system, with its application layer, hardware layer, and interface layer, is given in Figure 3.
Figure 3 Software Architecture Diagram
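The timing figures above (25 frames/sec PAL video, 768x576 frames, 80 ms of processing per frame, 3 embedded boards) can be sanity-checked with a short back-of-envelope calculation. The round-robin frame distribution assumed here is only an illustration, not a statement of the actual scheduler:

```python
# Back-of-envelope timing check for the multi-board pipeline
# (figures from the text: PAL 25 fps, 768x576 frames,
#  80 ms of processing per frame, 3 embedded boards).

FPS = 25
FRAME_PERIOD_MS = 1000 / FPS          # 40 ms between frames
PROC_MS_PER_FRAME = 80                # one board needs 80 ms per frame
BOARDS = 3

# If frames are distributed round-robin, each board receives a new
# frame every BOARDS * FRAME_PERIOD_MS = 120 ms, comfortably more
# than the 80 ms it needs, so the pipeline sustains full frame rate.
per_board_interval_ms = BOARDS * FRAME_PERIOD_MS
sustains_realtime = PROC_MS_PER_FRAME <= per_board_interval_ms

# Raw video bandwidth for 8-bit grayscale frames (the 120 MB/sec and
# 200 MB/sec figures in the text are bus rates, not the video rate).
bytes_per_frame = 768 * 576           # one byte per pixel
mb_per_sec = bytes_per_frame * FPS / 1e6

print(per_board_interval_ms, sustains_realtime, round(mb_per_sec, 1))
```

Under these assumptions one board alone (80 ms per frame) could not keep up with the 40 ms frame period, which is consistent with the text's use of three boards.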
APPLICATION SOFTWARE
The image exploitation system contains application
software which extracts and exploits information from
aerial imagery obtained from onboard sensors mounted on a
UAV or other reconnaissance platforms. The output is
obtained from the exploitation of aerial video imagery
captured by the camera mounted on the UAV, together with
flight telemetry data. Imagery intelligence is obtained by
applying image processing techniques and algorithms,
where target feature information is extracted
automatically from real-time video data for image
analysis. The basic input to the application software is
the flight video imagery and data captured by the
unmanned mission. The system processes the images using
the image processing application software, which displays
input and processed images for analysis. Some of the
processing functions required by users are as follows:
1. Mission settings, acquisition of video flight data
and image display.
2. Video Image analysis using Image Processing
functions such as Enhancements, edge
detection, filtering, zooming, etc.
3. User interaction and processing such as single
frame and multiple frame target calculation,
terrain measurements, area based retrieval, etc.
4. Real time target tracking from video images.
5. Real time Plotting of flight parameters (Health).
6. Creating, updating and retrieving the imagery
intelligence database.
7. Printing and saving the image file data to a CD.
METHODOLOGY
The methodology is based on the development of
fully automated decision-criteria tools [3, 5, 6] for
feature information extraction from video images, which
involves various image processing algorithms:
1. Video image enhancement techniques for selective
(linear and non-linear) and automatic enhancement to
search for a target region in the video imagery. In the
selective mode the user has the option to select standard
enhancement techniques such as histogram equalization,
contrast stretching, and contrast stretching with
clipping of the population on a single tail or both tails
of the image histogram. Enhancement techniques are also
applied on a moving window of user-specified size in the
video image to assist the user in target selection. Once
the target is selected by the user, various parameters of
the input image are calculated, such as average
intensity, standard deviation and intensity histogram.
Adaptive video image enhancement techniques are applied
for real-time enhancement using lookup tables (LUTs). A
lookup table is created based on derived image statistics
(range, entropy, coefficient of variation, mean and
contrast) obtained from the real-time video image data as
given in Table 1.
Table 1 Image Information Statistics
Analyzing various data sets, it is found that the derived
contrast information is more stable than the other image
statistics, as it depends more on the grey-level range in
the scene data. Contrast information is derived from the
real-time video image data as:
Contrast (%) = (Max GL - Min GL) / (Max GL + Min GL)
where Max GL and Min GL are the maximum and minimum gray
levels of the input image. LUTs are created based on the
computed input image contrast, and the corresponding LUT
is assigned in real time for image enhancement.
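To illustrate the contrast measure and LUT-based enhancement described above, the following sketch computes Contrast (%) and applies a simple linear contrast-stretch LUT. The specific stretch mapping is an assumed example, not the system's actual LUT design:

```python
import numpy as np

def contrast_percent(img):
    """Contrast measure from the text: (max - min) / (max + min)."""
    gmax, gmin = int(img.max()), int(img.min())
    return 100.0 * (gmax - gmin) / (gmax + gmin) if (gmax + gmin) else 0.0

def stretch_lut(img):
    """Build a 256-entry LUT that linearly stretches [min, max] to [0, 255]."""
    gmax, gmin = int(img.max()), int(img.min())
    levels = np.arange(256, dtype=np.float64)
    lut = np.clip((levels - gmin) * 255.0 / max(gmax - gmin, 1), 0, 255)
    return lut.astype(np.uint8)

# Example: a low-contrast 8-bit frame occupying only gray levels 100..150.
frame = np.linspace(100, 150, 768 * 576, dtype=np.uint8).reshape(576, 768)
lut = stretch_lut(frame)
enhanced = lut[frame]        # LUT applied per pixel in one indexing operation
print(round(contrast_percent(frame), 1), enhanced.min(), enhanced.max())
```

Because the LUT is a 256-entry table, it can be rebuilt per frame from the statistics and applied to every pixel with a single lookup, which matches the real-time motivation given in the text.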
2. Target region processing (TRP) tools to process a
user-selected region of the video frame, with options to
select various image filtering techniques (low-pass,
high-pass, median), image sharpening tools, selected-area
zooming, and edge/boundary information of the target area
from various edge detectors (Sobel, Canny, Laplace,
Roberts, etc.), and to display all the processed output.
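As an illustration of one of the listed edge detectors, a textbook Sobel gradient-magnitude filter can be written as a plain convolution. This is generic code, not the system's optimized EVP library:

```python
import numpy as np

# Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
KY = KX.T

def filter2d(img, k):
    """Naive 'valid' 3x3 sliding-window filter, enough for a small demo."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def sobel_magnitude(img):
    gx = filter2d(img.astype(np.float64), KX)
    gy = filter2d(img.astype(np.float64), KY)
    return np.hypot(gx, gy)

# A vertical step edge: the gradient magnitude peaks along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 255
mag = sobel_magnitude(img)
print(mag.max())
```

A production version would use a separable or library-provided convolution rather than the explicit double loop shown here.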
3. Target tracking tool to track a particular target in
the video frames using a template pattern matching
algorithm. The algorithm is based on the normalized
cross-correlation method. The position of the target in
the first image is identified as the "window area", and
the target in the subsequent frame is searched for within
a "search area". The problem is then to estimate the
position of the template in subsequent images. The
normalized cross-correlation template matching algorithm
is used to search for the target in the subsequent frame
and to estimate its position accurately.
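The normalized cross-correlation matching step can be sketched as follows; the patch sizes and synthetic frame are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def track(template, search):
    """Slide the template over the search area; return best (row, col, score)."""
    th, tw = template.shape
    sh, sw = search.shape
    best = (-1, -1, -2.0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            score = ncc(template, search[r:r + th, c:c + tw])
            if score > best[2]:
                best = (r, c, score)
    return best

# Synthetic example: a bright 5x5 blob embedded at (12, 20) in a 40x40 frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 50, (40, 40)).astype(np.float64)
frame[12:17, 20:25] += 200.0
template = frame[12:17, 20:25].copy()
r, c, score = track(template, frame)
print(r, c)   # the exact-match location, with score close to 1.0
```

Because the score is normalized, it is insensitive to uniform brightness changes between frames, which is the usual reason NCC is chosen for template tracking.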
4. Algorithm for computation of the geo-referenced
position of a target selected by the user, using the
flight data. With a click of the mouse on the target, the
look angle, slant range, and Easting/Northing or latitude
and longitude of the selected area are displayed on the
map.
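The text does not give the geo-referencing equations. Purely as an illustration, a flat-earth model that converts aircraft position, azimuth and depression (look) angle into slant range and a target Easting/Northing might look like:

```python
import math

def georeference(east, north, alt, azimuth_deg, depression_deg):
    """Flat-earth illustration (an assumption, not the system's actual
    model): project the line of sight from the aircraft down to the
    ground plane to locate the target."""
    dep = math.radians(depression_deg)    # look angle below the horizon
    az = math.radians(azimuth_deg)        # clockwise from north
    slant_range = alt / math.sin(dep)     # distance along the line of sight
    ground_range = alt / math.tan(dep)    # horizontal distance to target
    target_e = east + ground_range * math.sin(az)
    target_n = north + ground_range * math.cos(az)
    return slant_range, target_e, target_n

# Aircraft at (E=500000, N=1500000), 1000 m above ground, looking due
# east (azimuth 90 deg) with a 30 deg depression angle.
sr, te, tn = georeference(500000.0, 1500000.0, 1000.0, 90.0, 30.0)
print(round(sr, 1), round(te, 1), round(tn, 1))
```

A real system would additionally account for aircraft roll/pitch, gimbal angles and terrain elevation, all of which are available from the flight data described in the text.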
5. Algorithm for retrieval of area-based targets within a
user-specified area. It retrieves the targets that lie
within the user-defined area or the maximum range of
search. The area is specified as a circular range around
the current UAV position.
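The circular-range retrieval amounts to a Euclidean distance test against each stored target. A minimal sketch follows (the target records and field layout are invented for illustration):

```python
import math

# Hypothetical target records: (name, easting, northing) in meters.
targets = [
    ("T1", 500100.0, 1500200.0),
    ("T2", 503000.0, 1504000.0),
    ("T3", 499500.0, 1499800.0),
]

def within_range(uav_e, uav_n, radius_m, records):
    """Return targets whose Euclidean ground distance from the UAV
    is at most radius_m, sorted nearest first."""
    hits = []
    for name, e, n in records:
        d = math.hypot(e - uav_e, n - uav_n)
        if d <= radius_m:
            hits.append((name, round(d, 1)))
    return sorted(hits, key=lambda t: t[1])

# UAV at (E=500000, N=1500000), searching within a 1 km circle.
print(within_range(500000.0, 1500000.0, 1000.0, targets))
```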
6. Tool for real-time flight parameter display, such as
roll, pitch, azimuth, altitude and heading angle of the
vehicle.
7. Algorithm for terrain measurement, to measure specific
terrain features. Measurements include linear distance on
the ground, tracing of a curvilinear path, and the area
coverage and perimeter of a region of interest.
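Given geo-referenced ground coordinates, these measurements reduce to standard geometry. The sketch below uses polyline summation for distance and the shoelace formula for area (a generic choice, not necessarily the system's exact method):

```python
import math

def path_length(pts):
    """Total length of a polyline traced on the ground (meters)."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon (square meters)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A 300 m x 400 m rectangular region in local Easting/Northing coordinates.
region = [(0.0, 0.0), (300.0, 0.0), (300.0, 400.0), (0.0, 400.0)]
print(polygon_area(region), path_length(region + region[:1]))  # area, perimeter
```

The same `path_length` routine handles the curvilinear-path tracing case, since a traced path is stored as a dense polyline of geo-referenced points.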
RESULTS
The basic input to the image exploitation system
consists of the mission path, probable target locations
and map data. During a mission, the image exploitation
system acquires image data from the camera and flight
parameters from onboard systems. The frame grabber
digitizes the image data and transfers it to the Host PC
and the multiple embedded processor boards. Application
software on the host uses the multiple embedded processor
boards to achieve real-time image processing, analysis
and display of various parameters to the user during the
mission. The application software was tested using
playback of flight video imagery and data captured by an
unmanned mission, in the form of a DVD connected to the
embedded vision system. The system processes the images
using the image processing application software, in the
form of a GUI menu which displays input and processed
images for analysis.
Video data with flight parameters is processed in real
time, where roll, pitch, heading, height of the vehicle,
azimuth and elevation are displayed in a meter display as
given in Figure 4.
Figure 4 Real Time Video Image display
A sample result of a moving-window area enhancement
is given in Figure 5.
Figure 5 Moving Window Area Enhancement
Target region processing of a target region in the
video frames, with variable window sizes and image
processing algorithms such as edge detection,
enhancement, zooming and sharpening tools, is applied to
the selected area. A sample result of the processing is
given in Figure 6.
Figure 6 Selected area Image Processing
The geo-referenced location of the target position on the
ground is computed from single and multiple frames
associated with the target using the flight data
parameters. Once the user clicks on a target, the look
angle, slant range, and Easting/Northing or latitude and
longitude of the selected target are displayed with the
target image as given in Figure 7.
Figure 7 Target Position computation
A target tracking function is developed to track
particular target(s) in the video frames using template
matching techniques. The window area of the target frame
is matched against the search-area window of the next
frame using normalized cross-correlation as the
similarity measure. A sample result of tracking is given
in Figure 8 below.
Figure 8 Sample Target Tracking
Target area measurements on the ground are computed
using the geo-referenced values. The target area is
selected by the user on a frozen frame. Distance, area
and perimeter of the target are computed using Euclidean
distance and displayed on the map/zoomed map. The user
has an option to select and view previous video frames of
the incoming video at a specified frame interval for
analysis. Whenever the user acquires a target on the
host, this module displays the list of the last few
target images. Further, the user can retrieve targets
that lie within the user-defined area or the maximum
range of search. When the user accesses this module, a
dialog is displayed with the UAV's current position and
all target locations on the map. A retrieved-target list
is maintained where the user can see the target details:
target name, Easting, Northing, distance from the current
location, look angle, azimuth and elevation, with the
target images in the form of a context diagram as given
in Figure 9.
Figure 9 Target Image Retrieval
An imagery intelligence database is generated for
targets, containing the geo-location of the target, a
normalized view of the target, history information, video
clippings, and auxiliary data such as survey maps,
satellite imagery and digital elevation models.
CONCLUSION
This study demonstrates the development of an
image exploitation system with the potential of using
image processing tools for processing real-time video and
flight data from an unmanned aerial vehicle. The image
exploitation system is developed using multiple embedded
processing boards connected to the host, and the results
of processing are displayed on multiple displays using
graphics cards. Real-time video image acquisition and
image analysis using multiple embedded processors with an
optimized image algorithm library are realized. The work
involves the design and development of a system meeting
project-specific requirements and integrating the
hardware and software features. The system is tested
using simulated flight data collected for a surveillance
mission. The advantage of this architecture is that it is
scalable and flexible, allowing embedded processor boards
to be added as per requirement. To maximize productivity
and minimize the decision-making cycle, algorithm
parallelism efforts will seek to achieve better latency.
Further exploration of multiple sensors and study of
fusion techniques may improve target detection
performance.
ACKNOWLEDGMENTS
The authors gratefully acknowledge Dr. Jharna Majumdar
for her constant encouragement and technical guidance
throughout the project.
REFERENCES
[1] M. Kontitsis, K. P. Valavanis et al., "A UAV Vision
System for Airborne Surveillance," Proceedings of the
2004 IEEE International Conference on Robotics &
Automation, New Orleans, LA, April 2004, pp. 77-83.
[2] P. Doherty et al., "The WITAS Unmanned Aerial Vehicle
Project," in Proc. of the 14th European Conference on
Artificial Intelligence, 2000.
[3] B. Hoerl, "ARIES Migrates Image Exploitation to
UAVs," VMEbus Systems, Mercury, Aug 2005.
[4] J. Heather and M. Smith, "Analysis of Registration
Requirements & Techniques for Imaging Sensor Suites on
UVs," 1st SEAS DTC Technical Conference, Edinburgh, 2006.
[5] R. Gonzalez and R. Woods, Digital Image Processing,
Addison-Wesley, 1992.
[6] P. Robertson et al., "Adaptive Image Analysis for
Aerial Surveillance," IEEE Intelligent Systems, 1999,
p. 1094.