The document describes a method for front and rear vehicle detection using hypothesis generation and verification. In the hypothesis generation stage, potential vehicles are identified using shadow, texture and symmetry cues. In the hypothesis verification stage, Pyramid Histograms of Oriented Gradients features are extracted and their dimensionality is reduced using PCA. A genetic algorithm and a linear SVM are then used to improve feature performance and classification accuracy, achieving over 97% correct classification on test images.
VEHICLE CLASSIFICATION USING THE CONVOLUTION NEURAL NETWORK APPROACH - JANAK TRIVEDI
We present vehicle detection and classification using a Convolutional
Neural Network (CNN), a deep learning approach. Automatic vehicle
classification for traffic surveillance video systems is a challenging task for the
Intelligent Transportation Systems (ITS) needed to build a smart city. In this article,
three vehicle classes are considered: bike, car and truck, with around 3,000 bike,
6,000 car, and 2,000 truck images. A CNN can automatically learn and extract
the distinguishing features of the different vehicle datasets without manual feature
selection. The accuracy of the CNN is measured in terms of the confidence values
of the detected objects; the highest confidence value is about 0.99, for the bike
category. Automatic vehicle classification supports building an electronic toll
collection system and identifying emergency vehicles in traffic.
OpenCV and Matlab based Car Parking System Module for Smart City using Circle ... - JANAK TRIVEDI
Finding parking availability for a specific time period is
a very tedious job in urban areas. The Indian government is now
focusing on the smart city project and has already published the
names of cities for the upcoming smart city programme. In smart city
applications, the intelligent transportation system (ITS) plays an
important role; within it, finding a parking place, specifically so that
car owners can avoid wasted time as well as traffic congestion,
is going to be very important. In this article, we propose
an intelligent car parking system for the smart city using the Circle
Hough Transform (CHT).
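The Circle Hough Transform at the core of the method above can be illustrated with a minimal voting sketch in plain Python. This is an illustrative stand-in, not the paper's OpenCV/Matlab implementation: it assumes synthetic edge points and a known radius, and each edge point votes for every candidate center at that radius from it.

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, radius, top_n=1):
    """Circle Hough voting: each edge point votes for all centers at
    distance `radius` from it; accumulator peaks are circle centers."""
    votes = Counter()
    for (x, y) in edge_points:
        for theta_deg in range(0, 360, 5):
            t = math.radians(theta_deg)
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return [c for c, _ in votes.most_common(top_n)]

# Synthetic edge points on a circle of radius 10 centered at (30, 40)
pts = [(30 + round(10 * math.cos(math.radians(a))),
        40 + round(10 * math.sin(math.radians(a)))) for a in range(0, 360, 10)]
print(hough_circle_centers(pts, radius=10))  # accumulator peak near (30, 40)
```

In practice OpenCV's `cv2.HoughCircles` performs this accumulation over a range of radii on an edge map, which is presumably what the paper's module builds on.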
Classification and Detection of Vehicles using Deep Learning - ijtsrd
Vehicle classification and license plate detection are important tasks in intelligent security and transportation systems. Traditional methods of vehicle classification and detection are highly complex and provide only coarse-grained results because they suffer from limited viewpoints. Thanks to the latest achievements of deep learning, it has been successfully applied to image classification and object detection. This paper presents a method based on a convolutional neural network which consists of two steps: vehicle classification and vehicle license plate recognition. Several typical neural network modules have been applied in training and testing the vehicle classification and license plate detection model, such as convolutional neural networks (CNNs), TensorFlow and Tesseract OCR. The proposed method can identify the vehicle type, number plate and other information accurately. This model provides security and log details regarding vehicles by using AI surveillance; it guides surveillance operators and assists human resources. With the help of the original training dataset and an enriched testing dataset, the algorithm obtains results with an average accuracy of about 97.32% in the classification and detection of vehicles. By increasing the amount of data, the mean error and misclassification rate gradually decrease, so this deep learning-based algorithm shows good superiority and adaptability. When compared to leading methods on challenging image datasets, our deep learning approach obtains highly competitive results. Finally, this paper proposes methods for improving the algorithm and discusses the development direction of deep learning in the field of machine learning and artificial intelligence.
Madde Pavan Kumar | Dr. K. Manivel | N. Jayanthi "Classification & Detection of Vehicles using Deep Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30353.pdf Paper URL: https://www.ijtsrd.com/engineering/software-engineering/30353/classification-and-detection-of-vehicles-using-deep-learning/madde-pavan-kumar
Vehicle detection is an important issue in driver assistance systems and self-guided vehicles and includes
two stages: hypothesis generation and verification. In the first stage, potential vehicles are hypothesized,
and in the second stage, all hypotheses are verified. The focus of this work is on the second stage. We
extract Pyramid Histograms of Oriented Gradients (PHOG) features from a traffic image as candidate
feature vectors for detecting vehicles. Principal Component Analysis (PCA) and Linear Discriminant Analysis
(LDA) are applied to these PHOG feature vectors in parallel, as dimension reduction and feature selection
tools. After feature fusion, we use a Genetic Algorithm (GA) and cosine similarity-based K-Nearest
Neighbor (KNN) classification to improve the performance and generalization of the features. Our tests
show good classification accuracy of more than 97% correct classification on realistic on-road vehicle
images.
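A cosine similarity-based KNN classifier of the kind used in the verification stage can be sketched in a few lines of NumPy. The feature vectors below are toy stand-ins for the fused PHOG/PCA/LDA features, and `k` is an illustrative choice:

```python
import numpy as np

def cosine_knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among the k training vectors with the
    highest cosine similarity to x."""
    Xn = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
    xn = x / np.linalg.norm(x)
    sims = Xn @ xn                     # cosine similarity to every training vector
    top_k = np.argsort(sims)[-k:]     # indices of the k most similar
    labels, counts = np.unique(y_train[top_k], return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature vectors: 1 = vehicle, 0 = non-vehicle
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
y = np.array([1, 1, 0, 0])
print(cosine_knn_predict(X, y, np.array([0.95, 0.15])))  # -> 1
```

Because cosine similarity normalizes vector length, it compares the direction of the feature vectors only, which can help when gradient-histogram magnitudes vary with image contrast.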
Detecting the road lane is one of the key processes in vision-based driving assistance systems and autonomous vehicle systems. The main purpose of the lane detection process is to estimate the car's position relative to the lane so that a warning can be given to the driver if the car starts departing the lane. This process is useful not only for enhancing safe driving but also in self-driving car systems. A novel approach to lane detection using image processing techniques is presented in this research. The method minimizes computational complexity by using prior knowledge of the color, intensity and shape of the lane marks. With this prior knowledge, the detection process requires only two different analyses: pixel intensity analysis and color component analysis. The method starts by searching for a strong pair of edges along a horizontal line of the road image. Once the strong edges are detected, the process continues with color analysis on the pixels that lie between the edges to check whether they belong to a lane mark or not. The process is repeated for different positions of horizontal lines covering the road image. The method was successfully tested on 20 selected road images collected from the internet.
Ery M. Rizaldy | J. M. Nursherida | Abdul Rahim Sadiq Batcha "Reduced Dimension Lane Detection Method" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue | International Conference on Advanced Engineering and Information Technology, November 2018, URL: https://www.ijtsrd.com/papers/ijtsrd19136.pdf
Paper URL: https://www.ijtsrd.com/engineering/civil-engineering/19136/reduced-dimension-lane-detection-method/ery-m-rizaldy
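The two analyses the method relies on, an edge-pair search along a horizontal line followed by a check on the pixels between the edges, can be sketched on a single grayscale row. The thresholds and the use of brightness as a proxy for the color analysis are illustrative assumptions:

```python
import numpy as np

def find_lane_mark_on_row(row, edge_thresh=50, bright_thresh=150):
    """Find a strong rising/falling edge pair on one image row, then check
    that the pixels between them are bright enough to be a lane mark."""
    diff = np.diff(row.astype(int))
    rising = np.where(diff > edge_thresh)[0]    # dark -> bright transitions
    falling = np.where(diff < -edge_thresh)[0]  # bright -> dark transitions
    for r in rising:
        for f in falling:
            if f > r:
                segment = row[r + 1:f + 1]
                if segment.mean() > bright_thresh:  # stand-in for color analysis
                    return (int(r) + 1, int(f))     # lane-mark pixel span
    return None

# Synthetic row: dark road with a bright lane mark at columns 10..14
row = np.full(30, 40, dtype=np.uint8)
row[10:15] = 200
print(find_lane_mark_on_row(row))  # -> (10, 14)
```

Repeating this scan over several horizontal lines of the image, as the paper describes, yields the lane-mark positions at different depths of the road scene.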
Vehicle detection and tracking techniques: a concise review - sipij
Vehicle detection and tracking play an important role in civilian and military applications
such as highway traffic surveillance, control and management and urban traffic planning. Vehicle
detection on the road is used for vehicle tracking, counting, estimating the average speed of each
individual vehicle, traffic analysis and vehicle categorization, and may be implemented under
changing environmental conditions. In this review, we present a concise overview of the image
processing methods and analysis tools used in building the previously mentioned applications
involved in developing traffic surveillance systems. More precisely, and in contrast with other
reviews, we classify the processing methods under three categories to explain the traffic systems more clearly.
Vehicle License Plate Recognition (VLPR) is an important system for harmonious traffic. Moreover, this system is helpful in many fields and places, such as private and public entrances, parking lots, border control and theft control. This paper presents a new framework for a Sudanese VLPR system. The proposed framework uses Multi-Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projections are used for character segmentation, and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset containing samples of the current shape of Sudanese license plates is used for training and testing the proposed framework.
Real-time parking slot availability for Bhavnagar, using statistical block ma... - JANAK TRIVEDI
Purpose - The purpose of this paper is to find real-time parking locations for four-wheelers. Design/methodology/approach - Real-time parking availability using dedicated infrastructure requires high installation and maintenance costs, which are not affordable for all urban cities. The authors present a statistical block matching algorithm (SBMA) for real-time parking management in small cities such as Bhavnagar using the in-built surveillance CCTV system, which was not installed for parking applications. In particular, data from a camera situated in a mall was used to detect the parking status of specific parking places using a region of interest (ROI). The proposed method computes the mean value of the pixels inside the ROI using blocks of different sizes (8 × 10 and 20 × 35), and the values are compared across frames. When the difference between frames is greater than a threshold, the process reports "no parking space for that place"; otherwise, the method yields "parking place available". This information is then used to draw a bounding box on the parking places, colored green/red to show availability. Findings - The real-time feedback loop (car parking positions) helps the presented model dynamically refine the parking strategy and the parking positions offered to users. A whole-day experiment/validation is shown in this paper, where the method is evaluated using pattern recognition metrics for classification: precision, recall and F1 score. Originality/value - The authors found real-time parking availability for Himalaya Mall, situated in Bhavnagar, Gujarat, from a video of 18th June 2018 using the SBMA method, with acceptable computational time for finding parking slots. The limitations of the presented method and future work are discussed at the end of the paper.
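The core of the block-matching idea, comparing the mean pixel value of an ROI block across frames against a threshold, can be sketched in a few lines of NumPy. The ROI coordinates, threshold and status labels below are illustrative simplifications of the paper's setup:

```python
import numpy as np

def parking_status(frame_a, frame_b, roi, thresh=10.0):
    """Compare the mean pixel value inside a parking-slot ROI across two
    frames; a difference above the threshold signals an occupancy change."""
    y0, y1, x0, x1 = roi
    mean_a = frame_a[y0:y1, x0:x1].mean()
    mean_b = frame_b[y0:y1, x0:x1].mean()
    return "changed" if abs(mean_a - mean_b) > thresh else "unchanged"

empty = np.full((60, 80), 90.0)        # background-only frame
occupied = empty.copy()
occupied[20:40, 30:55] = 30.0          # a dark car appears in the slot
roi = (20, 40, 30, 55)                 # block covering one parking slot
print(parking_status(empty, occupied, roi))  # -> changed
print(parking_status(empty, empty, roi))     # -> unchanged
```

Mapping "changed" relative to a known-empty reference frame to "no parking space for that place" (and "unchanged" to "parking place available") recovers the paper's status messages.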
CANNY EDGE DETECTION BASED REAL-TIME INTELLIGENT PARKING MANAGEMENT SYSTEM - JANAK TRIVEDI
Real-time traffic monitoring and parking are very important aspects
of a better social and economic system. A Python-based Intelligent Parking
Management System (IPMS) module using a USB camera and the Canny edge
detection method was developed. The current status of each real-time parking slot
is checked simultaneously, both online and via a mobile application, with a
message of Parking "Available" or "Not available" for 10 parking slots. In
addition, the gate opens automatically when a vehicle enters the parking module
and closes when it exits, using a servomotor and sensors.
Results are displayed in figures along with the proposed method's flow chart.
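The slot-status decision can be sketched via edge density inside a slot ROI: an empty patch of asphalt has few edges, a parked car has many. Gradient magnitude is used below as a simple stand-in for the Canny detector named in the paper, and both thresholds are illustrative assumptions:

```python
import numpy as np

def slot_available(gray_roi, edge_thresh=40, density_thresh=0.05):
    """Decide slot availability from edge density in a grayscale ROI.
    (Gradient magnitude is a simple stand-in for Canny edge detection.)"""
    gy, gx = np.gradient(gray_roi.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh   # boolean edge map
    density = edges.mean()                   # fraction of edge pixels
    return "Available" if density < density_thresh else "Not available"

empty_slot = np.full((40, 40), 100.0)   # flat asphalt, almost no edges
car_slot = empty_slot.copy()
car_slot[10:30, 10:30] = 220.0          # high-contrast object in the slot
print(slot_available(empty_slot))   # -> Available
print(slot_available(car_slot))     # -> Not available
```

A full Canny pipeline adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of the gradient step shown here.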
COMPARATIVE STUDY ON VEHICLE DETECTION TECHNIQUES IN AERIAL SURVEILLANCE - IJCI JOURNAL
Aerial surveillance systems have become a major trend over the past decades. Aerial vehicle tracking techniques play a vital role and continuously give rise to promising new methods. Such systems can be very handy in various applications such as policing, traffic monitoring, natural disaster response and the military. They often cover a large area and provide a better perspective of moving objects. Moving vehicles can be detected in dynamic aerial imagery, wide-area motion imagery or low-resolution images, as well as in static imagery. Identifying objects from an aerial view is a difficult problem because of camera angles and the mix of moving and motionless objects. This paper presents a comparative study of various vehicle detection and tracking approaches in aerial videos, with experimental results and measures of working conditions, hit rate and false alarm rate.
A VISION-BASED REAL-TIME ADAPTIVE TRAFFIC LIGHT CONTROL SYSTEM USING VEHICULA... - JANAK TRIVEDI
In India, traffic control management is a difficult task due to the increasing number of vehicles on the same infrastructure and systems. In the smart-city project, the Adaptive Traffic Light Control System (ATLCS) is one of the major research concerns for Intelligent Transportation System (ITS) development, to reduce traffic congestion and accidents, create a healthy environment, etc. Here, we propose a Vehicular Density Value (VDV) based adaptive traffic light control method for 4-way intersection points using a selection of rotation, area of interest and the Statistical Block Matching Approach (SBMA). Graphical User Interface (GUI) and hardware-based results are shown in the results section, where we compare the normal traffic light control system with the proposed adaptive system. The same results are verified on a hardware (Raspberry Pi) device with different sizes, colors and shapes of vehicles using the same method.
A Novel Multiple License Plate Extraction Technique for Complex Background in... - CSCJournals
License plate recognition (LPR) is one of the most important applications of computer techniques in intelligent transportation systems (ITS). In order to recognize a license plate efficiently, locating and extracting the license plate is the key step. Hence, finding the position of a license plate in a vehicle image is considered the most crucial step of an LPR system, and this in turn greatly affects the recognition rate and overall speed of the whole system. This paper mainly deals with license plate location issues in Indian traffic conditions. Vehicles in India sometimes bear extra textual regions, such as the owner's name, symbols, popular sayings and advertisement boards, in addition to the license plate. This situation demands accurate discrimination of the text class and fine aspect-ratio analysis. Additional care is taken in this paper to extract the license plates of motorcycles (where the plate is small and double-row), cars (single- as well as double-row type), transport vehicles such as buses and trucks (dirty plates), and multiple license plates present in a single image frame. Disparity of aspect ratios is a typical feature of Indian traffic. The proposed method identifies the region of interest by performing a sequence of directional segmentation and morphological processing steps. The first step is always contrast enhancement, accomplished using a sigmoid function. In the subsequent steps, connected component analysis followed by different filtering techniques, such as aspect-ratio analysis and a plate-compatible filter, is used to find the exact license plate. The proposed method was tested on a large database consisting of 750 images taken in different conditions. The algorithm detected the license plate in 742 images, with a reported success rate of 99.2%.
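The sigmoid contrast enhancement used as the first step above can be sketched as a pointwise mapping of normalized intensities through a logistic curve. The gain and cutoff values below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def sigmoid_contrast(gray, gain=10.0, cutoff=0.5):
    """Sigmoid contrast enhancement: map normalized intensities through a
    logistic curve, pushing values away from `cutoff` to stretch contrast."""
    x = gray.astype(float) / 255.0
    out = 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))
    return (out * 255.0).astype(np.uint8)

# Three mid-gray pixels get spread toward darker/brighter values
img = np.array([[100, 128, 156]], dtype=np.uint8)
print(sigmoid_contrast(img))
```

Stretching contrast this way sharpens the dark-character-on-bright-plate transitions that the later connected component analysis depends on.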
Projection Profile Based Number Plate Localization and Recognition - csandit
This paper proposes algorithms to localize vehicle number plates from natural background
images, to segment the characters from the localized number plates and to recognize the
segmented characters. The reported system is tested on a dataset of 560 sample images
captured with different backgrounds under various illuminations. The performance accuracy of
the proposed system has been calculated at each stage, and is 97.1%, 95.4% and 95.72% for
localisation & extraction, character segmentation and character recognition respectively. The
proposed method is also capable of localising and recognising multiple number plates in
images.
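Projection-profile character segmentation of the kind described can be sketched by summing the columns of a binarized plate image and splitting runs at empty columns. The tiny binary "plate" below is a toy stand-in for a thresholded plate crop:

```python
import numpy as np

def segment_characters(binary_plate):
    """Segment characters via the vertical projection profile: column sums
    of a binarized plate; zero-sum columns are gaps between characters."""
    profile = binary_plate.sum(axis=0)
    segments, start = [], None
    for col, v in enumerate(profile):
        if v > 0 and start is None:
            start = col                      # character run begins
        elif v == 0 and start is not None:
            segments.append((start, col - 1))  # character run ends
            start = None
    if start is not None:
        segments.append((start, len(profile) - 1))
    return segments

# Tiny binary 'plate': two character blobs separated by an empty column
plate = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
])
print(segment_characters(plate))  # -> [(0, 1), (3, 4)]
```

The horizontal projection profile (row sums) works the same way for trimming the top and bottom plate borders before segmentation.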
Leader follower formation control of ground vehicles using camshift based gui... - ijma
Autonomous ground vehicles have been designed for formation control that relies on ranging
and bearing information received from a forward-looking camera. A visual guidance control
algorithm is designed in which real-time image processing provides the feedback signals. The
vision subsystem and control subsystem work in parallel to accomplish formation control.
Proportional navigation and line-of-sight guidance laws are used to estimate the range and
bearing of the leader vehicle using the vision subsystem. The algorithms for visual detection
and localization used here are similar to approaches for many computer vision tasks, such as
face tracking and detection, that are based on color- and texture-based features, and
non-parametric Continuously Adaptive Mean-Shift (CAMShift) algorithms are used to keep track
of the leader. This is proposed for the first time in the leader-follower framework. The
algorithms are simple but effective in real time and provide an alternative to traditional
approaches such as the Viola-Jones algorithm. Further, to stabilize the follower onto the
leader's trajectory, a sliding mode controller is used to dynamically track the leader. The
performance is demonstrated in simulation and in practical experiments.
Q-Learnıng Based Real Tıme Path Plannıng for Mobıle Robots - ijtsrd
Decision making and movement control are used by mobile robots to perform given tasks. This study presents a real-time application in which the robotic system estimates the shortest way from the robot's current location to a target point via the Q-learning algorithm, and then follows the estimated path to the target point using movement control. Q-learning is a Reinforcement Learning (RL) algorithm. In this study, it is used as the core algorithm for estimating the optimum path for a mobile robot in an environment. The environment is viewed by a camera. This study includes three phases. Firstly, the map and the locations of all objects, including the mobile robot, obstacles and target point, are determined using image processing. Secondly, the Q-learning algorithm is applied to the problem of estimating the shortest way from the robot's current location to the target point. Finally, a mobile robot with three omni wheels was developed, and experiments were carried out using this robot. Two different experiments were performed in the experimental environment, and the results obtained are shared at the end of the paper.
Halil Cetin | Akif Durdu | M. Fatih Aslan | M. Mustafa Kelek "Q-Learnıng Based Real Tıme Path Plannıng for Mobıle Robots" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29625.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/29625/q-learn%C4%B1ng-based-real-t%C4%B1me-path-plann%C4%B1ng-for-mob%C4%B1le-robots/halil-cetin
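Tabular Q-learning of the kind described can be sketched on a small grid map with obstacles. The grid size, rewards and hyperparameters below are illustrative assumptions, not the paper's camera-derived setup:

```python
import random

def q_learn_grid(width, height, goal, obstacles, episodes=2000,
                 alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a grid: learn action values for moving
    up/down/left/right toward a goal while avoiding obstacle cells."""
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    Q = {}
    def q(s, a): return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            if s == goal:
                break
            a = (rng.choice(actions) if rng.random() < eps      # explore
                 else max(actions, key=lambda a: q(s, a)))      # exploit
            nx, ny = s[0] + a[0], s[1] + a[1]
            if not (0 <= nx < width and 0 <= ny < height) or (nx, ny) in obstacles:
                nxt, r = s, -1.0           # bump: stay put, small penalty
            else:
                nxt = (nx, ny)
                r = 10.0 if nxt == goal else -0.1   # step cost shapes shortest path
            best_next = max(q(nxt, b) for b in actions)
            Q[(s, a)] = q(s, a) + alpha * (r + gamma * best_next - q(s, a))
            s = nxt
    return Q

Q = q_learn_grid(4, 4, goal=(3, 3), obstacles={(1, 1), (2, 1)})
# Greedy rollout from the start should reach the goal on the learned values
s, actions = (0, 0), [(0, 1), (0, -1), (1, 0), (-1, 0)]
for _ in range(20):
    if s == (3, 3):
        break
    a = max(actions, key=lambda a: Q.get((s, a), 0.0))
    nx, ny = s[0] + a[0], s[1] + a[1]
    if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in {(1, 1), (2, 1)}:
        s = (nx, ny)
print(s)  # expected to reach (3, 3)
```

The small per-step penalty makes the highest-value policy the shortest obstacle-free path, which matches the study's use of Q-learning as a shortest-way estimator.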
AN INNOVATIVE RESEARCH FRAMEWORK ON INTELLIGENT TEXT DATA CLASSIFICATION SYST... - ijaia
Recent years have witnessed astronomical growth in the amount of textual information available both on the web and in institutional document repositories. As a result, text mining has become extremely prevalent, and processing textual information from such repositories is now a focus for researchers. Indeed, numerous cutting-edge applications for text mining are available. More specifically, classification-oriented text mining has been gaining attention as it concentrates on measures like coverage and accuracy. Along with the huge volume of data, the aspirations of users are growing far beyond human capacity; thus, automated and competitive intelligent systems are essential for reliable text analysis. Towards this, the authors propose an Intelligent Text Data Classification System
(ITDCS), designed in light of the biological nature of the genetic approach and able to achieve
computational intelligence accurately. Initially, ITDCS focuses on preparing structured data from the huge volume of unstructured data using its procedural steps and filter methods. Subsequently, it emphasises classifying the text data into labelled classes using KNN classification based on the best features selected by a genetic algorithm. In this process, it specially concentrates on adding the power of
intelligence to the classifier through the biological parts of the genetic algorithm, namely the encoding strategy, fitness function and operators. The integration of all biological components of the genetic algorithm in ITDCS significantly improves accuracy and reduces the misclassification rate in classifying the text data.
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...CSCJournals
This paper presents the implementation of a novel technique for sensor-based path planning of autonomous mobile robots. The proposed method is based on finding free-configuration eigen spaces (FCE) in the robot actuation area. In using the FCE technique to find optimal paths for autonomous mobile robots, the underlying hypothesis is that in the low-dimensional manifolds of laser scanning data there lies an eigenvector which corresponds to the free-configuration space of the higher-order geometric representation of the environment. The vectorial combination of these eigenvectors at discrete time scan frames manifests a trajectory, whose sum can be treated as a robot path. The proposed algorithm was tested on two different test-bed data sets: real data obtained from Navlab SLAMMOT and data obtained from the real-time robotics simulation program Player/Stage. Performance analysis of the FCE technique was done against four existing path planning algorithms under certain working parameters, namely the computation time needed to find a solution, the distance travelled, and the amount of turning required by the autonomous mobile robot. This study will enable readers to identify the suitability of a path planning algorithm under the working parameters that need to be optimized. All the techniques were tested in the real-time robotic software Player/Stage. Further analysis was done using MATLAB mathematical computation software.
Feature selection approach in animal classificationsipij
In this paper, we propose a model for automatic classification of animals using different classifiers: Nearest
Neighbour, Probabilistic Neural Network, and Symbolic. Animal images are segmented using maximal
region merging segmentation, and Gabor features are extracted from the segmented animal images.
Discriminative texture features are then selected using different feature selection algorithms:
Sequential Forward Selection, Sequential Floating Forward Selection, Sequential Backward Selection, and
Sequential Floating Backward Selection. To corroborate the efficacy of the proposed method, an
experiment was conducted on our own data set of 25 classes of animals, containing 2500 samples. The
data set has different animal species with similar appearance (small inter-class variations) across different
classes and varying appearance (large intra-class variations) within a class. In addition, the images of
animals are of different poses, with cluttered backgrounds under different lighting and climatic conditions.
Experimental results reveal that the Symbolic classifier outperforms the Nearest Neighbour and Probabilistic Neural
Network classifiers.
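Sequential Forward Selection, one of the feature-selection schemes named above, can be sketched as a greedy loop that keeps adding whichever feature most improves a scoring function. The feature names, utilities, and redundancy penalty below are toy illustrations, not values from the paper:

```python
def sfs(features, score, k):
    """Sequential Forward Selection: greedily add the feature that most
    improves the score of the selected subset; stop when nothing helps."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break
        selected.append(best)
    return selected

# Toy score: per-feature utilities with a redundancy penalty for one pair.
utility = {"gabor_mean": 0.5, "gabor_var": 0.4, "noise": 0.0}
redundant = {frozenset(["gabor_mean", "gabor_var"]): 0.3}

def score(subset):
    s = sum(utility[f] for f in subset)
    for pair, penalty in redundant.items():
        if pair <= set(subset):
            s -= penalty
    return s

print(sfs(list(utility), score, 3))  # -> ['gabor_mean', 'gabor_var']
```

Sequential Backward Selection works the same way in reverse, starting from the full set and greedily dropping features.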
Beamforming with per antenna power constraint and transmit antenna selection ...sipij
In this paper, transmit beamforming and antenna selection techniques are presented for the Cooperative
Distributed Antenna System. A beamforming technique with minimum total weighted transmit power
satisfying threshold SINR and per-antenna power constraints is formulated as a convex optimization
problem for the efficient performance of the Distributed Antenna System (DAS). An antenna selection technique is
implemented to select the optimum Remote Antenna Units from all those available, which
achieves the best compromise between capacity and system complexity. Dual-polarized and triple-polarized
systems are considered. Simulation results show that integrating beamforming with DAS
enhances its performance, and that using convex optimization in antenna selection enhances the
performance of multi-polarized systems.
CANNY EDGE DETECTION BASED REAL-TIME INTELLIGENT PARKING MANAGEMENT SYSTEMJANAK TRIVEDI
Real-time traffic monitoring and parking are very important aspects
of a better social and economic system. A Python-based Intelligent Parking
Management System (IPMS) module using a USB camera and the Canny edge
detection method was developed. The current status of each real-time parking slot
is checked simultaneously, both online and via a mobile application, with a
message of "Available" or "Not available" for 10 parking slots. In
addition, the gate opens automatically when a vehicle enters the parking module and
closes when it exits, using a servomotor and sensors.
Results are displayed in figures along with the proposed method's flow chart.
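The slot-occupancy decision in such a system comes down to measuring edge content inside each slot's region of interest: an occupied slot produces many edges, an empty one few. The sketch below uses a crude gradient threshold as a stand-in for Canny (the thresholds, image sizes, and function names are illustrative assumptions, not the paper's implementation):

```python
def edge_density(gray, threshold=30):
    """Fraction of pixels whose horizontal or vertical intensity gradient
    exceeds `threshold` -- a crude stand-in for Canny edge detection."""
    h, w = len(gray), len(gray[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(gray[y][x + 1] - gray[y][x])
            gy = abs(gray[y + 1][x] - gray[y][x])
            if max(gx, gy) > threshold:
                edges += 1
    return edges / ((h - 1) * (w - 1))

def slot_status(gray, density_cutoff=0.1):
    """A car in the slot produces many edges; empty pavement produces few."""
    return "Not available" if edge_density(gray) > density_cutoff else "Available"

empty = [[100] * 8 for _ in range(8)]  # uniform pavement -> no edges
occupied = [[100 if (x + y) % 2 else 200 for x in range(8)] for y in range(8)]
print(slot_status(empty))      # -> Available
print(slot_status(occupied))   # -> Not available
```

In a real deployment the per-slot regions would be cropped from the USB camera frame and a proper Canny pass (hysteresis thresholding, non-maximum suppression) would replace the gradient test.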
COMPARATIVE STUDY ON VEHICLE DETECTION TECHNIQUES IN AERIAL SURVEILLANCEIJCI JOURNAL
Aerial surveillance systems have become a major trend over the past decades. Vehicle tracking techniques for aerial surveillance play a vital role and continue to give rise to promising techniques. Such systems can be very handy in various applications such as policing, traffic monitoring, natural disaster response, and military operations. They often cover a large area and provide a better perspective on moving objects. Moving vehicles can be detected in dynamic aerial imagery, wide-area motion imagery, or low-resolution images, as well as in static scenes. Identifying objects from an aerial view is a difficult problem because of camera angles and the mix of moving and motionless objects. This paper presents a comparative study of various vehicle detection and tracking approaches in aerial videos, with experimental results and measured working conditions, hit rates, and false alarm rates.
A VISION-BASED REAL-TIME ADAPTIVE TRAFFIC LIGHT CONTROL SYSTEM USING VEHICULA...JANAK TRIVEDI
In India, traffic control management is a difficult task due to the growing number of vehicles on the same infrastructure and systems. In the smart-city project, the Adaptive Traffic Light Control System (ATLCS) is one of the major research concerns for Intelligent Transportation System (ITS) development, aiming to reduce traffic congestion and accidents, create a healthy environment, etc. Here, we propose a Vehicular Density Value (VDV) based adaptive traffic light control method for 4-way intersections using a selection of rotation, area of interest, and a Statistical Block Matching Approach (SBMA). Graphical User Interface (GUI) and hardware-based results are shown in the results section, where we compare the normal traffic light control system with the proposed adaptive system. The same results are verified on a hardware (Raspberry Pi) device with vehicles of different sizes, colors, and shapes using the same method.
A Novel Multiple License Plate Extraction Technique for Complex Background in...CSCJournals
License plate recognition (LPR) is one of the most important applications of computer techniques in intelligent transportation systems (ITS). To recognize a license plate efficiently, locating and extracting the plate is the key step. Finding the position of a license plate in a vehicle image is therefore considered the most crucial step of an LPR system, and it greatly affects the recognition rate and overall speed of the whole system. This paper deals with license plate localization in Indian traffic conditions. Vehicles in India sometimes bear extra textual regions such as the owner's name, symbols, popular sayings, and advertisement boards in addition to the license plate. This situation demands accurate discrimination of the text class and fine aspect-ratio analysis. Additional care is taken in this paper to extract license plates of motorcycles (small, double-row plates), cars (single- as well as double-row types), and transport vehicles such as buses and trucks (dirty plates), as well as multiple license plates present in a single image frame. Disparity of aspect ratios is a typical feature of Indian traffic. The proposed method identifies the region of interest by performing a sequence of directional segmentation and morphological processing. The first step is always contrast enhancement, accomplished using a sigmoid function. In subsequent steps, connected component analysis followed by different filtering techniques, such as aspect-ratio analysis and a plate-compatible filter, is used to find the exact license plate. The proposed method is tested on a large database of 750 images taken in different conditions. The algorithm detected the license plate in 742 images, with a success rate of 99.2%.
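The sigmoid contrast enhancement mentioned as the first step can be sketched as a pointwise remapping: normalize each pixel, squash it through a logistic curve, and rescale. The gain and cutoff values here are illustrative defaults, not the paper's tuned parameters:

```python
import math

def sigmoid_stretch(pixels, gain=10.0, cutoff=0.5, max_val=255.0):
    """Sigmoid contrast enhancement: dark pixels are pushed darker and
    bright pixels brighter, steepening contrast around `cutoff`."""
    out = []
    for p in pixels:
        x = p / max_val                                # normalize to [0, 1]
        y = 1.0 / (1.0 + math.exp(gain * (cutoff - x)))  # logistic curve
        out.append(round(max_val * y))
    return out

# Mid-grey values get spread toward the extremes.
print(sigmoid_stretch([64, 128, 192]))
```

Higher `gain` makes the curve steeper (harder contrast); `cutoff` sets which normalized intensity maps to mid-grey.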
Projection Profile Based Number Plate Localization and Recognition csandit
This paper proposes algorithms to localize vehicle number plates from natural background images, to segment the characters from the localized number plates, and to recognize the segmented characters. The reported system is tested on a dataset of 560 sample images captured with different backgrounds under various illuminations. The performance accuracy of the proposed system has been calculated at each stage: 97.1%, 95.4%, and 95.72% for localisation & extraction, character segmentation, and character recognition respectively. The proposed method is also capable of localising and recognising multiple number plates in images.
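Character segmentation by projection profile, as used in such systems, can be sketched in a few lines: sum foreground pixels per column of the binarized plate, then split wherever the profile drops to zero. The toy plate image below is illustrative:

```python
def vertical_projection(binary):
    """Column-wise count of foreground (1) pixels in a binarized plate image."""
    return [sum(col) for col in zip(*binary)]

def segment_characters(binary):
    """Split columns into character spans wherever the projection is non-zero;
    gaps of empty columns mark the boundaries between characters."""
    profile = vertical_projection(binary)
    spans, start = [], None
    for x, v in enumerate(profile):
        if v and start is None:
            start = x
        elif not v and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "characters" separated by one empty column.
plate = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(segment_characters(plate))  # -> [(0, 2), (3, 5)]
```

A horizontal projection over rows works the same way for locating the plate's text band before column segmentation.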
Leader follower formation control of ground vehicles using camshift based gui...ijma
Autonomous ground vehicles have been designed for formation control that relies on ranging and bearing
information received from a forward-looking camera. A visual guidance control
algorithm is designed in which real-time image processing provides the feedback signals. The vision
subsystem and control subsystem work in parallel to accomplish formation control. Proportional
navigation and line-of-sight guidance laws are used to estimate the range and bearing of the
leader vehicle from the vision subsystem. The vision detection and localization algorithms used here
are similar to approaches for many computer vision tasks, such as face tracking and detection, that are
based on color- and texture-based features, with a non-parametric Continuously Adaptive Mean-shift (CAMShift) algorithm
keeping track of the leader. This is proposed for the first time in the leader-follower framework. The
algorithms are simple but effective in real time and provide an alternative to traditional
approaches such as the Viola-Jones algorithm. Further, to stabilize the follower on the leader's trajectory, a
sliding mode controller is used to dynamically track the leader. The performance is
demonstrated in simulation and in practical experiments.
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Q-Learning Based Real Time Path Planning for Mobile Robotsijtsrd
Decision making and movement control are used by mobile robots to perform given tasks. This study presents a real-time application in which the robotic system estimates the shortest path from the robot's current location to a target point via the Q-learning algorithm and decides to travel to the target along the estimated path using movement control. Q-learning is a Reinforcement Learning (RL) algorithm; in this study, it is used as the core algorithm for estimating the optimal path for a mobile robot in an environment viewed by a camera. The study has three phases. First, the map and the locations of all objects, including the mobile robot, obstacles, and target point, are determined using image processing. Second, the Q-learning algorithm is applied to estimate the shortest path from the robot's current location to the target point. Finally, a mobile robot with three omni wheels was developed and experiments were carried out using it. Two different experiments were performed in the experimental environment, and the results are shared at the end of the paper. Halil Cetin | Akif Durdu | M. Fatih Aslan | M. Mustafa Kelek "Q-Learning Based Real Time Path Planning for Mobile Robots" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29625.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/29625/q-learn%C4%B1ng-based-real-t%C4%B1me-path-plann%C4%B1ng-for-mob%C4%B1le-robots/halil-cetin
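The Q-learning core described above can be illustrated with a minimal tabular sketch on a toy 1-D corridor; the state space, reward, and hyperparameters here are illustrative assumptions, not the paper's setup:

```python
import random

def train_q(n_states=6, goal=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor of states 0..n_states-1.
    Actions: 0 = step left, 1 = step right; reward 1 on reaching the goal."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != goal:
            if random.random() < eps:                    # explore
                a = random.randrange(2)
            else:                                        # exploit
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Bellman update toward reward plus discounted best next value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_path(q, goal=5, max_steps=20):
    """Follow the learned greedy policy from state 0 to the goal."""
    s, path = 0, [0]
    while s != goal and len(path) < max_steps:
        s = s - 1 if q[s][0] > q[s][1] else s + 1
        path.append(s)
    return path

q = train_q()
print(greedy_path(q))  # -> [0, 1, 2, 3, 4, 5]
```

In the paper's setting the states would be grid cells of the camera-derived map and obstacles would be excluded from the transition function, but the update rule is the same.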
Global threshold and region based active contour model for accurate image seg...sipij
In this contribution, we develop a novel global threshold-based active contour model. This model deploys a new
edge-stopping function to control the direction of the evolution and to stop the evolving contour at weak or
blurred edges. Implementing the model requires the selective binary and Gaussian filtering
regularized level set (SBGFRLS) method, which uses either a selective local or global segmentation
property. It penalizes the level set function to force it to become a binary function, followed by
Gaussian regularisation: the Gaussian filtering smooths the level set function and stabilises the evolution
process. One of the merits of our proposed model is the ability to initialise the contour anywhere inside
the image to extract object boundaries. The proposed method is found to perform well, notably when the
intensities inside and outside the object are homogeneous. Our method is applied with satisfactory results on
various types of images, including synthetic, medical, and Arabic-character images.
Lossless image compression using new biorthogonal waveletssipij
Even though a large number of wavelets exist, new wavelets are needed for specific applications. One
of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and
symmetric, and symmetry is required for perfect reconstruction. Hence the need for orthogonal and symmetric wavelets
arises. The solution came in the form of biorthogonal wavelets, which preserve the perfect reconstruction
condition. Though a number of biorthogonal wavelets have been proposed in the literature, this paper proposes four new
biorthogonal wavelets that give better compression performance. The new wavelets are
compared with traditional wavelets using the design metrics Peak Signal to Noise Ratio (PSNR) and
Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was utilized for
image compression.
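The two evaluation metrics named above, PSNR and CR, are simple to state concretely. The pixel values and byte counts below are made-up illustrations:

```python
import math

def psnr(original, compressed, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length pixel sequences:
    10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR: how many times smaller the compressed representation is."""
    return original_bytes / compressed_bytes

orig = [10, 20, 30, 40]
recon = [11, 19, 30, 41]
print(round(psnr(orig, recon), 2))    # MSE = 0.75 -> about 49.38 dB
print(compression_ratio(4096, 1024))  # -> 4.0
```

For lossless compression PSNR is infinite by definition, so CR is the metric that differentiates wavelet choices there; PSNR matters when the SPIHT bitstream is truncated to a target rate.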
Image retrieval and re ranking techniques - a surveysipij
There is a huge amount of research work focusing on the searching, retrieval and re-ranking of images in
the image database. The diverse and scattered work in this domain needs to be collected and organized for
easy and quick reference.
Relating to the above context, this paper gives a brief overview of various image retrieval and re-ranking
techniques. Starting with an introduction to existing systems, the paper proceeds through the core
architecture of an image harvesting and retrieval system to the different re-ranking techniques. These
techniques are discussed in terms of approaches, methodologies, and findings, and are listed in tabular form
for quick review.
Robust content based watermarking algorithm using singular value decompositio...sipij
Nowadays, image content is frequently subject to malicious manipulation. To protect images
from such illegal manipulation, the computer science community has recourse to watermarking techniques. To
protect digital multimedia content, we need only embed an invisible watermark into the images, which
facilitates the detection of manipulations, duplication, and illegitimate distribution of these images. In
this work, a robust watermarking technique is presented that embeds invisible watermarks into colour
images using block-by-block singular value decomposition of a robust transform of the image, the radial
symmetry transform. Each bit of the watermark is inserted into a block of eight pixels of the blue
channel at a high singular value of the corresponding block in the radial symmetry map. We justify
insertion in the blue channel by the human eye's weak sensitivity to perturbations in this colour channel. We
also present results obtained with different tests: we tested the imperceptibility of the mark using this
approach as well as its robustness against several attacks.
A voting based approach to detect recursive order number of photocopy documen...sipij
Photocopied documents are very common in everyday life. People are permitted to carry and present
photocopied documents to avoid damage to the originals, but this provision is misused for
temporary benefit by fabricating fake photocopied documents. Fabrication of a fake photocopied document
is possible only at the 2nd and higher recursive orders of photocopying. Whenever a photocopied document
is submitted, its originality may need to be checked. When the document is a 1st-order photocopy, the chance
of fabrication may be ignored; when the photocopy order is 2nd or above, the probability of
fabrication may be suspected. Hence, when a photocopied document is presented, the recursive order number
of the photocopy must be estimated to ascertain originality, which demands methods
to estimate the order number. In this work, a voting-based approach is proposed to detect the
recursive order number of a photocopied document using the exponential, extreme value, and lognormal
probability distributions. A detailed experiment performed on a generated
data set shows the method achieves efficiency close to 89%.
IDENTIFICATION OF SUITED QUALITY METRICS FOR NATURAL AND MEDICAL IMAGESsipij
Assessing the quality of the denoised image is one of the important tasks in image denoising applications.
Numerous quality metrics, each with particular characteristics, have been proposed by researchers. In
practice, the image acquisition systems for natural and medical images differ, so the noise introduced in
these images also differs in nature. Considering this fact, the authors try to identify the
quality metrics suited to Gaussian-, speckle-, and Poisson-corrupted natural, ultrasound, and X-ray images
respectively. In this paper, sixteen different full-reference quality metrics are evaluated with
respect to noise variance, and the metric suited to each particular type of noise is identified. A strong need to
develop noise-dependent quality metrics is also identified in this work.
Contrast enhancement using various statistical operations and neighborhood pr...sipij
Histogram equalization is a simple and effective contrast enhancement technique. Despite its popularity,
histogram equalization still has some limitations: it produces artifacts and unnatural images, and local
details are not considered. Due to these limitations, many other equalization techniques have been
derived from it with various upgrades. In the proposed method, statistics play an important role in image
processing: statistical operations are applied to the image to obtain the desired result, such as
manipulation of brightness and contrast. Thus, a novel algorithm using statistical operations and
neighborhood processing is proposed in this paper, and it has proven effective for
contrast enhancement in both theory and experiment.
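The baseline that such methods improve upon, plain histogram equalization, is itself a short statistical operation: build the histogram, accumulate it into a CDF, and use the normalized CDF as a lookup table. This is a generic sketch of the classical algorithm, not the paper's proposed neighborhood variant:

```python
def equalize(pixels, levels=256):
    """Classical histogram equalization: remap each intensity through the
    image's normalized cumulative distribution function (CDF)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c)       # first non-zero CDF value
    span = (len(pixels) - cdf_min) or 1       # avoid /0 for flat images
    lut = [(c - cdf_min) * (levels - 1) // span for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast strip of intensities gets spread over the full 0..255 range.
low_contrast = [100, 101, 101, 102, 102, 102, 103]
print(equalize(low_contrast))  # -> [0, 85, 85, 212, 212, 212, 255]
```

The artifacts the abstract mentions come precisely from this global remapping ignoring local structure, which is what neighborhood processing is meant to fix.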
A Novel Uncertainty Parameter SR ( Signal to Residual Spectrum Ratio ) Evalua...sipij
Hearing-impaired people usually use hearing aids that implement speech enhancement
algorithms. Estimation of speech and estimation of noise are the components of a single-channel speech
enhancement system, and the main objective of any speech enhancement algorithm is estimating the noise power
spectrum in non-stationary environments. A VAD (Voice Activity Detector) is used to identify speech pauses,
during which alone the noise is estimated. The MMSE (Minimum Mean Square Error) speech
enhancement algorithm does not enhance intelligibility, quality, or listener fatigue, which are the perceptual
aspects of speech. A novel evaluation approach, SR (Signal-to-Residual spectrum ratio), based on an uncertainty
parameter, is introduced to control distortions for the benefit of hearing-impaired people in non-stationary
environments. Noise is estimated and updated by dividing the original signal into three parts,
pure speech, quasi-speech, and non-speech frames, based on multiple threshold conditions. Different
values of SR and LLR demonstrate the amount of attenuation and amplification distortion. The proposed
method is compared with the WAT (Weighted Average Technique) and MMSE (Minimum Mean
Square Error) methods using the parameters SR (signal-to-residual spectrum ratio) and LLR (log-likelihood
ratio), in terms of segmental SNR and LLR.
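Segmental SNR, one of the evaluation quantities named above, averages per-frame SNR instead of computing one global ratio, so short distorted frames are not masked by loud ones. The frame length and signals below are toy illustrations, not the paper's evaluation data:

```python
import math

def segmental_snr(clean, processed, frame=4):
    """Average per-frame SNR in dB between a clean reference signal and a
    processed (enhanced) signal; frames with zero energy or error are skipped."""
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        sig = sum(c * c for c in clean[i:i + frame])
        err = sum((c - p) ** 2
                  for c, p in zip(clean[i:i + frame], processed[i:i + frame]))
        if sig > 0 and err > 0:
            snrs.append(10 * math.log10(sig / err))
    return sum(snrs) / len(snrs) if snrs else float("inf")

clean = [1.0, -2.0, 3.0, -4.0, 2.0, -1.0, 2.0, -3.0]
noisy = [c + 0.1 for c in clean]    # constant small residual
print(round(segmental_snr(clean, noisy), 1))  # -> 27.6
```

A smaller residual spectrum (better enhancement) raises this score, which is the same intuition behind the paper's SR measure in the spectral domain.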
Application of parallel algorithm approach for performance optimization of oi...sipij
This paper gives a detailed study of the performance of an image filter algorithm with various parameters
applied to an RGB-model image. Various popular image filters consume a large amount
of computing resources. The oil paint filter is one of the most interesting and is
very performance-hungry: with increasing kernel size, its processing time increases exponentially.
The current research seeks to improve the oil paint image filter algorithm using a
parallel patterns library. Questions about a faster oil paint filter have also been asked repeatedly
in various blogs and forums.
An intensity based medical image registration using genetic algorithmsipij
Medical imaging plays a vital role in creating images of the human body for clinical purposes. Biomedical
imaging has taken a leap by entering the field of image registration, which integrates the
large amount of medical information embedded in images taken at different time intervals and at
different orientations. In this paper, an intensity-based real-coded genetic algorithm is used to
register two MRI images. To demonstrate the efficiency of the algorithm developed, the alignment of the
image is altered and the algorithm is tested for performance. The work also compares
two similarity metrics and, based on the outcome, identifies the metric best suited for the genetic algorithm.
Parallax Effect Free Mosaicing of Underwater Video Sequence Based on Texture ...sipij
In this paper, we present a feature-based technique for constructing a mosaic image from an underwater video
sequence, which suffers from parallax distortion due to the propagation properties of light in the underwater
environment. Most available mosaic tools and underwater image mosaicing techniques yield
final results with artifacts such as blurring, ghosting, and seams due to parallax in the input
images. Removing parallax from the input images may not reduce its effects; instead, it must be corrected
in successive steps of mosaicing. Thus, our approach minimizes parallax effects by adopting an efficient
local alignment technique after global registration. We extract texture features using the Centre-Symmetric
Local Binary Pattern (CS-LBP) descriptor to find feature correspondences, which are then used
to estimate a homography through RANSAC. To increase the accuracy of global registration,
we perform preprocessing such as colour alignment between two selected frames based on colour
distribution adjustment. Because consecutive frames of underwater video overlap by nearly 100%,
we select frames with minimum overlap based on mutual offset to reduce the computational cost
of mosaicing. Our approach considerably minimizes parallax effects in the final mosaics constructed
from our own underwater video sequences.
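The CS-LBP descriptor named above compares the four centre-symmetric pairs of a pixel's 8-neighbourhood rather than comparing each neighbour to the centre, halving the code length versus classic LBP. The sketch below computes one 4-bit code for a single pixel; the patch values are illustrative:

```python
def cs_lbp(patch, y, x, t=0):
    """Centre-Symmetric LBP code for pixel (y, x): compare the 4
    centre-symmetric neighbour pairs, setting one bit per pair whose
    intensity difference exceeds threshold t. Result is in 0..15."""
    pairs = [((y - 1, x - 1), (y + 1, x + 1)),   # diagonal \
             ((y - 1, x),     (y + 1, x)),       # vertical
             ((y - 1, x + 1), (y + 1, x - 1)),   # diagonal /
             ((y, x + 1),     (y, x - 1))]       # horizontal
    code = 0
    for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
        if patch[y1][x1] - patch[y2][x2] > t:
            code |= 1 << bit
    return code

patch = [[9, 2, 3],
         [4, 5, 6],
         [7, 8, 1]]
print(cs_lbp(patch, 1, 1))  # -> 9 (bits 0 and 3 set)
```

Histograms of these codes over interest-point neighbourhoods form the descriptors that are matched between frames before RANSAC estimates the homography.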
Offline handwritten signature identification using adaptive window positionin...sipij
To address this challenge, the paper proposes the use of an Adaptive Window Positioning
technique that focuses not just on the meaning of the handwritten signature but also on the individuality
of the writer. This innovative technique divides the handwritten signature into small windows of size n x n
(13x13), large enough to contain ample information about the style of the author and
small enough to ensure good identification performance. The process was tested on a GPDS dataset
containing 4870 signature samples from 90 different writers by comparing the robust features of the test
signature with those of the user's signature using an appropriate classifier. Experimental results reveal that
the adaptive window positioning technique is an efficient and reliable method for accurate
signature feature extraction in the identification of offline handwritten signatures. The
technique can also be used to detect signatures signed under emotional duress.
Speaker Identification From Youtube Obtained Datasipij
An efficient and intuitive algorithm is presented for identifying speakers in long recordings (such as
a long YouTube discussion or cocktail-party audio or video). The goal of automatic speaker
identification is to determine the number of different speakers and build a model for each speaker by
extracting and characterizing the speaker-specific information contained in the speech signal. It has many
diverse applications, especially in surveillance, immigration control at airports, cyber security, and
transcription of multi-source recordings with similar-sounding sources, where attributing the transcript is difficult.
The most common speech parameterizations used in speaker verification, K-means and cepstral analysis, are
detailed. Gaussian mixture modeling, the speaker modeling technique, is then explained. Gaussian
mixture models (GMMs), perhaps the most robust machine learning approach here, are introduced to
examine text-independent speaker identification carefully. The use of
Gaussian mixture models for monitoring and analysing speaker identity is motivated by the
experience that Gaussian spectra depict the characteristics
of a speaker's spectral pattern, and by the remarkable ability of GMMs to model arbitrary
densities. We then illustrate Expectation Maximization, an iterative algorithm that starts from an
arbitrary initial estimate and iterates until the values
converge. Aiming for 85~95% accuracy with speaker models based on vector quantization
and Gaussian mixture models, our experiments achieved identification rates of 79~82%
using vector quantization and 85~92.6% using GMM modeling
with Expectation Maximization parameter estimation, depending on parameter variation.
Review of ocr techniques used in automatic mail sorting of postal envelopessipij
This paper presents a review of various OCR techniques used in the automatic mail sorting process. A complete description of the various existing methods for address block extraction and digit recognition used in the literature is given. The objective of this study is to provide a complete overview of the methods and techniques used by many researchers for automating the mail sorting process in postal services in various countries. The significance of Zip code or Pincode recognition is discussed.
A combined method of fractal and glcm features for mri and ct scan images cla...sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images; it does not, however, capture local image structure. In this paper, we present a combination of this box-counting-based parameter with GLCM features. This combination has yielded good results, especially in the classification of medical textures from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
Monitoring traffic in urban areas is an important task for intelligent transport applications that aim to alleviate problems such as traffic jams and long trip times. Urban traffic flow is more complicated than highway traffic flow because of the slow movement of vehicles and crowded conditions. In this paper, a vehicle detection and classification system for intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking and vehicle classification. In vehicle detection, background subtraction with the mixture of Gaussians (MoG) algorithm is used to detect moving vehicles, and a shadow removal algorithm is developed to improve the detection phase by eliminating undesired detected regions (shadows). After the detection phase, vehicles are tracked until they reach the classification line, where the vehicle dimensions are used to classify them into three classes (cars, bikes and trucks). The system maintains one counter per class; when a vehicle is classified, the corresponding counter is incremented. The counting results can be used to estimate the traffic density at intersections and adjust the timing of the traffic light for the next cycle. The system is applied to videos obtained from stationary cameras, and the results demonstrate its robustness and accuracy.
MODEL BASED TECHNIQUE FOR VEHICLE TRACKING IN TRAFFIC VIDEO USING SPATIAL LOC...mlaij
In this paper, we propose a novel method for visible vehicle tracking in traffic video sequences using a model-based strategy combined with spatial local features. Our tracking algorithm consists of two components: vehicle detection and vehicle tracking. In the detection step, we subtract the background to obtain candidate foreground objects represented as a foreground mask; vehicles are then detected using the Co-HOG descriptor. In the tracking step, a vehicle model is constructed from shape and texture features extracted from vehicle regions using the Co-HOG and CS-LBP methods. After the model is constructed, features are extracted from each vehicle region in the current frame and the vehicle model is updated. Finally, vehicles are tracked based on the similarity between current-frame vehicles and the vehicle models. The proposed algorithm is evaluated using precision, recall and VTA metrics on the GRAM-RTM and i-Lids datasets. The experimental results demonstrate that our method achieves good accuracy.
TRAFFIC-SIGN RECOGNITION FOR AN INTELLIGENT VEHICLE/DRIVER ASSISTANT SYSTEM U...cseij
In order to be deployed in driving environments, Intelligent transport system (ITS) must be able to
recognize and respond to exceptional road conditions such as traffic signs, highway work zones and
imminent road works automatically. Traffic sign recognition plays a vital role in the intelligent transport system; it enhances traffic safety by providing drivers with safety and precautionary information about road hazards. The proposed system recognizes traffic signs in three phases: traffic board detection, feature extraction and recognition. The detection phase consists of RGB-based colour thresholding and shape analysis, which offers robustness to differences in lighting conditions. A Histogram of Oriented Gradients (HOG) technique is adopted to extract features from the segmented output. Finally, traffic sign recognition is performed by k-Nearest Neighbours (k-NN) classifiers, achieving a classification accuracy of up to 63%.
Real Time Myanmar Traffic Sign Recognition System using HOG and SVMijtsrd
Traffic sign recognition is one of the most important research topics for enabling autonomous vehicle driving systems. In order to be deployed in driving environments, an intelligent transport system must be able to recognize and respond to exceptional road conditions such as traffic signs, highway work zones and imminent road works automatically. In this paper, a Real-Time Myanmar Traffic Sign Recognition System (RMTSRS) is proposed. The incoming video stream is fed into a computer vision pipeline, and each incoming frame is segmented using a colour threshold method for traffic sign detection. A Histogram of Oriented Gradients (HOG) technique is used to extract features from the segmented traffic sign, and RMTSRS then classifies the traffic sign type using a Support Vector Machine (SVM). The system achieves a classification accuracy of up to 98%. Myint Tun | Thida Lwin, "Real-Time Myanmar Traffic Sign Recognition System using HOG and SVM", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27929.pdf Paper URL: https://www.ijtsrd.com/computer-science/real-time-computing/27929/real-time-myanmar-traffic-sign-recognition-system-using-hog-and-svm/myint-tun
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITIONcscpconf
This paper proposes algorithms to localize vehicle number plates in natural background images, to segment the characters from the localized number plates, and to recognize the segmented characters. The reported system is tested on a dataset of 560 sample images captured against different backgrounds under various illuminations. The performance accuracy of the proposed system has been calculated at each stage: 97.1%, 95.4% and 95.72% for localisation & extraction, character segmentation and character recognition respectively. The proposed method is also capable of localising and recognising multiple number plates in an image.
Application of improved you only look once model in road traffic monitoring ...IJECEIAES
The present research focuses on developing an intelligent traffic management solution for tracking vehicles on roads. Our proposed work builds an improved you only look once (YOLOv4) traffic monitoring system that uses the CSPDarknet53 architecture as its foundation. A Deep SORT learning methodology for vehicle multi-target detection from traffic video is also part of our study. We include features such as the Kalman filter, which estimates the state of unknown objects and can track moving targets, while the Hungarian technique associates each object with the correct frame. We use an enhanced object detection network design and new data augmentation techniques with YOLOv4, which ultimately aids traffic monitoring. Until recently, object identification models could either be accurate or be fast, but not both; YOLOv4 was a big improvement, delivering astoundingly good performance at a very high frames per second (FPS). The current study develops an intelligent video-surveillance-based vehicle tracking system that tracks vehicles using a neural network, image-based tracking, and YOLOv4. Real video sequences of road traffic are used to test the effectiveness of the suggested method. Simulations demonstrate that the suggested technique significantly increases graphics processing unit (GPU) speed and FPS compared to baseline algorithms.
VDIS: A System for Morphological Detection and Identification of Vehicles in ...rsmahabir
With the growth of urban centers worldwide, the number of vehicles in and around these areas has also increased. Traffic-related data plays an important role in spatial planning, for example, in optimizing road networks and in the estimation or simulation of air and noise pollution. This information is important as it reflects the changes taking place around us. Additionally, the data collected can be used for a wide array of applications including law enforcement, fleet management, and supporting other analyses at varying scales. In this paper, we present a method for the detection and identification of vehicles from low-altitude, high-spatial-resolution red-green-blue (RGB) images, utilizing both object spectra and image morphology. Results show an identification performance upwards of 62%, with false positives arising from images with sun glare and from vehicles with similar spectral values.
Statistics indicate that most road accidents occur due to a lack of time to react to sudden traffic events. This problem can be addressed by self-driving vehicles with automated systems that detect such events. The Autonomous Vehicle Navigation System (ATS) has become a standard component of the Intelligent Transport System (ITS), and many Driver Assistance Systems (DAS) have been adopted to support these advanced autonomous vehicles (IAVs). To develop such recognition systems for automated self-driving cars, it is important to monitor and operate on traffic events in real time: an automated vehicle must detect and respond to each traffic event correctly. This paper proposes such a system, applying image recognition to detect and respond to a road blocker by means of real-time distance measurement. To study the performance of the road blocker detection and distance calculation, various experiments were conducted using the Shalom frame dataset; the approach achieved detection accuracy and precision of 99% and 100%, and distance calculation accuracy and precision of 97% and 99%.
Vehicle Tracking Using Kalman Filter and Featuressipij
Vehicle tracking has a wide variety of applications. The image resolution of the video available from most traffic camera systems is low, and in many cases distinguishing multiple similar objects from one another is not easy. In this paper we describe a method for tracking multiple objects, where the objects are vehicles and their number is unknown and varies. We detect all moving objects and track each vehicle using a Kalman filter together with its colour features and its displacement from one frame to the next, so the method can distinguish and track all vehicles individually. The proposed algorithm can be applied to multiple moving objects.
A computer vision-based lane detection approach for an autonomous vehicle usi...Md. Faishal Rahaman
Lane detection systems play a critical role in ensuring safe and secure driving by alerting the driver to lane departures; they may also save passengers' lives if a vehicle drifts off the road owing to driver distraction. The article presents a three-step approach for detecting lanes in high-speed video images in real time under varying lighting. The first phase involves appropriate preprocessing, such as noise reduction, RGB to grey-scale conversion, and binarizing the input picture. Then a polygonal area in front of the vehicle is picked as the zone of interest to accelerate processing. Finally, edge detection is used to acquire the image's edges in the area of interest, and the Hough transform is used to identify lanes on both sides of the vehicle. The suggested approach was implemented using the IROADS database as a data source and is effective in various daylight circumstances, including sunny, snowy, and rainy days, as well as inside tunnels. The proposed approach processes a frame in 28 milliseconds on average and has a detection accuracy of 96.78 per cent, as shown by the implementation results. This article aims to provide a simple technique for identifying road lines in high-speed video images using the edge feature.
Neural Network based Vehicle Classification for Intelligent Traffic Controlijseajournal
Nowadays, the number of vehicles has increased and traditional traffic control systems cannot meet current needs, which has led to the emergence of Intelligent Traffic Controlling Systems. They improve traffic control and urban management and increase the confidence index on roads and highways. The goal of this article is vehicle classification based on neural networks. In this research, a fixed camera located fairly close to the road surface is used to detect and classify the vehicles. The algorithm comprises two general phases: first, moving vehicles are obtained from the traffic scene using image processing techniques including background removal, edge detection and morphological operations. In the second phase, vehicles near the camera are selected and specific features are extracted and processed. These features are fed to the neural network as a vector, and the outputs determine the type of vehicle. The presented model is able to classify vehicles into three classes: heavy vehicles, light vehicles and motorcycles. Results demonstrate the accuracy of the algorithm and its high level of performance.
A computer vision-based lane detection technique using gradient threshold and...IJECEIAES
Automatic lane detection for driver assistance is a significant component in developing advanced driver assistance systems and high-level application frameworks, since it contributes to driver and pedestrian safety on roads and highways. However, it remains a challenging task due to several limitations that lane detection systems must overcome, such as the uncertainty of lane patterns, perspective effects, limited visibility of lane lines, dark spots, complex backgrounds, illuminance, and light reflections. The proposed method employs vision-based technologies to determine the lane boundary lines. We devised a system for correctly identifying lane lines on a homogeneous road surface. Lane line detection relies heavily on gradient and hue-lightness-saturation (HLS) thresholding, which detects the lane line in binary images. The lanes are rendered, and a sliding-window search method is used to estimate the colour lane. The proposed system achieved 96% accuracy in detecting lane lines on different roads, and its performance was assessed using data from several road image databases under various illumination circumstances.
License Plate Recognition using Morphological Operation. Amitava Choudhury
This paper describes an efficient technique for locating and extracting a license plate and recognizing each segmented character. The proposed model can be subdivided into four parts: digitization of the image, edge detection, separation of characters and template matching. In this work, we propose a method based on morphological operations, where different Structuring Elements (SE) are used to maximally eliminate the non-plate region and enhance the plate region. Character segmentation is done using Connected Component Analysis, and correlation-based template matching is used for recognition of characters. The system is implemented using MATLAB 7.4.0 and is mainly applicable to Indian license plates.
Signal & Image Processing : An International Journal (SIPIJ) Vol.4, No.4, August 2013
DOI : 10.5121/sipij.2013.4403
FRONT AND REAR VEHICLE DETECTION USING
HYPOTHESIS GENERATION AND VERIFICATION
Nima Khairdoost, S. Amirhassan Monadjemi and Kamal Jamshidi
Department of Computer Engineering, Faculty of Engineering,
University of Isfahan, Isfahan, 81746, Iran
{n.kheirdoost, monadjemi, jamshidi}@eng.ui.ac.ir
ABSTRACT
Vehicle detection in traffic scenes is an important issue in driver assistance systems and self-guided vehicles, and includes the two stages of Hypothesis Generation (HG) and Hypothesis Verification (HV). Both stages are important and challenging. In the first stage, potential vehicles are hypothesized, and in the second stage, all hypotheses are verified and classified into vehicle and non-vehicle classes. In this paper, we present a method for detecting front and rear on-road vehicles without lane information or prior knowledge about the position of the road. In the HG stage, a three-step method employing shadow, texture and symmetry clues is applied. In the HV stage, we extract Pyramid Histograms of Oriented Gradients (PHOG) features from a traffic image as basic features to detect vehicles. Principal Component Analysis (PCA) is applied to these PHOG feature vectors as a dimension reduction tool to obtain the PHOG-PCA vectors. Then, we use a Genetic Algorithm (GA) and a linear Support Vector Machine (SVM) to improve the performance and generalization of the PHOG-PCA features. Experimental results show that the proposed HV stage achieves more than 97% correct classification on realistic on-road vehicle dataset images, with better classification accuracy than comparable approaches.
KEYWORDS
Vehicle Detection, Hypothesis Generation, Hypothesis Verification, PHOG, PCA, GA, Linear SVM,
Feature Weighting
1. INTRODUCTION
Each year, at least 1.2 million people die in vehicle accidents worldwide and at least 10 million are injured. Property damage, hospital bills and other costs associated with vehicle accidents are predicted to add up to 1-3% of the world's gross domestic product [1].
There are at least three reasons for the increasing research in this area: 1) statistics show that most deaths in vehicle accidents are caused by collisions with other vehicles, 2) machine vision algorithms have improved, and 3) low-cost, high-performance computing hardware is now available [1]. Consequently, the development of on-board automotive driver assistance systems, which aim to alert a driver to possible collisions with other vehicles and to the driving environment, has attracted a lot of attention over the last 20 years among vehicle manufacturers, safety experts and universities. Several national and international projects have been launched over the past several years to research new technologies for reducing accidents and improving safety [2].
Robust and reliable vehicle detection in images is the critical step for these systems and self-
guided vehicles as well as traffic controllers. This is a very challenging task since it is not only
affected by the size, shape, color, and pose of vehicles, but also by lighting conditions, weather,
dynamic environments and the surface of different roads. A vehicle detection system must also
distinguish vehicles from all other visual patterns that exist in the world, such as similar-looking rectangular objects [3].
Almost every vehicle detection system includes two basic stages: 1) Hypothesis Generation (HG), which hypothesizes all regions in the image that potentially contain a vehicle, and 2) Hypothesis Verification (HV), which verifies the hypotheses [4,5].
Various HG methods have been suggested in the literature; they can be classified into three basic categories [1]: 1) knowledge-based, 2) stereo-based and 3) motion-based. Knowledge-based
methods employ information about color and vehicle shape as well as general information about
the context such as: a) shadow [6,7], b) symmetry [8,9,10], c) horizontal/vertical edges [11,5,12],
d) color [13,14], e) texture [15,16], and f) vehicle lights [17,18]. Stereo-based approaches usually
employ the Inverse Perspective Mapping (IPM) to estimate the locations of people, vehicles and
obstacles [19,20,21] in the images. Motion-based approaches detect objects such as people, vehicles and obstacles using optical flow [22,23]. However, generating a displacement vector for each pixel is time-consuming and impractical for real-time systems. To address this problem, discrete methods employ image features such as colour blobs [24] or local intensity minima and maxima [25].
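A minimal, discrete alternative to dense optical flow is block matching: rather than computing a displacement for every pixel, the displacement of a small candidate region is estimated by exhaustive search in the next frame. The block size, search radius, and sum-of-absolute-differences cost below are illustrative assumptions, not values from the cited works.

```python
import numpy as np

def block_displacement(prev, curr, top, left, size=8, radius=4):
    """Estimate a block's inter-frame displacement by exhaustive search,
    comparing candidate positions with the sum of absolute differences."""
    block = prev[top:top + size, left:left + size].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + size > curr.shape[0] or l + size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            cand = curr[t:t + size, l:l + size].astype(float)
            err = np.abs(block - cand).sum()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Toy frames: a bright 8x8 patch moves 2 px down and 3 px right.
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 1.0
curr = np.zeros((32, 32)); curr[10:18, 11:19] = 1.0
print(block_displacement(prev, curr, 8, 8))   # (2, 3)
```

Matching only a handful of feature blocks per frame keeps the cost far below per-pixel flow, which is why discrete methods are attractive for real-time systems.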
In the HV stage, the correctness of the hypotheses is verified and they are sorted into vehicle and non-vehicle classes. The HV approaches can be divided into two categories [1]: 1) template-based and 2) appearance-based. Template-based methods employ predefined patterns of the vehicle class and perform correlation between the template and the image. In [26], an HV algorithm was proposed based on the presence of license plates and rear windows, which can be considered a loose template of the vehicle class. Handmann et al. [27] employed a 'U'-shaped template describing the bottom and side edges of a vehicle: during verification, if the 'U' shape was found, the image region was considered a vehicle.
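The correlation step behind such template-based verification can be sketched with normalised cross-correlation; the toy 'U' template and the acceptance threshold below are illustrative assumptions, not the values used in [27].

```python
import numpy as np

def ncc(region, template):
    """Normalised cross-correlation between an image region and a template."""
    r = region - region.mean()
    t = template - template.mean()
    denom = np.sqrt((r * r).sum() * (t * t).sum())
    return float((r * t).sum() / denom) if denom > 0 else 0.0

def verify_by_template(region, template, threshold=0.7):
    """Accept the hypothesis as a vehicle if correlation exceeds a threshold."""
    return ncc(region, template) >= threshold

# A crude 'U'-shaped edge template: strong responses on the bottom and sides.
template = np.zeros((8, 8))
template[-1, :] = 1.0      # bottom edge
template[:, 0] = 1.0       # left side
template[:, -1] = 1.0      # right side

print(verify_by_template(template.copy(), template))   # a perfect match passes
print(verify_by_template(np.zeros((8, 8)), template))  # an empty region fails
```

In practice the template would be correlated against an edge map of the hypothesized region rather than raw intensities, but the accept/reject decision has this shape.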
In appearance-based methods, the characteristics of vehicle appearance are learned from a set of training images which capture the variability of the vehicle class. Usually, the variability of the non-vehicle class is also modelled to improve performance. First, each training image is represented by a set of global or local features [4]. Then, the decision boundary between the vehicle and non-vehicle classes is learned either by training a classifier (e.g. Support Vector Machine, Adaboost or Neural Network) or by modelling the probability distribution of the features in each class (e.g. employing the Bayes rule assuming Gaussian distributions) [28,29,30]. In [31], Principal Component Analysis (PCA) was used for feature extraction and a linear Support Vector Machine (SVM) for classification of vehicle images. Goerick et al. [32] employed the Local Orientation Code (LOC) to extract the edge information of the ROI and neural networks to learn the characteristics of vehicles. In [33], a multilayer feedforward neural network-based method with a linear output layer was proposed for vehicle detection. Feature extraction by application of Gabor filters was investigated in [34]; Gabor filters provide a mechanism to extract line and edge information by tuning the orientation and changing the scale. In [35], an Adaboost classifier
[36] trained on Haar features was used to classify detections. Papageorgiou and Poggio [37] have
presented by acquisition of Haar wavelet transform for feature extraction and SVMs for
classification. In [12], multiple detectors were built with employing Haar wavelets, Gabor filters,
PCA, truncated wavelets, and a combination of wavelet and Gabor features using SVM and
neural networks classifiers. A comparison of feature and classifier performance was presented,
the conclusion was the feature fusion of the Haar and Gabor features can result in robust
detection. In [38], a similar work was performed. Negri et al. [38] compared the performance of
vehicle detectors with Adaboost classification that was trained using the Haar-like features, a
histogram of oriented gradient features, and a fusion of them. The conclusion was that a feature
fusion can be valuable. A statistical method was used in [39], performing vehicle detection
employing PCA and independent component analysis (ICA) to classify on a statistical model and
its speed was increased by modelling the PCA and ICA vectors with a weighted Gaussian mixture
3. Signal & Image Processing : An International Journal (SIPIJ) Vol.4, No.4, August 2013
33
model. In [40], a general object detection scheme was proposed using PCA and Genetic
Algorithm (GA) for feature extraction and feature subset selection respectively.
In this paper, three different knowledge-based cues are applied in the HG stage to exploit the
advantages of all the cues and to prepare the data for the HV stage. In
the HV stage, a four-step method is used to optimize the features and raise the classification
accuracy. The rest of the paper is organized as follows: in Section 2, the proposed method is
described in detail. Our experimental results and comparisons are presented in Section 3. The last
section presents the conclusion and future work.
2. PROPOSED METHOD
As noted above, the vehicle detection system comprises the two stages of hypothesis generation
and verification; in this section we describe the solutions employed in each stage.
2.1. Hypothesis Generation (HG)
Three different clues of shadow, texture and symmetry are used in this stage. The procedure starts
with detecting shadow underneath a vehicle and using the aspect ratio, the ROI is determined.
The advantage of shadow is that all vehicles will be detected [41,42]. In the next step, the ROI
is tested whether it has enough entropy or not. For this purpose, the rows with low entropy are
removed from the ROI. If too few rows remain, the detected shadow is rejected (the shadow does
not belong to a vehicle). Finally, the horizontal symmetry of the remaining rows of the ROI is
verified. In this step, asymmetric ROI is classified as background (which means the detected
shadow is rejected) and a symmetric ROI is considered as a hypothesis. In this step, besides
separating the symmetric ROIs from the background, the boundaries of the symmetric ROI are
modified, which also improves the performance of the HV stage. This stage was inspired by the
work of [42]. The three mentioned steps are described in the following.
2.1.1. Shadow
Shadow is the first clue searched for in the image. Using shadow, the positions of the present
vehicles can be indicated, based on the fact that the shadow underneath a vehicle is darker
than the observed surface of the road. After detecting the shadow underneath a vehicle, the region
above the shadow is considered as a ROI that will be analyzed further.
The work presented in [7] forms the basis for detecting the shadow underneath a vehicle. There is
no lower boundary for the intensity of the shadow underneath a vehicle, but based on the intensity
distribution of the road surface, an upper boundary can be defined for it, although it will not be
fixed. The value of this threshold depends on the color of the surface of the road and its
illumination.
The intensity distribution of the road surface is estimated without prior knowledge of the
location of the road in the image. This is achieved by means of a simple algorithm for detecting
the free-driving-space, defined as the part of the observed road directly in
front of the camera. To estimate the free-driving-space, edges in the image are first
estimated; then the space is found as the lowest central homogeneous region in the
image delimited by those edges (Figure 1).
We assume that the intensity values of the road surface are normally distributed and estimate the
mean value m and standard deviation σ of the distribution. The upper bound for the shadows can
then be taken as Threshsh = m − 3σ. Although there are more refined approaches for estimating
the free-driving-space, this procedure is sufficient to estimate m and σ for our
application.
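As an illustration, the threshold computation can be sketched in a few lines of Python; `shadow_threshold` and the synthetic `road` patch are hypothetical stand-ins for the free-driving-space samples, not the paper's implementation:

```python
import numpy as np

def shadow_threshold(road_pixels):
    """Estimate the upper bound Thresh_sh = m - 3*sigma for shadow
    intensities from samples of the free-driving-space."""
    m = road_pixels.mean()
    sigma = road_pixels.std()
    return m - 3.0 * sigma

# toy road patch: intensities around 150 with small variation
rng = np.random.default_rng(0)
road = rng.normal(150.0, 5.0, size=(40, 40))
thresh_sh = shadow_threshold(road)
```

Any pixel darker than `thresh_sh` is then a shadow candidate.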
For detecting the shadows underneath vehicles we need to estimate the edges that correspond to
the transitions from the road surface to the dark areas underneath the vehicles. These edges are
usually horizontal. For this purpose, the image points whose intensity
values are lower than the threshold (the dark areas) are found first. Next, the image is searched
for vertical transitions from brighter intensity values to darker ones (scanning the image bottom-
up). This operation can be done efficiently by simply shifting the image vertically and then
subtracting the two images (Figure 2). With this implementation, the obtained image has fewer
horizontal edges than in the case of using a normal edge detector.
The equation which gives the thresholded image is as follows:

$$D_{sh}(u,v)=\begin{cases}1 & \text{if } I(u,v)\le Thresh_{sh}\ \wedge\ I(u,v+1)-I(u,v)\ge Thresh_v\\ 0 & \text{otherwise}\end{cases} \qquad (1)$$
where Threshv is the intensity-value difference for vertical edge estimation. Horizontal
line segments are then discovered across two successive lines. By thresholding the
length of a line segment, the lines belonging to potential vehicles can be roughly
separated, and the remaining line segments are considered background. Finally, the
region above a detected shadow is considered as a ROI. The ROI is estimated using the aspect
ratio, and slightly wider regions are used for the further analysis of each ROI (Figure 2.c).
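A minimal sketch of this shift-and-subtract operation, assuming row index increases downward in the image and using illustrative threshold values:

```python
import numpy as np

def shadow_edge_map(img, thresh_sh, thresh_v):
    """Sketch of Eq. (1): a pixel is marked when it is darker than
    thresh_sh and the pixel one row below it is brighter by at least
    thresh_v (a bottom-up bright-to-dark transition). The difference
    image is obtained by shifting the image vertically and subtracting."""
    img = img.astype(np.int32)
    diff = np.zeros_like(img)
    diff[:-1, :] = img[1:, :] - img[:-1, :]   # I(u, v+1) - I(u, v)
    return (img <= thresh_sh) & (diff >= thresh_v)

# toy scene: bright road (200) with a dark shadow strip in row 2
img = np.full((5, 5), 200, dtype=np.uint8)
img[2, :] = 30
D = shadow_edge_map(img, thresh_sh=50, thresh_v=100)
```

Only the bottom row of the dark strip is marked, which is exactly the road-to-shadow transition sought.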
Figure 1. Estimation procedure for the intensity distribution of the 'free-driving-space'. (a) the
original image. (b) the estimated edges in the image. (c) the rough estimate of the free-driving-
space. (d) the histogram of the intensity distribution of the free-driving-space. A Gaussian is
fitted to this histogram.
Figure 2. Hypotheses generated based on shadow. (a) pixels identified as shadow based on
Threshsh. (b) the shadow points based on Threshsh and Threshv that exhibit a vertical transition
from brighter intensity values to darker ones (scanning the image bottom-up). (c) scanning
the image bottom-up, a ROI is defined above each detected horizontal shadow presented in
Figure 2.b. A ROI is only analyzed further if its size indicates that it may contain a vehicle, in
which case slightly wider regions (green boxes) are used as the ROIs for the further analysis.
2.1.2. Texture
Local entropy, based on the information theory presented by Shannon in [43], can be considered
a measure of information content. In [15], texture was employed on the grounds that the
vehicle detection algorithm intended to focus on the parts in the image with high information
content.
For our application, the entropy is employed to investigate the ROIs detected in the
previous step (shadow). If a vehicle is observed in the ROI, the entropy estimated in the
horizontal direction between the boundaries of the ROI is expected to be high. For this purpose,
the local entropy is computed along the lines of the ROI, and the lines that exhibit low entropy
are removed from the ROI. If the number of remaining lines is lower than a threshold, the
detected shadow line is removed, which means it does not belong to a vehicle. In this case, we do
not remove all of the segment lines in the ROI, since part of another potential vehicle may
exist in the ROI and removing its segment lines could prevent that vehicle from being detected by
the shadow step in the further analysis.
The entropy is defined as follows:

$$H(r_x) = -k\sum_{x} p(r_x)\log p(r_x) \qquad (2)$$

where p(rx) is the probability distribution of rx, which here is the local intensities. The intensity
distribution p(rx) of a line in a ROI is derived from its histogram of intensities.
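A sketch of the per-line entropy test, with `line_entropy` as a hypothetical helper and k = 1, log base 2 as assumed conventions:

```python
import numpy as np

def line_entropy(line, bins=32):
    """Entropy of one ROI line (Eq. 2 with k = 1, log base 2): p(r_x)
    is estimated from the line's intensity histogram."""
    hist, _ = np.histogram(line, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

flat = np.full(64, 128)               # uniform line: no information
textured = np.arange(64) * 4          # spread intensities: high entropy
```

A uniform line scores zero entropy and would be removed from the ROI, while a textured line scores high and is kept.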
2.1.3. Symmetry
Good horizontal symmetry can also be obtained in uniform background areas; therefore,
estimating their symmetry would be useless. The preceding steps of the HG procedure already
reject uniform regions before the symmetry step. Besides this, these steps prepare the data
for the symmetry step in such a way that it can be performed efficiently. Furthermore,
the preceding steps reduce the amount of data that must be analyzed in the symmetry
step. This makes the process of finding the axis and width of the symmetry efficient.
Here, the estimation method for the local horizontal symmetry is explained briefly. The
notation is as follows:
G(u) : a one-dimensional function defined over a line of the ROI (obtained from the
texture step)
wmax : the width of the ROI
w : the width of the symmetry interval
xs : the location of a potential symmetry axis
Any function G(u) can be broken into a sum of its even part Ge(u) and its odd part Go(u):

$$G(u) = G_e(u) + G_o(u), \qquad G_e(u) = \frac{G(u)+G(-u)}{2}, \qquad G_o(u) = \frac{G(u)-G(-u)}{2}, \qquad u \in [-w_{max}/2,\ w_{max}/2] \qquad (3)$$
To shift the origin of the function G(u) to any potential symmetry axis xs, the substitution
u = x − xs is used. The even function of G(x − xs) for a given interval of width w about the
symmetry axis xs is defined as:

$$E(x, x_s, w) = \begin{cases}\frac{1}{2}\big(G(x-x_s) + G(-(x-x_s))\big) & \text{if } |x-x_s| \le w/2\\ 0 & \text{otherwise}\end{cases} \qquad (4)$$
The odd function O(x, xs, w) is defined as:

$$O(x, x_s, w) = \begin{cases}\frac{1}{2}\big(G(x-x_s) - G(-(x-x_s))\big) & \text{if } |x-x_s| \le w/2\\ 0 & \text{otherwise}\end{cases} \qquad (5)$$
For any pair {xs, w}, the relative contributions of E(x,xs,w) and O(x,xs,w) are expressed by their
energy content. However, there is a problem here, since the mean value of the odd function
is always zero, whereas the mean value of the even function is in general some positive number.
Therefore a normalized even function, whose mean value is zero, is defined as follows:

$$E_n(x, x_s, w) = E(x, x_s, w) - \frac{1}{w}\int_{-w/2}^{w/2} E(x, x_s, w)\, dx \qquad (6)$$
The normalized measure of the degree of symmetry S(xs,w) is constructed from En and O:

$$S(x_s, w) = \frac{\int E_n^2(x, x_s, w)\, dx - \int O^2(x, x_s, w)\, dx}{\int E_n^2(x, x_s, w)\, dx + \int O^2(x, x_s, w)\, dx} \qquad (7)$$
S(xs,w) indicates the measure of symmetry for any symmetry axis xs with symmetry
width w, and has the following property:

$$-1 \le S(x_s, w) \le 1 \qquad (8)$$

Furthermore, S = 1 in the case of ideal symmetry, S = 0 for asymmetry and S = −1 for
ideal anti-symmetry.
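A discrete sketch of this symmetry measure for a single ROI line; the window handling and the toy signals are illustrative assumptions:

```python
import numpy as np

def symmetry_measure(g, xs, w):
    """Discrete sketch of Eqs. (3)-(8): about the axis xs, a window of
    width w is split into even and odd parts; the even part is
    mean-normalised (Eq. 6) and the energies are compared (Eq. 7)."""
    half = w // 2
    seg = g[xs - half: xs + half + 1].astype(float)
    even = 0.5 * (seg + seg[::-1])    # E(x, xs, w)
    odd = 0.5 * (seg - seg[::-1])     # O(x, xs, w)
    even -= even.mean()               # En(x, xs, w)
    e, o = np.sum(even ** 2), np.sum(odd ** 2)
    return (e - o) / (e + o) if (e + o) > 0 else 0.0

mirror = np.array([1, 2, 5, 2, 1])    # ideally symmetric about index 2
ramp = np.array([0, 1, 2, 3, 4])      # ideally anti-symmetric about index 2
```

The mirror signal scores S = 1 and the ramp S = −1, matching the two extremes of Eq. (8).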
S(xs,w) provides a measure of symmetry regardless of the width of the interval being
considered. For example, when the same measure of symmetry is estimated for two
different axes of symmetry, one should select the estimate corresponding to the largest width w.
To account for the width, the measure SA(xs,w) is defined:

$$SA(x_s, w) = \frac{w}{2\,w_{max}}\big(S(x_s, w) + 1\big), \qquad w < w_{max} \qquad (9)$$
To detect two-dimensional symmetry, SumSA(xs,w) is computed as follows:

$$SumSA(x_s, w) = \sum_{l=1}^{n} SA_l(x_s, w) \qquad (10)$$

where n is the number of ROI lines. For each symmetry axis, w is increased up to wmax and the
maximum value of SumSA(xs,w) is recorded. After doing this for each symmetry axis, we find
the maximum of the recorded values. The xs and w corresponding to this maximum are the
symmetry axis of the ROI and the interval width, respectively, denoted {x̂s, ŵ}.
Then, we look for a more accurate location of the symmetrical part observed within the ROI and
its corresponding measure of symmetry, since areas close to the boundaries of the ROI often
belong to the background instead of the vehicle. For this purpose, we remove the lines in the
lower and upper quarters of the ROI which exhibit relatively low symmetry values SA(x̂s, ŵ).
Denoting the height of the ROI by h, we determine the upper and lower boundaries of the
refined symmetry region:

$$i_{upper} = \underset{i \in [\frac{3h}{4},\, h]}{\arg\max}\ \frac{1}{i - \frac{3h}{4} + 1}\sum_{j=3h/4}^{i} SA_j(\hat{x}_s, \hat{w}) \quad \text{and} \quad i_{lower} = \underset{i \in [1,\, \frac{h}{4}]}{\arg\max}\ \frac{1}{\frac{h}{4} - i + 1}\sum_{j=i}^{h/4} SA_j(\hat{x}_s, \hat{w}) \qquad (11)$$
Finally, we modify the estimate of the interval width. For this purpose, ŵ is increased up to wmax
and the w corresponding to the maximum value of SA(x̂s,w) is taken as the modified symmetry
width. This provides the smallest box that contains the symmetrical part observed within the
ROI. We call this box Sym.ROI. We use (12) to find the symmetry measure of Sym.ROI, which is
denoted SM.ROI:
$$SM.ROI = \frac{1}{i_{upper} - i_{lower} + 1}\sum_{j=i_{lower}}^{i_{upper}} SA_j(\hat{x}_s, \hat{w}) \qquad (12)$$
If the SM.ROI value is lower than a predefined threshold Threshsym, the detected shadow line is
removed, meaning Sym.ROI contains no vehicle; otherwise Sym.ROI is considered as a
hypothesis. Figure 3 shows a detailed block diagram of the proposed HG stage.
Figure 3. The HG stage (including the shadow, texture and symmetry steps).
2.2. Hypothesis Verification (HV)
The framework in this stage is feature extraction from the hypotheses and their classification
into vehicle and non-vehicle classes. The performance of this stage therefore depends directly on
employing a classifier that is well trained with appropriate features. To achieve this, we
propose the framework shown in Figure 4. Pyramid Histograms of Oriented Gradients
(PHOG) features are extracted from an image dataset as the primitive features, since they have
shown good results in object detection [40], facial expression recognition [44], human motion
classification [45] and image categorization [46]. Then a Gaussian low-pass filter is applied to
the image. Following this, the size of the obtained image is reduced and the PHOG features are
extracted again from this image. This improves the classification accuracy, since it
leads to extracting other effective features from the image. To improve the classification
accuracy further and reduce the dimensionality, we also apply PCA to these PHOG features to
generate what we call the PHOG-PCA feature vector. Then, we divide the samples into two
parts, Training Data and Test Data, as shown in Figure 4.
It is well known that feature weighting is effective for pattern classification, as shown in
[47,48,49]. It is expected that the classification accuracy can be further improved by properly
weighting the first PHOG-PCA features, since some local regions are less relevant for vehicle
detection than others. For this purpose, we use a GA feature weighter. The Training Data is
divided into two parts, data1 and data2. We employ a linear SVM for vehicle/non-vehicle
classification, which is trained with data1; data2 is then used for validation of the
classifier. The classification accuracy is returned to the GA as one of the fitness factors. After
the convergence of the GA, the linear SVM is trained with the Training Data using the Optimum
Weights. Next, we test it with the Test Data to obtain the classification accuracy of the proposed
HV stage. An overview of the HV stage is shown in Figure 4.
Figure 4. the HV stage
2.2.1. Pyramid Histograms of Oriented Gradients (PHOG)
The PHOG descriptor is a spatial pyramid representation of the HOG descriptor and has
achieved good performance in many studies, e.g. [50,51,52]. In this paper, PHOG features are
extracted from vehicle and non-vehicle samples to represent them by their local shape and
spatial layout. As illustrated in Figure 5, the PHOG descriptor consists of a histogram of
orientation gradients over each image sub-region at each resolution.
To extract the PHOG features, the edge contours are extracted using the Canny
edge detector over the entire image, as shown in Figure 5. Following this, each image is divided
into cells at several pyramid levels. The grid at resolution level l has 2^l cells along each
dimension. The orientation gradients are computed using a 3×3 Sobel mask without Gaussian
smoothing. The histogram of edge orientations within each cell is quantized into K bins. Each
bin in the histogram represents the number of edges that have orientations within a certain
angular range. The histograms of the same level are concatenated into one vector. The final
PHOG descriptor for an image is a concatenation of all vectors at each pyramid resolution,
which introduces the spatial information of the image [50]. Consequently, level 0 is represented
by a K-vector corresponding to the K bins of the histogram, level 1 by a 4K-vector, and the
PHOG descriptor of the entire image is a vector with dimensionality $K \sum_{l \in L} 4^l$. The
PHOG descriptor is normalized to sum to unity, which ensures that images with more edges are
not weighted more strongly than others. Figure 5 shows the PHOG descriptor procedure and the
PHOG features of the example images. As can be seen, vehicle images have similar PHOG
representations, whereas non-vehicle images have PHOG representations far enough from the
vehicle ones.
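A minimal sketch of this construction; the random magnitude and orientation arrays stand in for real Canny/Sobel output, so only the descriptor layout and normalisation are illustrated:

```python
import numpy as np

def phog(edge_mag, edge_ori, levels=2, K=20):
    """Minimal PHOG sketch: an orientation histogram (K bins over
    [0, 360)) is built for every cell of a 2^l x 2^l grid at each
    pyramid level l, the histograms are concatenated level by level,
    and the final vector is normalised to sum to unity."""
    H, W = edge_mag.shape
    parts = []
    for l in range(levels + 1):
        n = 2 ** l                                 # cells per dimension at level l
        for i in range(n):
            for j in range(n):
                m = edge_mag[i*H//n:(i+1)*H//n, j*W//n:(j+1)*W//n]
                o = edge_ori[i*H//n:(i+1)*H//n, j*W//n:(j+1)*W//n]
                hist, _ = np.histogram(o, bins=K, range=(0, 360), weights=m)
                parts.append(hist)
    v = np.concatenate(parts).astype(float)
    return v / v.sum() if v.sum() > 0 else v

rng = np.random.default_rng(1)
mag = rng.random((64, 64))                         # stand-in edge magnitudes
ori = rng.uniform(0, 360, size=(64, 64))           # stand-in edge orientations
d = phog(mag, ori, levels=2, K=20)
```

With 3 pyramid levels and K = 20 the descriptor has 20·(1+4+16) = 420 entries, matching the dimensionality formula above.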
Figure 5. Shape spatial pyramid representation. Top row: a vehicle image and grids for levels l = 0 to l = 2;
Below: histogram representations corresponding to each level. The final PHOG vector is a weighted
concatenation of vectors (histograms) for all levels. Remaining rows: another vehicle image and a non-
vehicle image, together with their histogram representations.
2.2.2. Gaussian Low-pass Filter
A Gaussian low-pass filter blurs the image by eliminating its high frequencies, using a Gaussian
function. It is widely used to reduce image noise and detail, and also in computer vision as a
pre-processing step to enhance image structures at different scales. In two dimensions, the
Gaussian low-pass filter can be expressed as:
$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (13)$$
where x is the distance from the origin along the horizontal axis, y is the distance from the
origin along the vertical axis, and σ is the standard deviation of the Gaussian distribution. To
construct the Gaussian low-pass filter, two parameters X and S are used: X indicates the size of
the filter mask and S the sigma, which sets the filter frequency. The larger X is, the bigger the
filter mask; the larger S is, the more frequencies are filtered out. Figure 6 shows the results of
applying the Gaussian low-pass filter to three sample vehicle and non-vehicle images, together
with the PHOG feature representations of the filtered images.
Figure 6. the PHOG features representation for sample vehicle and non-vehicle images after applying
Gaussian low-pass filter. Column (a): the original vehicle and non-vehicle images; Column (b): the results
of applying Gaussian low-pass filter on the images of column (a); Columns (c),(d) and (e): the PHOG
features representation of the corresponding filtered images of column (b).
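Equation (13) can be turned into a discrete filter mask as follows; normalising the mask to unit sum is our assumption, not stated in the text:

```python
import numpy as np

def gaussian_kernel(X, S):
    """Build an X-by-X filter mask from Eq. (13), with S as the standard
    deviation; the mask is normalised so filtering preserves the image mean."""
    ax = np.arange(X) - (X - 1) / 2.0              # offsets from the mask centre
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * S**2)) / (2.0 * np.pi * S**2)
    return k / k.sum()

k = gaussian_kernel(5, 5)   # X=5, S=5: the best setting found in Table 2
```

Convolving the image with this mask gives the blurred image from which the second set of PHOG features is extracted.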
2.2.3. Principal Component Analysis (PCA)
The total number of extracted PHOG features is rather high, and some of these features are
likely irrelevant or redundant. PCA was applied in [53,54] to reduce the dimensionality of
feature vectors. PCA can be defined as the orthogonal projection of the input data onto a lower-
dimensional linear subspace such that the variance of the projected samples is maximized.
Dimension reduction and noise reduction are two advantages of employing PCA. In this paper,
we use this idea to reduce the dimensionality of the feature vectors. The PCA algorithm can be
summarized as follows:
Let {x_i | i = 1, . . ., N} be a set of M-dimensional vectors. We compute the mean vector of the
input vectors, defined as $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$, and then the covariance matrix Σ, defined as follows:

$$\Sigma = \frac{1}{N}\sum_{n=1}^{N} (x_n - \bar{x})(x_n - \bar{x})^T \qquad (14)$$
By solving the eigen-equations of the covariance matrix Σ, the optimum projection matrix U is
obtained:

$$\Sigma U = U\Lambda, \qquad U^T U = I \qquad (15)$$
The PCA scores for any PHOG feature vector can then be computed using the following
equation; we call these new features PHOG-PCA features:

$$y = U^T (x - \bar{x}) \qquad (16)$$
To reduce the dimensionality, we keep only the first d principal axes, which retain the
significant discriminant information.
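The three equations above can be sketched as follows; the random matrix is a stand-in for the PHOG feature vectors:

```python
import numpy as np

def pca_fit(X, d):
    """Sketch of Eqs. (14)-(15): centre the data, form the covariance
    matrix and keep the d leading eigenvectors as the projection U."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / X.shape[0]                 # Eq. (14)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    U = vecs[:, ::-1][:, :d]                     # d leading principal axes
    return mean, U

def pca_project(X, mean, U):
    """Eq. (16): PHOG-PCA scores y = U^T (x - mean)."""
    return (X - mean) @ U

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))                   # stand-in feature vectors
mean, U = pca_fit(X, d=3)
Y = pca_project(X, mean, U)
```

The columns of U are orthonormal (U^T U = I, as in Eq. 15), so projection preserves distances along the retained axes.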
2.2.4. Genetic PHOG-PCA Feature Weighting
GA is a probabilistic optimization algorithm and a branch of evolutionary algorithms. In the past,
it has been used to solve different problems such as object detection [40], face recognition
[55,56], vehicle detection [4], image annotation [57], gender classification [58] and target
recognition [59].
In this study, we utilized the GA for PHOG-PCA feature weighting to reduce the classification
error of the classifier. We formed a population of chromosomes representing the weights of the
features for the two classes of vehicle and non-vehicle and used them in the GA process. The
best chromosome is the one leading to the lowest test classification error. The procedure for
finding the optimum weights via the GA is as follows:
1) Feature weighting encoding: Let the number of PHOG-PCA features be L; each
chromosome is then represented with L genes, where each gene takes a value from the range
[0, 5], which in our study is divided into 10 discrete levels.
2) Calculating the fitness of the chromosomes: We forced weights of 0.5 in value to 0 during
our trials. This resulted in GA-optimized classifiers with reduced feature sets. With some
training data and the non-zero weights, the linear SVM is trained using the chromosome
whose fitness value is to be calculated. Then, some test data is presented to the trained
classifier and the classification accuracy is calculated as a percentage. The fitness function
is as follows:

$$Fitness(c) = CA^4(c) - \alpha\left(\frac{N(c)}{L}\right) \qquad (17)$$

where c is the chromosome and CA(c) is the classification accuracy of the linear SVM
classifier. α represents the trade-off between the two criteria (we use α = 0.01), N(c) is the
number of non-zero weights, and L is the total number of features (fixed at 315 for all
experiments). In our experiments, the classification accuracy is often more than 75%, so we
used CA^4(c) instead of CA(c) because it distinguishes fitter chromosomes from the others
more strongly.
3) Initial population: All the genes of the first chromosome are ‘5’, which means the weights of
all the features are equal. The other chromosomes are generated randomly. In all of our
experiments, we used 1000 generations and a population size of 800. In most cases, the GA
converged in less than 1000 generations.
4) Crossover: We used uniform crossover, in which each bit of the offspring is selected
randomly from the corresponding bits of the parents. The crossover rate used in all of our
experiments was 0.9.
5) Mutation: We chose uniform mutation, in which each bit has the same mutation probability.
The mutation rate used in all of our experiments was 0.08.
6) Elitism: We used an elitism strategy to prevent the best fitness of the next generation from
being smaller than the largest fitness of the current generation: the best 40 chromosomes
are automatically preserved for the next generation.
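The fitness of Eq. (17) can be sketched directly; treating CA(c) as a fraction rather than a percentage is our assumption:

```python
def fitness(ca, n_nonzero, L=315, alpha=0.01):
    """Sketch of Eq. (17). ca is the validation accuracy as a fraction;
    raising it to the fourth power stretches the differences between
    already-accurate chromosomes, while the second term favours
    chromosomes that zero out more feature weights."""
    return ca ** 4 - alpha * (n_nonzero / L)
```

A chromosome with higher accuracy, or the same accuracy with fewer non-zero weights, scores higher, which is exactly the trade-off α controls.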
3. EXPERIMENTAL RESULTS
3.1. Dataset
The vehicle dataset used contains 1646 non-vehicle images and 1648 front- and rear-view
vehicle images. Some of these images are from the MIT vehicle dataset and the Caltech-101
dataset, while the rest were gathered to cover different vehicle types, poses and colors (although
all images were converted to grayscale). Some of the images contain the vehicle along with
other background objects. We converted all images to jpg format and normalized the size of
each image to 128×128 pixels (see Figure 7).
Figure 7. Some vehicle and non-vehicle training sample images
3.2. Experiments
In our experiments, we used the linear SVM classifier and PHOG features extracted from all
collected images with 3 levels of pyramids and 40 orientation bins in the range [0, 360] at each
level. Therefore, the 3-level PHOG descriptor of an image is an 840-vector.
We also used 7-fold cross-validation to estimate both the accuracy and generality of the linear
SVM classifier. All of the examples are partitioned into 7 subsamples; one subsample is
retained as Test Data while the remaining 6 subsamples are used as Training Data. The cross-
validation is repeated 7 times, with each of the 7 subsamples used exactly once as the test data.
It should be mentioned that, to compare the results of the different steps of the HV stage, we
used the same folds for cross-validation in all of the following experiments. In the first
experiment, we applied the PHOG descriptors and the linear SVM classifier. Table 1 shows the
result.
Table 1. the classification results with the PHOG features extracted from the dataset images

Number of Features | True Positive (%) | True Negative (%) | Classification Accuracy (%)
840                | 96.06             | 92.59             | 94.32
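The 7-fold partition described above can be sketched as follows; the function name and seed are illustrative:

```python
import numpy as np

def seven_fold_indices(n, seed=0):
    """Partition n sample indices into 7 folds; each fold serves once as
    the Test Data while the remaining six form the Training Data."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 7)

folds = seven_fold_indices(1646 + 1648)   # non-vehicle + vehicle images
```

Keeping the folds fixed across all experiments, as the text specifies, makes the accuracies of the different HV steps directly comparable.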
In the second experiment, to improve the classification accuracy, the Gaussian low-pass filter is
applied to the dataset images and the size of the obtained images is reduced to 64×64 pixels.
Next, the PHOG features are extracted again from these images. Table 2 compares the results of
classification using the linear SVM classifier and the PHOG features extracted from the dataset
images as well as the filtered images. In this table, K, L, X and S are the number of bins, the
number of pyramid levels, the size of the filter and the standard deviation, respectively.
According to Table 2, employing the PHOG features extracted from both the dataset images and
the filtered images raises the classification accuracy compared with employing only the PHOG
features extracted from the original dataset images. The table also shows that the features
extracted with 3 levels of pyramids always perform better than those using 2 levels. It is also
observed that, when using 3 levels of pyramids, it is often better to use K=20 rather than K=40.
Finally, in this step the best classification accuracy (95.69 %) is obtained using K=20, L=2, X=5
and S=5. In this case, the total number of features is (840+420=)1260, and we use these features
in the next step.
Table 2. the classification results with the PHOG features extracted from the dataset images and also the
filtered images

K  | L | X  | S  | True Positive (%) | True Negative (%) | Classification Accuracy (%)
20 | 2 | 10 | 15 | 97.45             | 93.68             | 95.57
20 | 1 | 10 | 15 | 96.84             | 93.01             | 94.93
20 | 2 | 10 | 10 | 97.51             | 93.68             | 95.60
20 | 1 | 10 | 10 | 96.72             | 92.89             | 94.81
40 | 2 | 10 | 15 | 97.52             | 93.59             | 95.56
40 | 1 | 10 | 15 | 96.97             | 92.95             | 94.96
40 | 2 | 10 | 10 | 97.21             | 94.11             | 95.66
40 | 1 | 10 | 10 | 96.91             | 92.77             | 94.84
20 | 2 | 5  | 5  | 97.51             | 93.86             | 95.69
20 | 1 | 5  | 5  | 97.09             | 92.77             | 94.93
40 | 2 | 5  | 5  | 97.27             | 94.06             | 95.67
40 | 1 | 5  | 5  | 96.84             | 92.89             | 94.87
20 | 2 | 5  | 10 | 97.51             | 93.74             | 95.63
20 | 1 | 5  | 10 | 97.21             | 93.30             | 95.26
40 | 2 | 5  | 10 | 97.33             | 93.80             | 95.57
40 | 1 | 5  | 10 | 97.27             | 93.13             | 95.20
20 | 2 | 5  | 2  | 97.21             | 94.11             | 95.66
20 | 1 | 5  | 2  | 96.78             | 93.01             | 94.90
40 | 2 | 5  | 2  | 97.09             | 94.20             | 95.65
40 | 1 | 5  | 2  | 96.72             | 93.07             | 94.90
It should be mentioned that, to increase the classification accuracy further, the Gaussian low-
pass filter was applied again to the filtered images and the PHOG features were extracted from
them; in this case, however, the classification accuracy did not increase and often even
decreased, so we omit these experiments and their results.
In the third experiment, we applied PCA to the PHOG features to improve the classification
accuracy further and reduce the dimensionality, in two different cases. In the first case, PCA
was applied to the PHOG features extracted from the original dataset images. In the second
case, PCA was applied to the PHOG features extracted from the original dataset images as well
as the filtered images. Table 3 shows the results for the second case with different numbers of
the first PHOG-PCA features. The best classification accuracy in the first case was 96.05 %,
obtained with the first 370 PHOG-PCA features (out of 840 features). Since the second case
gave better results, we omit the other results for the first case.
The results of the two cases show that employing PCA generally improves the classification
accuracy. In the second case (Table 3), with the first 315 PHOG-PCA features (out of 1260
features), we achieve the best classification accuracy (96.84 %), which is better than the best
classification accuracy in the first case. In the second case, besides the increased classification
accuracy, another significant point is that, even though the total number of features is larger
than in the first case, the number of effective features has decreased from 370 to 315. This
shows that employing the features extracted from the filtered images is a good idea, so we use
these features in the next step.
Table 3. the classification results with employing PCA to reduce the dimensionality of the PHOG
descriptors extracted from the dataset images as well as the filtered images

Number of Features | True Positive (%) | True Negative (%) | Classification Accuracy (%)
100                | 97.63             | 94.71             | 96.17
200                | 97.88             | 95.44             | 96.66
250                | 98.12             | 95.38             | 96.75
270                | 98.12             | 95.38             | 96.75
300                | 98.12             | 95.50             | 96.81
315                | 98.18             | 95.50             | 96.84
330                | 98.18             | 95.50             | 96.84
350                | 98.18             | 95.50             | 96.84
400                | 98.18             | 95.50             | 96.84
450                | 98.05             | 95.52             | 96.79
500                | 98.05             | 95.52             | 96.79
600                | 98.05             | 95.46             | 96.76
In the fourth experiment, we used the GA with the configuration described in Section 2.2.4 to
optimize the weights of these features. We again used the same folds as in the previous
experiments for cross-validation; in this case, 5 folds of Training Data (called Data1) were used
for training the linear SVM and 1 fold of Training Data (called Data2) for validation of the
learned classifier, to guide the GA during the weight optimization. After the convergence of the
GA, we trained the linear SVM with the Training Data using the optimum weights and tested it
on the Test Data. Applying the feature weighting reduced the number of features from 315 to
303 and improved the classification accuracy by 0.92 %. Table 4 shows the result.
Table 4. the classification results with employing the GA to weight the PHOG-PCA features
(proposed HV method)

Number of Features | True Positive (%) | True Negative (%) | Classification Accuracy (%)
303                | 98.48             | 97.03             | 97.76
Figure 8 shows the results of applying our proposed method on some sample images.
Figure 8. Applying our proposed method on some sample images to detect on-road vehicles. Red rectangle:
The ROI has been considered as non-vehicle by the entropy step; Yellow rectangle: The ROI has passed the
entropy step but it has been considered as non-vehicle by the symmetry step; Green rectangle: The ROI has
passed the symmetry step too but it has been considered as non-vehicle by the classifier; Blue rectangle:
The ROI has passed all the steps and it has been classified as a vehicle
Figure 9 shows the result of applying our proposed method to another sample image. As can be
seen, the vehicle on the right side has been mistakenly rejected as non-vehicle by the entropy
step, since it lacks rich texture (it has few edges), while a non-vehicle ROI on the left side has
been mistakenly accepted as a vehicle.
Figure 9. Two examples of false detections
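The entropy gate responsible for the false rejection in Figure 9 can be sketched as the Shannon entropy [43] of the ROI's gray-level histogram; the bin count and the rejection threshold below are illustrative assumptions:

```python
import math

def roi_entropy(gray_values, bins=16):
    """Shannon entropy of the ROI's gray-level histogram; a nearly
    uniform (texture-poor) patch scores close to zero."""
    hist = [0] * bins
    for v in gray_values:
        hist[min(v * bins // 256, bins - 1)] += 1
    n = len(gray_values)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

flat = [128] * 64                # uniform patch: entropy 0.0, rejected
varied = list(range(0, 256, 4))  # well-spread gray levels: entropy 4.0
is_candidate = roi_entropy(varied) > 3.0  # assumed threshold
```

A real vehicle with few edges scores low on this measure, which is exactly the failure mode on the right side of Figure 9.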
To further investigate the accuracy of the proposed HV stage, we compared it with the methods
presented in [40, 60]. We implemented the method of [40] and applied it to our dataset images.
In [40], features are first extracted from the dataset images by PCA and the images are then
classified by an SVM classifier. Table 5 shows the classification results for different numbers of
the first PCA features.
Table 5. The classification results using different numbers of the first PCA features

Number of Features   True Positive (%)   True Negative (%)   Classification Accuracy (%)
         50               88.77               87.31                    88.04
        100               89.02               88.56                    88.79
        150               89.62               89.06                    89.34
        200               90.47               89.35                    89.91
        250               91.09               89.80                    90.45
        300               90.47               89.77                    90.12
In [40], after feature extraction, a genetic algorithm is employed to select features from a proper
number of the first PCA features. According to Table 5, the first 250 PCA features show the best
performance, and since other useful features may still lie among the first 250 to 300 features, we
used the GA to select an optimum subset of the first 300 PCA features. Note that we again used
the same folds as in the previous experiments for cross validation. Table 6 shows the result.
Table 6. The classification result using the PCA features selected by the GA

Number of Features   True Positive (%)   True Negative (%)   Classification Accuracy (%)
         82               95.45               94.41                    94.93
As Table 6 shows, the classification accuracy using the features selected by the GA reached
94.93 %, while our proposed method achieves 97.76 %. Our approach therefore increases the
classification accuracy by 2.83 %.
In the following, we also compare our HV stage with the method presented in [60]. The dataset
used in [60] is shown in Table 7.
Table 7. The vehicle dataset of [60]

Dataset      #vehicle images   #non-vehicle images   Source
Training          1154                1154           Caltech 2001
Validation         155                 256           Caltech 1999
Testing            120                 180           GRAZ + INRIA + their own images
In [60], an analysis of ways to integrate features and classifiers is presented for vehicle
recognition in the presence of illumination noise unseen in the training stage. The authors found
that their ensemble method outperforms the alternatives. In this ensemble, Local Receptive Field
(LRF) features are classified by a Multi-Layer Perceptron (MLP) classifier and again by an SVM
classifier, and HOG features are classified by an SVM classifier. The classifiers are then
integrated by a Heuristic Majority Voting (Heuristic MV) approach. To reproduce the effects of
penumbra and white saturation (two effects prone to occur in outdoor environments), two
artificial light transformations were applied to the dataset (see Figure 10), and the classification
accuracy was then measured over the testing datasets. Table 8 summarizes their results; the
highest classification accuracy belongs to the Heuristic MV approach, with an average accuracy
of 91.4 %.
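For reference, a plain majority vote over the three classifier decisions (LRF/MLP, LRF/SVM, HOG/SVM) can be sketched as below; the tie fallback to a designated most-reliable voter is a simplification, not the exact heuristic rule of [60]:

```python
from collections import Counter

def majority_vote(votes, fallback_index=0):
    """Combine vehicle/non-vehicle decisions from several classifiers.
    With three voters a strict majority always exists; on an even split
    we fall back to the designated most-reliable classifier."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return votes[fallback_index]       # tie: trust the fallback voter
    return counts[0][0]

# Three decisions, e.g. from LRF/MLP, LRF/SVM and HOG/SVM:
decision = majority_vote(["vehicle", "vehicle", "non-vehicle"])
```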
Figure 10. 32×32 pixel image samples. In the left column, some original samples under normal condition;
in the middle and right columns, the respective images under white saturation and penumbra
Table 8. The classification accuracy over the testing dataset [60]

Classifier     Normal (%)   Penumbra (%)   Saturation (%)   Average (%)
LRF/MLP           87.0          88.0            85.7            86.9
LRF/SVM           90.0          90.0            83.7            87.9
HOG/SVM           92.0          78.0            84.3            84.76
Heuristic MV      94.3          91.7            88.3            91.4
We applied our proposed HV method to the same datasets; Table 9 shows the results.
Table 9. The classification accuracy of the proposed HV method over the same testing dataset

Classifier            Normal (%)   Penumbra (%)   Saturation (%)   Average (%)
Proposed HV method       96.0          92.7            90.3            93.0
Table 9 shows that the proposed HV method improves the classification accuracy by 1.7, 1.0 and
2.0 % compared to the Heuristic MV approach over the normal, penumbra and white-saturation
datasets respectively. The proposed HV method thus performs better not only on normal images
but also on noisy ones. It achieves an average accuracy of 93.0 %, exceeding the Heuristic MV
approach by 1.6 %.
4. CONCLUSION AND FUTURE WORKS
In this paper, a two-stage approach has been proposed to robustly detect preceding vehicles
(front and rear vehicle views). The first stage is hypothesis generation (HG), which combines
the three clues of shadow, entropy and symmetry without prior knowledge of the road position.
In the hypothesis verification (HV) stage, all the hypotheses are verified by a strong classifier.
For this purpose, we proposed a four-step method for classifying the vehicle candidate images
into vehicle and non-vehicle classes. First, we extracted the PHOG features from each dataset
image, as well as from its Gaussian-filtered version, as the primitive features. Next, we applied
PCA to reduce the dimensionality of the PHOG descriptors and produce the reduced PHOG-PCA
features. Finally, we used the GA to find the optimum weights for these features with respect to
both the classification accuracy and the number of used features, improving their performance
and generalization. Our tests showed that the HG stage can detect the approximate location of
vehicles with good accuracy, and the HV stage achieved 97.76 % classification accuracy on
realistic on-road vehicle images.
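The two-stage pipeline summarized above can be sketched as a skeleton; every callable and threshold here is an illustrative placeholder, not the authors' implementation:

```python
def detect_vehicles(frame, hypotheses, entropy_fn, symmetry_fn, classify,
                    entropy_thr=3.0, symmetry_thr=0.5):
    """HG: shadow-based candidates filtered by entropy and symmetry;
    HV: surviving ROIs verified by the trained classifier."""
    detections = []
    for roi in hypotheses(frame):            # shadow clue proposes ROIs
        if entropy_fn(roi) < entropy_thr:
            continue                         # texture-poor: rejected
        if symmetry_fn(roi) < symmetry_thr:
            continue                         # asymmetric: rejected
        if classify(roi) == "vehicle":       # weighted PHOG-PCA + SVM
            detections.append(roi)
    return detections
```

The three rejection points correspond to the red, yellow and green rectangles of Figure 8, and the surviving ROIs to the blue ones.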
Combining multiple cues can be useful for developing more robust and reliable systems, and we
have used the combination of the shadow, entropy and symmetry cues in the HG stage of this
study. In the past, combining multiple cues has produced promising results (e.g., combinations of
Local Orientation Coding (LOC), entropy and shadow [27], color and shape [61], shape,
symmetry and shadow [62], and motion with appearance [63]). Future work can explore
combining different cues, preferably cues that are fast to compute, to achieve an effective fusion
algorithm.
In the HV stage, previous works have usually concentrated on feature extraction, even though
many of the extracted features are irrelevant, which strongly affects the classification accuracy.
Applying a feature selection or weighting strategy therefore seems beneficial. In future work,
other features such as Gabor, Haar-like and wavelet features can also be extracted, with feature
selection or weighting applied to the concatenation of their normalized vectors; the proposed HV
stage can likewise be applied to this concatenation. In this study, we used the PHOG features as
a spatial descriptor; adding Gabor features as a frequency descriptor could yield better results,
since the image would then be described in both the spatial and frequency domains and we could
benefit from the advantages of both. Alternatively, after extracting features of different types
(e.g. Gabor, PHOG), the features of each type can be classified by separate classifiers whose
outputs are then integrated (by investigating different combination methods) to produce the
final classification.
To complete the proposed system, detection of passing vehicles can be added by acquiring
motion information, and night-time vehicle detection can also be appended [64]. For this
purpose, the lighting condition of the upper region of the image can first be analyzed to
determine whether it is day or night, and then the appropriate algorithm is employed to detect
vehicles in each case. Finally, detection of other traffic objects such as pedestrians, motorcycles
and traffic signs could be added to make the proposed system complete.
REFERENCES
[1] Z. Sun, G. Bebis, and R. Miller, "On-road Vehicle Detection: A review," IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, 2006.
[2] W. Jones, “Building safer cars,” IEEE Spectrum, vol. 39, no. 1, pp. 82–85, Jan. 2002.
[3] F. Han, Y. Shan, R. Cekander, H. S. Sawhney, and R. Kumar, "A Two-Stage Approach to People and
Vehicle Detection with HOG-Based SVM," in PerMIS proceeding, 2006, pp. 133-140.
[4] Z. Sun, G. Bebis, and R. Miller, "On-Road Vehicle Detection Using Evolutionary Gabor Filter
Optimization," IEEE Trans. Intelligent Transportation Systems, vol. 6, no. 2, pp. 125-137, 2005.
[5] Z. Sun, R. Miller, G. Bebis, and D. DiMeo, "A Real-Time Precrash Vehicle Detection System," Proc.
IEEE Int’l Workshop Application of Computer Vision, Dec. 2002.
[6] E. Dickmanns et al., "The Seeing Passenger Car ‘Vamors-P’," in Proc. Int’l Symp. Intelligent
Vehicles, 1994, pp. 24-26.
[7] C. Tzomakas and W. Seelen, "Vehicle Detection in Traffic Scenes Using Shadows," Technical Report
98-06, Institut fur Neuroinformatik, Ruhr-Universitat, Bochum, Germany, 1998.
[8] T. Zielke, M. Brauckmann, and W. V. Seelen, "Intensity and Edge-Based Symmetry Detection with
an Application to Car-Following," Computer Vision, Graphics, and Image Processing: Image
Understanding, vol. 58, no. 2, pp. 177-190, 1993.
[9] A. Bensrhair, M. Bertozzi and A. Broggi, "A Cooperative Approach to Vision-based Vehicle
Detection," in IEEE Intelligent Transportation Systems Conference Proceedings, 2001.
[10] A. Broggi, P. Cerri and P. C. Antonello , "Multi-Resolution Vehicle Detection using Artificial
Vision," in Proceedings of IEEE Intelligent Vehicles Symposium, 2004.
[11] N. Matthews, P. An, D. Charnley and C. Harris, “Vehicle detection and recognition in greyscale
imagery,” Control Eng. Pract., vol. 4, no. 4, pp. 473–479, 1996.
[12] Z. Sun, G. Bebis, and R. Miller, "Monocular Precrash Vehicle Detection: Features and Classifiers,"
IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 2019-2034, 2006.
[13] T. Xiong and C. Debrunner, “Stochastic car tracking with line- and color-based features,” IEEE
Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 324–328, 2004.
[14] D. Guo, T. Fraichard, M. Xie and C. Laugier, “Color modeling by spherical influence field in sensing
driving environment,” in IEEE Intelligent Vehicle Symp., Dearborn, MI, Oct. 2000, pp. 249–254.
[15] T. Kalinke, C. Tzomakas, and W. von Seelen, "A texture-based object detection and an adaptive
model-based classification," in Proc. IEEE Int. Conf. Intelligent Vehicles, Stuttgart, Germany, Oct.
1998, pp. 143–148.
[16] T. Bucher, C. Curio, J. Edelbrunner, et al., “Image processing and behavior planning for intelligent
vehicles,” IEEE Transactions on Industrial Electronics, vol. 50, no. 1, pp. 62–75, 2003.
[17] R. Cucchiara and M. Piccardi, "Vehicle Detection under Day and Night Illumination," in Proc. Int’l
ICSC Symp. Intelligent Industrial Automation, 1999.
[18] J. Firl, M. H. Hoerter, M. Lauer, and C. Stiller, "Vehicle detection, classification and position
estimation based on monocular video data during night-time," In Proceedings of 8th International
Symposium on Automotive Lighting, Darmstadt, Sept. 2009.
[19] H. Mallot, H. Bulthoff, J. Little and S. Bohrer, “Inverse perspective mapping simplifies optical flow
computation and obstacle detection,” Biol. Cybern., vol. 64, no. 3, pp. 177–185, 1991.
[20] M. Bertozzi and A. Broggi, “Gold: A parallel real-time stereo vision system for generic obstacle and
lane detection,” IEEE Trans. Image Process., vol. 7, pp. 62–81, Jan. 1998.
[21] A. Broggi, M. Bertozzi, A. Fascioli, C. Guarino Lo Bianco and A. Piazzi, “Visual perception of
obstacles and vehicles for platooning,” IEEE Trans. Intell. Transp. Syst., vol. 1, pp. 164–176, Sep.
2000.
[22] A. Giachetti, M. Campani and V. Torre, “The use of optical flow for road navigation,” IEEE Trans.
Robot. Autom., vol. 14, pp. 34–48, Feb. 1998.
[23] W. Kruger, W. Enkelmann and S. Rossle, “Real-time estimation and tracking of optical flow vectors
for obstacle detection,” in Proc. IEEE Intelligent Vehicle Symp., Detroit, MI, Sep. 1995, pp. 304–309.
[24] B. Heisele and W. Ritter, “Obstacle Detection Based on Color Blob Flow,” Proc. IEEE Intelligent
Vehicle Symp., 1995, pp. 282-286.
[25] D. Koller, N. Heinze, and H. Nagel, “Algorithmic Characterization of Vehicle Trajectories from
Image Sequence by Motion Verbs,” Proc. IEEE Int’l Conf. Computer Vision and Pattern
Recognition, 1991, pp. 90-95.
[26] P. Parodi and G. Piccioli, “A Feature-Based Recognition Scheme for Traffic Scenes,” Proc. IEEE
Intelligent Vehicles Symp. , 1995, pp. 229-234.
[27] U. Handmann, T. Kalinke, C. Tzomakas, M. Werner, and W. Seelen, “An Image Processing System
for Driver Assistance,” Image and Vision Computing, vol. 18, no. 5, 2000.
[28] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proc.
CVPR, 2001, pp. 511-518.
[29] M. Weber, M. Welling, and P. Perona, “Unsupervised learning of models for recognition,” in Proc.
ECCV, 2000, pp. 18-32.
[30] S. Agarwal, A. Awan, and D. Roth, “Learning to detect objects in images via a sparse, part-based
representation,” IEEE PAMI, 26(11):1475-1490, Nov. 2004.
[31] Q.B. Truong, and B.R. Lee, "Vehicle Detection Algorithm Using Hypothesis Generation and
Verification," in Proc. ICIC (1), 2009, pp.534-543.
[32] C. Goerick, N. Detlev, and M. Werner, “Artificial Neural Networks in Real-Time Car Detection and
Tracking Applications,” Pattern Recognition Letters, vol. 17, pp. 335-343, 1996.
[33] O. L. Junior and U. Nunes, “Improving the generalization properties of neural networks: An
application to vehicle detection,” in Proc. IEEE Conf. Intell. Transp. Syst., Oct. 2008, pp. 310–315.
[34] Thiang, R. Lim, and A. T. Guntoro, “Car Recognition Using Gabor Filter Feature Extraction,”
Circuits and Systems, APCCAS’02. (2), pp.451-455, 2002.
[35] G. Y. Song, K. Y. Lee, and J. W. Lee, "Vehicle detection by edge-based candidate generation and
appearance-based classification," Intelligent Vehicles Symposium, pp. 428–433, June 2008.
[36] P. Viola and M. Jones, "Robust real-time object detection," in International Journal of Computer
Vision, 2001.
[37] C. Papageorgiou and T. Poggio, “A trainable system for object detection,” Int. J. Comput. Vis., vol.
38, no. 1, pp. 15–33, 2000.
[38] P. Negri, X. Clady, S. M. Hanif, and L. Prevost, “A cascade of boosted generative and discriminative
classifiers for vehicle detection,” EURASIP J. Adv. Signal Process., vol. 2008, pp. 1–12, 2008.
[39] C. Wang and J.-J. J. Lien, “Automatic vehicle detection using local features-A statistical approach,”
IEEE Trans. Intell. Transp. Syst., vol. 9, no. 1, pp. 83–96, Mar. 2008.
[40] Z. Sun, G. Bebis, and R. Miller, "Object Detection Using Feature Subset Selection," Pattern
Recognition, vol. 37, pp. 2165-2176, 2004.
[41] M.B. van Leeuwen and F.C.A. Groen, "Vehicle detection with a mobile camera: Spotting midrange,
distant, and passing cars," IEEE Robotics and Automation Magazine, vol. 12, no. 1, pp. 37-43, 2005.
[42] M.B. van Leeuwen, “Motion estimation and interpretation for in-car systems” Ph.D. dissertation,
University of Amsterdam, 2002.
[43] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, 27:379-
423,623-656, 1948.
[44] Z. Li, J.-i. Imai, and M. Kaneko, "Facial-component-based bag of words and phog descriptor for
facial expression recognition," in Proceedings of the 2009 IEEE international conference on Systems,
Man and Cybernetics, ser. SMC’09, 2009.
[45] L. Shao and L. Ji, "A Descriptor Combining MHI and PCOG for Human Motion Classification," In
Proceedings of the ACM International Conference on Image and Video Retrieval (CIVR), Xi’an,
China, July 2010.
[46] X. H. Han and Y. W. Chen, "Image Categorization by Learned PCA Subspace of Combined Visual-
words and Low-level Features," in Fifth International Conference on Intelligent Information Hiding
and Multimedia Signal Processing, 2009.
[47] S. Ozşen, and S. Guneş, "Attribute weighting via genetic algorithms for attribute weighted artificial
immune system (AWAIS) and its application to heart disease and liver disorders problems," Expert
Systems with Applications, vol. 36, pp 386-392, Jan. 2009.
[48] F. Hussein, N. Kharma, R. Ward, "Genetic Algorithm for Feature Selection and Weighting, a Review
and Study," 6th Int. Conf. on Document Analysis and Recognition, Sept. 2001, pp. 1240-1244.
[49] B. T. Ongkowijaya, and X. Zhu, "A New Weighted Feature Approach Based on GA for Speech
Recognition," in 7th International Conference on Signal Processing (ICSP), 2004 , pp. 663–666.
[50] A. Bosch, A. Zisserman, and X. Munoz, "Representing shape with a spatial pyramid kernel," In
Proceedings of the International Conference on Image and Video Retrieval, 2007.
[51] B. Zhang, Y. Song, and S. U. Guan, "Historic Chinese Architectures Image Retrieval by SVM and
Pyramid Histogram of Oriented Gradients Features," in International Journal of Soft Computing, vol.
5, issue 2, pp. 19-28, 2010.
[52] Y. Bai, L. Guo, L. Jin, and Q. Huang, "A Novel Feature Extraction Method Using Pyramid Histogram
of Orientation Gradients for Smile Recognition," in 16th IEEE International Conference on Image
Processing (ICIP), 2009, pp. 3305 – 3308.
[53] T. Kobayashi, A. Hidaka, and T. Kurita, "Selection of Histograms of Oriented Gradients Features for
Pedestrian Detection," in Proc. ICONIP (2), 2007, pp.598-607.
[54] N.G.Chitaliya and A.I.Trivedi, "An Efficient Method for Face Feature Extraction and Recognition
based on Contourlet Transform and Principal Component Analysis using Neural Network,"
International Journal of Computer Applications, vol. 6, No. 4, September 2010.
[55] C. Liu and H. Wechsler, "Evolutionary Pursuit and Its Application to Face Recognition," IEEE Trans.
Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 570-582, June 2000.
[56] G. Bebis, S. Uthiram and M. Georgiopoulos, "Face detection and verification using genetic search,"
Int. J. Artif. Intell. Tools, vol. 9, pp. 225–246, 2000.
[57] T. Zhao, J. Lu, Y. Zhang, and Q. Xiao, "Image Annotation Based on Feature Weight Selection," in
International Conference on Cyberworlds, 2008, pp. 251–255.
[58] Z. Sun, G. Bebis, X. Yuan and S. Louis, "Genetic feature subset selection for gender classification: A
comparison study," in IEEE Int. Workshop Application Computer Vision, Orlando, FL, Dec. 2002, pp.
165–170.
[59] A. J. Katz and P. R. Thrift, "Generating Image Filters for Target Recognition by Genetic Learning,"
IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.16, pp. 906-910, 1994.
[60] L. Oliveira and U. Nunes, "On Integration of Features and Classifiers for Robust Vehicle Detection,"
in IEEE Conference on Intelligent Transportation Systems, 2008, pp. 414-419.
[61] K. She, G. Bebis, H. Gu, and R. Miller, "Vehicle Tracking Using On-Line Fusion of Color and Shape
Features," Proc. IEEE Int’l Conf. Intelligent Transportation Systems, 2004.
[62] J. Collado, C. Hilario, A. de la Escalera, and J. Armingol, "Model-Based Vehicle Detection for
Intelligent Vehicles," Proc. IEEE Intelligent Vehicles Symp., 2004.
[63] J. Wang, G. Bebis, and R. Miller, "Overtaking Vehicle Detection Using Dynamic and Quasi-Static
Background Modeling," Proc. IEEE Workshop Machine Vision for Intelligent Vehicles, 2005.
[64] S. Y. Kim, S. Y. Oh, J. K. Kang, Y. W. Ryu, K. Kim, S. C. Park and K. H. Park, "Front and Rear
Vehicle Detection and Tracking in the Day and Night Times Using Vision and Sonar Sensor Fusion,"
IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS), Edmonton, Canada,
2005, pp.3616-3621.
Authors
Nima Khairdoost received the BS and MS degrees in Computer engineering from
Ferdowsi University of Mashhad and University of Isfahan, Iran in 2008 and 2011,
respectively. His research interests include Image processing, machine vision and
pattern recognition as well as evolutionary algorithms.
S. Amirhassan Monadjemi was born in 1968 in Isfahan, Iran. He received his BS degree
in computer hardware engineering from Isfahan University of Technology in 1991, his MS
degree in computer engineering, machine intelligence and robotics from Shiraz
University, Shiraz, in 1994, and his PhD degree in computer science, image
processing and pattern recognition from the University of Bristol, Bristol, UK, in 2004.
He is now working as assistant professor at the Department of Computer
Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran. His
research interests include image processing, computer vision and pattern recognition,
computer aided learning and physical detection and elimination of viruses.
Kamal Jamshidi received the MS and PhD degrees in electrical engineering from
Anna University of India in 1990 and I.I.T University of India in 2003, respectively.
He currently is an assistant professor in the Engineering Department of University of
Isfahan. His research interests include wireless sensor network and vehicular ad hoc
networks as well as fuzzy systems and microprocessor based systems.