This document summarizes research on applying computer vision algorithms to develop an automatic traffic monitoring system in Vietnam. Key aspects of the system include vehicle detection using differences between frames, vehicle segmentation using edge detection and dilation, vehicle classification based on area and shape, and vehicle tracking across frames to count vehicles and estimate speeds. Experimental results found the system could detect 90-95% of vehicles and estimate speeds accurately 90-93% of the time. The research aims to improve traffic management by providing real-time traffic information.
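The frame-differencing detection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value and the toy frames are assumptions made for the example.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose grayscale intensity changed by more than
    `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # boolean motion mask

# Two toy 4x4 "frames": a bright 2x2 blob (a vehicle) moves one pixel right.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
curr_frame = np.zeros((4, 4), dtype=np.uint8)
prev_frame[1:3, 0:2] = 200
curr_frame[1:3, 1:3] = 200

mask = detect_motion(prev_frame, curr_frame)
print(mask.sum())  # 4: the blob's leading and trailing columns changed
```

In a full pipeline the mask would then be cleaned up with the morphological operations (dilation) mentioned in the summary before segmentation and tracking.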
CANNY EDGE DETECTION BASED REAL-TIME INTELLIGENT PARKING MANAGEMENT SYSTEM - JANAK TRIVEDI
Real-time traffic monitoring and parking are very important aspects of a better social and economic system. A Python-based Intelligent Parking Management System (IPMS) module using a USB camera and the Canny edge detection method was developed. The real-time status of each parking slot is checked simultaneously, both online and via a mobile application, with a message of Parking "Available" or "Not available" for 10 parking slots. In addition, the gate opens automatically when a vehicle enters the parking module and closes when it exits, using a servomotor and sensors. Results are displayed in figures together with a flow chart of the proposed method.
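An edge-based occupancy check of the kind described can be sketched as follows. This is only a hedged illustration: it uses a plain Sobel gradient plus a threshold as a stand-in for the full Canny detector, and the thresholds and toy slot images are assumptions, not values from the paper.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def edge_density(slot):
    """Fraction of interior pixels of a grayscale slot image whose
    gradient magnitude exceeds a fixed threshold."""
    h, w = slot.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = slot[i:i + 3, j:j + 3].astype(float)
            gx[i, j] = (patch * SOBEL_X).sum()
            gy[i, j] = (patch * SOBEL_Y).sum()
    mag = np.hypot(gx, gy)
    return (mag > 100).mean()

def slot_status(slot, occupied_threshold=0.1):
    # A slot containing a car produces many edges; empty asphalt produces few.
    return "Not available" if edge_density(slot) > occupied_threshold else "Available"

empty_slot = np.full((10, 10), 120, dtype=np.uint8)  # uniform asphalt: no edges
occupied = empty_slot.copy()
occupied[3:7, 3:7] = 255                             # a bright "car" introduces edges
print(slot_status(empty_slot), slot_status(occupied))
```

The per-slot "Available" / "Not available" strings mirror the messages the IPMS publishes online and to the mobile application.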
LANE CHANGE DETECTION AND TRACKING FOR A SAFE-LANE APPROACH IN REAL TIME VISI... - cscpconf
Image sequences recorded with cameras mounted in a moving vehicle provide information about the vehicle's environment which has to be analysed in order to really support the driver in actual traffic situations. One type of information is the lane structure surrounding the vehicle. Therefore, driver assistance functions that make explicit use of the lane structure, represented by lane borders and lane markings, are to be analysed. Lane analysis is performed on the road region to remove road pixels; only the lane markings are of interest for the lane detection process. Once the lane boundaries are located, the candidate edge pixels are scanned to continuously update the lane model. The developed system can reduce the complexity of vision data processing and meet real-time requirements.
Vehicle detection and tracking techniques: a concise review - sipij
Vehicle detection and tracking play an important role in civilian and military applications such as highway traffic surveillance, control and management, and urban traffic planning. Vehicle detection on the road is used for vehicle tracking, counting, estimating the average speed of each individual vehicle, traffic analysis, and vehicle categorization, and may be implemented under changing environments. In this review, we present a concise overview of the image processing methods and analysis tools used in building the aforementioned traffic surveillance applications. More precisely, and in contrast with other reviews, we classify the processing methods under three categories for a clearer explanation of the traffic systems.
A Much Advanced and Efficient Lane Detection Algorithm for Intelligent Highwa... - cscpconf
This paper presents an advanced and efficient lane detection algorithm based on Region of Interest (ROI) segmentation. Images are first pre-processed by a top-hat transform for de-noising and contrast enhancement, and the ROI of a test image is then extracted. The Hough transform is used to detect lines in the ROI. The distance between the Hough origin and each lane-line midpoint is estimated, and the lane departure decision is made based on the difference between these distances. The simulations were carried out in Matlab. Experiments show that the proposed algorithm can detect lane markings accurately and quickly.
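The Hough transform step described above can be sketched with a minimal accumulator in NumPy. This is a generic illustration of the technique, not the paper's code; the mask size, angle resolution, and the synthetic vertical lane line are assumptions.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Vote each edge pixel into a (rho, theta) accumulator and return the
    strongest line as (rho in pixels, theta in degrees), using the normal
    form x*cos(theta) + y*sin(theta) = rho."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per angle
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, t

# A vertical "lane line" at x = 5 in a 20x20 edge mask.
mask = np.zeros((20, 20), dtype=bool)
mask[:, 5] = True
rho, theta = hough_lines(mask)
print(int(rho), int(theta))  # parameters of a (near-)vertical line through x = 5
```

The lane-departure logic in the paper then compares distances derived from such (rho, theta) peaks against the lane-line midpoint.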
OpenCV and Matlab based Car Parking System Module for Smart City using Circle ... - JANAK TRIVEDI
Finding parking availability for a specific time period is a very tedious job in urban areas. The Indian government is now focusing on the smart city project and has already published the names of cities selected for upcoming smart city projects. In smart city applications, the Intelligent Transportation System (ITS) plays an important role; within it, finding a parking place, specifically helping car owners avoid wasted time as well as traffic congestion, is going to be very important. In this article, we propose an intelligent car parking system for the smart city using the Circle Hough Transform (CHT).
REVIEW OF LANE DETECTION AND TRACKING ALGORITHMS IN ADVANCED DRIVER ASSISTANC... - ijcsit
Lane detection and tracking is one of the key features of an advanced driver assistance system. Lane detection finds the white markings on a dark road; lane tracking uses the previously detected lane markers and adjusts itself according to a motion model. This paper reviews the lane detection and tracking algorithms developed in the last decade. Several modalities are considered for lane detection, including vision, LIDAR, vehicle odometry, information from the global positioning system, and digital maps. Lane detection and tracking is one of the challenging problems in computer vision. Different vision-based lane detection techniques are explained in the paper, and the performance of the different lane detection and tracking algorithms is compared and studied.
Implementation of a lane-tracking system for autonomous driving using Kalman ... - Francesco Corucci
This project was developed for a Digital Control class. It consists of a system that can identify and track lane marks in video acquired by a webcam. It is interesting how the Kalman filter is used in this context to make lane detection computationally feasible in the small amount of time between two subsequent video frames.
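A lane-tracking Kalman filter of this kind can be sketched as a constant-velocity filter on the lane mark's lateral position. This is a generic textbook sketch, not the project's code: the state model, noise matrices Q and R, and the drift scenario are all assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a lane mark's lateral position.
# State x = [position, velocity]; only position is measured each frame.
F = np.array([[1.0, 1.0],   # position += velocity per frame
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # measurement picks out position
Q = np.eye(2) * 1e-3        # process noise (assumed)
R = np.array([[1.0]])       # measurement noise (assumed)

x = np.array([[0.0], [0.0]])  # initial state
P = np.eye(2) * 100.0         # large initial uncertainty

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Lane mark drifting right by 2 px/frame, measured without noise here.
for frame in range(10):
    z = np.array([[2.0 * frame]])
    x, P = kalman_step(x, P, z)

print(float(x[0, 0]), float(x[1, 0]))  # converges near position 18, velocity 2
```

Because the predict step is cheap, the filter narrows the search window for lane marks in the next frame, which is what makes per-frame detection feasible in the available time budget.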
Automatic Road Sign Recognition From Video - Dr Wei Liu
Road signs provide important information for guiding, warning, or regulating drivers' behaviour in order to make driving safer and easier. Road Sign Recognition (RSR) is a field of applied computer vision research concerned with the automatic detection and classification of traffic signs in traffic scene images acquired from a moving car. Pavement Management Services has developed the first truly spatially registered video system in Australia. The digital video system offers continuous, high-resolution video capture of five different views along the roadway. In this paper, a road sign recognition system (RS2) for the high-resolution roadside video recorded by the PMS system is introduced. The recognition process of RS2 is divided into three distinct parts: detection and location, recognition and classification, and display and record of road sign information. While many attempts at automated sign recognition have been based on the detection of shape patterns, the proposed method for PMS video detects road signs by recognising their patterns in colour space. Based on performance testing of the proposed RS2 on road video collected on the state highway network, the approach is found to be robust and fast for the detection of most road signs commonly found in New Zealand, including warning signs, information signs, regulatory signs, and street signs. The sign recognition results include the exact locations of the road signs, the types of road sign, and the images containing the detected road signs, which can be presented in various formats and used in sign condition evaluation for asset management.
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear... - ijtsrd
Self-driving cars have the potential to revolutionize urban mobility by providing sustainable, safe, convenient, and congestion-free transport. Autonomous driving vehicles have become a trend in the vehicle industry, and many driver assistance systems (DAS) have been presented to support these automatic cars. Vehicle autonomy, as an application of AI, faces several challenges such as infallibly recognizing traffic lights, signs, unclear lane markings, pedestrians, etc. These problems can be overcome by using technological developments in the fields of deep learning and computer vision, enabled by the availability of Graphical Processing Units (GPUs) and cloud platforms. Using deep learning, a deep neural network based model is proposed for reliable detection and recognition of traffic lights (TL). Aswathy Madhu | Sruthy S, "Traffic Light Detection and Recognition for Self Driving Cars using Deep Learning: Survey", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30030.pdf
Paper Url : https://www.ijtsrd.com/engineering/computer-engineering/30030/traffic-light-detection-and-recognition-for-self-driving-cars-using-deep-learning-survey/aswathy-madhu
An Analysis of Various Deep Learning Algorithms for Image Processing - vivatechijri
The various applications of image processing have given it a wider scope when it comes to data analysis. Machine learning algorithms provide a powerful environment for effectively training models to identify the various entities in images and segment them accordingly. One can observe that although image classifiers like Support Vector Machines (SVM) or Random Forest algorithms do justice to the task, deep learning algorithms like Artificial Neural Networks (ANN) and their descendants, notably the well-known and extremely powerful Convolutional Neural Network (CNN), can provide a new dimension to the image processing domain: far higher accuracy and computational power for classifying images and segregating their entities as individual components of the image working region. The major focus is on the Region-based Convolutional Neural Network (R-CNN) algorithm and how well it provides pixel-level segmentation through its improved successors, the Fast, Faster, and Mask R-CNN versions.
Scenario-Based Development & Testing for Autonomous Driving - Yu Huang
Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World, 2020
A Scenario-Based Development Framework for Autonomous Driving, 2020
A Customizable Dynamic Scenario Modeling and Data Generation Platform for Autonomous Driving, 2020
Large Scale Autonomous Driving Scenarios Clustering with Self-supervised Feature Extraction, 2021
Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles, 2021
Systems Approach to Creating Test Scenarios for Automated Driving Systems, Reliability Engineering and System Safety (215), 2021
Traffic State Estimation and Prediction under Heterogeneous Traffic Conditions - IDES Editor
The recent economic growth in developing countries like India has resulted in an intense increase in vehicle ownership and use, as witnessed by severe traffic congestion and bottlenecks during peak hours in most of the metropolitan cities. Intelligent Transportation Systems (ITS) aim to reduce traffic congestion by adopting various strategies, such as providing pre-trip and en-route traffic information to reduce demand, adaptive signal control for area-wide optimization of traffic flow, etc. The successful deployment and the reliability of these systems largely depend on accurate estimation of the current traffic state and quick, reliable prediction to future time steps. At a macroscopic level, this involves the prediction of the fundamental traffic stream parameters, namely speed, density, and flow, in the space-time domain. The complexity of prediction is increased by heterogeneous traffic conditions such as those prevailing in India, with limited lane discipline and complex interactions among different vehicle types. Also, there is no exclusive traffic flow model for heterogeneous traffic conditions that can characterize the traffic stream at a macroscopic level. Hence, the present study explores the applicability of an existing macroscopic model, namely the Lighthill-Whitham-Richards (LWR) model, for short-term prediction of traffic flow on a busy arterial in the city of Chennai, India, under heterogeneous traffic conditions. Both linear and exponential speed-density relations were considered and incorporated into the macroscopic model. The resulting partial differential equations are solved numerically and the results are found to be encouraging. This model can ultimately be helpful for the implementation of ATIS/ATMS applications in heterogeneous traffic environments.
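The numerical solution of the LWR model with a linear speed-density relation can be sketched with a first-order Godunov scheme. This is a textbook illustration under the Greenshields relation v = vf(1 - ρ/ρmax), not the study's own solver; the normalised parameters, grid, densities, and boundary handling are assumptions.

```python
import numpy as np

# Godunov finite-volume scheme for the LWR model  dρ/dt + dq(ρ)/dx = 0
# with the Greenshields (linear speed-density) flux  q = vf * ρ * (1 - ρ/ρ_max).
VF, RHO_MAX = 1.0, 1.0      # free-flow speed and jam density (normalised)
RHO_C = RHO_MAX / 2         # critical density maximising the flux

def flux(rho):
    return VF * rho * (1 - rho / RHO_MAX)

def godunov_flux(rl, rr):
    """Numerical flux across a cell interface for a concave flux function."""
    if rl <= rr:
        return min(flux(rl), flux(rr))
    if rl > RHO_C > rr:          # rarefaction straddling the critical density
        return flux(RHO_C)
    return max(flux(rl), flux(rr))

def step(rho, dt_dx):
    f = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(len(rho) - 1)])
    out = rho.copy()
    out[1:-1] -= dt_dx * (f[1:] - f[:-1])   # boundary cells held fixed
    return out

# Free-flowing traffic (ρ = 0.3) running into a queue (ρ = 0.9): the shock
# between them moves upstream (queue spillback) at the Rankine-Hugoniot speed.
rho = np.where(np.arange(100) < 50, 0.3, 0.9)
for _ in range(40):
    rho = step(rho, dt_dx=0.5)              # CFL: dt/dx * vf <= 1

print(round(rho[10], 2), round(rho[90], 2))  # far field unchanged: 0.3 0.9
```

With q(0.3) = 0.21 and q(0.9) = 0.09, the shock speed is (0.09 - 0.21)/(0.9 - 0.3) = -0.2, so the back of the queue propagates upstream, which the scheme reproduces.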
Smart Wireless Surveillance Monitoring using RASPBERRY PI - Krishna Kumar
This slide deck describes a smart surveillance monitoring system using Raspberry Pi. It includes full details of the procedure, component descriptions, and screenshots.
Monitoring traffic in urban areas is an important task for intelligent transport applications to alleviate traffic problems like traffic jams and long trip times. Traffic flow in urban areas is more complicated than on highways, due to the slow movement of vehicles and crowded traffic flows. In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking, and vehicle classification. In the detection phase, background subtraction based on the mixture of Gaussians (MoG) algorithm is used to detect the moving vehicles, and a shadow removal algorithm is developed to improve detection by eliminating undesired detected regions (shadows). After detection, the vehicles are tracked until they reach the classification line, where the vehicle dimensions are used to classify them into three classes (cars, bikes, and trucks). The system maintains one counter per class; when a vehicle is classified, the corresponding counter is incremented by one. The counting results can be used to estimate the traffic density at intersections and adjust the timing of the traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras, and the results obtained demonstrate the robustness and accuracy of the proposed system.
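Gaussian background modelling of the kind described can be sketched as follows. As a hedged simplification, this keeps a single running Gaussian per pixel rather than the full mixture of Gaussians; the learning rate, the 2.5-sigma rule, and the toy frames are assumptions.

```python
import numpy as np

class RunningGaussianBackground:
    """Single-Gaussian-per-pixel stand-in for an MoG background model."""
    def __init__(self, first_frame, alpha=0.05):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha = alpha

    def apply(self, frame):
        frame = frame.astype(float)
        d = frame - self.mean
        foreground = d ** 2 > (2.5 ** 2) * self.var  # deviates by > 2.5 sigma
        # Update the model only where the pixel still looks like background.
        bg = ~foreground
        self.mean[bg] += self.alpha * d[bg]
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        return foreground

# A static 8x8 scene; a bright "vehicle" appears in the final frame.
frames = [np.full((8, 8), 100, dtype=np.uint8) for _ in range(20)]
vehicle = frames[-1].copy()
vehicle[2:5, 2:5] = 220

model = RunningGaussianBackground(frames[0])
for f in frames[:-1]:
    model.apply(f)
mask = model.apply(vehicle)
print(mask.sum())  # 9: the 3x3 vehicle region is flagged as foreground
```

A true MoG model keeps several weighted Gaussians per pixel so it can absorb repetitive background motion; the shadow-removal step described in the paper would then prune dark foreground pixels from this mask.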
Neural Network based Vehicle Classification for Intelligent Traffic Control - ijseajournal
Nowadays, the number of vehicles has increased and traditional traffic control systems cannot meet the resulting needs, which has led to the emergence of Intelligent Traffic Control Systems. These systems improve control and urban management and increase the confidence index on roads and highways. The goal of this article is vehicle classification based on neural networks. In this research, a fixed camera located fairly close to the road surface is used to detect and classify the vehicles. The algorithm has two general phases: first, moving vehicles are extracted from the traffic scene using image processing techniques, including background removal, edge detection, and morphology operations. In the second phase, vehicles near the camera are selected and their specific features are processed and extracted. These features are fed to the neural network as a vector, and the outputs determine the vehicle type. The presented model can classify vehicles into three classes: heavy vehicles, light vehicles, and motorcycles. The results demonstrate the accuracy of the algorithm and its high level of performance.
A Method for Predicting Vehicles Motion Based on Road Scene Reconstruction an... - ITIIIndustries
The suggested method helps predict vehicle movement in order to give the driver more time to react and avoid collisions on the road. The algorithm dynamically models the road scene around the vehicle based on data from the onboard camera. All moving objects are monitored and represented by the dynamic model on a 2D map. After analyzing every object's movement, the algorithm predicts its possible behavior.
A VISION-BASED REAL-TIME ADAPTIVE TRAFFIC LIGHT CONTROL SYSTEM USING VEHICULA... - JANAK TRIVEDI
In India, traffic control management is a difficult task due to the increasing number of vehicles on the same infrastructure and systems. In the smart-city project, the Adaptive Traffic Light Control System (ATLCS) is one of the major research concerns for Intelligent Transportation System (ITS) development, aiming to reduce traffic congestion and accidents, create a healthy environment, etc. Here, we propose a Vehicular Density Value (VDV) based adaptive traffic light control method for 4-way intersection points using a selection of rotation, an area of interest, and a Statistical Block Matching Approach (SBMA). Graphical User Interface (GUI) and hardware-based results are shown in the results section, where the normal traffic light control system is compared with the proposed adaptive system. The same results are verified on a hardware (Raspberry Pi) device with vehicles of different sizes, colors, and shapes using the same method.
Traffic Violation Detection Using Multiple Trajectories of Vehicles - IJERA Editor
In general, lane change violations are likely to happen before the stop line, inside the red-light violation detection region. A system that can detect both red-light and lane-change violations is therefore very useful for traffic management. Violations are detected from the vehicles moving in the region of interest, combined with an evaluation of the trajectory behaviour of multiple vehicles using the mean square displacement (MSD). We use image processing techniques alone to detect the traffic signal, without the help of any other system. The experimental results show that the algorithm detects both violations with high accuracy.
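The mean square displacement statistic used above can be computed directly from a vehicle's tracked positions. This is the standard MSD definition applied to a toy trajectory, not the paper's implementation; the sample track is an assumption.

```python
import numpy as np

def mean_square_displacement(track, max_lag=None):
    """MSD(tau) = mean over t of |p(t+tau) - p(t)|^2 for a list of (x, y) points."""
    track = np.asarray(track, dtype=float)
    n = len(track)
    max_lag = max_lag or n - 1
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = track[lag:] - track[:-lag]          # displacements at this lag
        msd[lag - 1] = (d ** 2).sum(axis=1).mean()
    return msd

# A vehicle moving straight at constant speed: MSD grows quadratically with lag,
# while a vehicle weaving between lanes departs from this smooth profile.
straight = [(t, 0.0) for t in range(10)]
msd = mean_square_displacement(straight, max_lag=3)
print(msd)  # [1. 4. 9.]
```

Comparing a trajectory's MSD profile against the smooth quadratic expected for lane-keeping motion is one way such a statistic can separate normal driving from abrupt lane-change behaviour.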
Real Time Object Identification for Intelligent Video Surveillance Applications - Editor IJCATR
Intelligent video surveillance has emerged as a very important research topic in the computer vision field in recent years. It is well suited for a broad range of applications, such as monitoring activities at traffic intersections to detect congestion and predict the traffic flow. Object classification in video surveillance is a key component of smart surveillance software. Two robust methodologies and algorithms for people and object classification in automated surveillance systems are proposed in this paper. The first method uses a background subtraction model for detecting object motion: background subtraction and image segmentation based on morphological transformation are used for tracking and object classification on highways. This algorithm applies erosion followed by dilation on the frames and segments the image while preserving important edges, which improves the adaptive background mixture model and makes the system learn faster and more accurately. The second method adopts object detection without background subtraction, because the objects to be detected are static. Segmentation is done by a bounding-box registration technique, and classification is then performed with a multiclass SVM using the edge histogram as features; the edge histograms are calculated for various bin values in different environments. The results obtained demonstrate the effectiveness of the proposed approach.
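The "erosion followed by dilation" step above is morphological opening, which can be sketched in plain NumPy. This is a generic illustration with an assumed 3x3 structuring element and toy mask, not the paper's code.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element (zero-padded)."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

# Opening (erosion then dilation) removes isolated noise pixels from a
# foreground mask while keeping larger blobs roughly intact.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True   # a 5x5 "vehicle" blob
mask[0, 8] = True       # a single noise pixel
opened = dilate(erode(mask))
print(opened.sum())     # 25: the noise pixel is gone, the blob survives
```

This is why opening is applied to background-subtraction masks before tracking: single-pixel noise disappears, while vehicle-sized blobs pass through essentially unchanged.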
An automated traffic sign board classification system is one of the key technologies of Intelligent Transportation Systems (ITS). Traffic surveillance systems are becoming more and more important with growing urban scale and an increasing number of vehicles. This paper presents an intelligent sign board classification method based on blob analysis in traffic surveillance. Processing is done in three main steps: moving object segmentation, blob analysis, and classification. A sign board is modelled as a rectangular patch and classified via blob analysis. By processing the blobs of sign boards, meaningful features are extracted. Tracking moving targets is achieved by comparing the extracted features with training data. After classifying the sign boards, the system notifies the user in the form of alarms and sound waves. The experimental results show that the proposed system can provide real-time, useful information for traffic surveillance.
VEHICLE CLASSIFICATION USING THE CONVOLUTION NEURAL NETWORK APPROACH
JANAK TRIVEDI
We present vehicle detection and classification using the Convolution
Neural Network (CNN) deep learning approach. Automatic vehicle
classification for traffic surveillance video systems is a challenging task for the Intelligent
Transportation System (ITS) of a smart city. In this article, three vehicle
classes are considered: bike, car, and truck, using around 3,000 bike,
6,000 car, and 2,000 truck images. A CNN can automatically learn and extract
the distinguishing features of the different vehicle classes without manual feature selection.
The accuracy of the CNN is measured in terms of the confidence values of the detected
objects. The highest confidence value is about 0.99, for the bike category.
Automatic vehicle classification supports building an
electronic toll collection system and identifying emergency vehicles in the traffic.
Vehicle Tracking Using Kalman Filter and Features
sipij
Vehicle tracking has a wide variety of applications. The image resolution of the video available from most traffic camera systems is low, and when tracking multiple objects it is often hard to distinguish them from one another because of their similarity. In this paper we describe a method for tracking multiple objects, where the objects are vehicles; the number of vehicles is unknown and varies. We detect all moving objects, and for vehicle tracking we use a Kalman filter together with colour features and the frame-to-frame distance of each vehicle. The method can therefore distinguish and track all vehicles individually, and the proposed algorithm can be applied to multiple moving objects.
Automatic vs. human question answering over multimedia meeting recordings
Lê Anh
Information access in meeting recordings can be assisted by
meeting browsers, or can be fully automated following a
question-answering (QA) approach. An information access task
is defined, aiming at discriminating true vs. false parallel statements
about facts in meetings. An automatic QA algorithm is
applied to this task, using passage retrieval over a meeting transcript.
The algorithm scores 59% accuracy for passage retrieval,
while random guessing is below 1%, but only scores 60% on
combined retrieval and question discrimination, for which humans
reach 70%–80% and the baseline is 50%. The algorithm
clearly outperforms humans for speed, at less than 1 second
per question, vs. 1.5–2 minutes per question for humans. The
degradation on ASR compared to manual transcripts still yields
lower but acceptable scores, especially for passage identification.
Automatic QA thus appears to be a promising enhancement
to meeting browsers used by humans, as an assistant for
relevant passage identification.
ICMI 2012 Workshop on gesture and speech production
Lê Anh
In these slides, we present a common gesture-speech framework for both virtual agents (ECAs, IVAs, virtual humans) and physical agents such as humanoid robots. The framework is designed for different embodiments, so that its processes are independent of any specific agent.
Applying Computer Vision to Traffic Monitoring System in Vietnam
Proceedings of the First Young Vietnamese Scientists Meeting (YVSM ‘05), Nha Trang,
June 12-16, 2005
LE QUOC ANH, NGUYEN NGOC ANH
Union for Science-Production of New Technology, NEWSTECPRO,
17 Hoang Sam Street, Cau Giay District, Hanoi City, Vietnam
Telp: (04) 7561564 – 0912.643289
Email: quocanh@viettel.com.vn; ngocanhnguyen@voila.fr
Abstract: The purpose of this paper is to present promising results of our research on
applying real-time image processing algorithms to the automatic traffic surveillance system
in Vietnam.
The main functions of this system are counting the number of vehicles passing along a road
during a time interval, classifying the vehicles, and estimating the speed of the observed
traffic flow, from traffic scenes acquired by a camera in real time.
The report concentrates on algorithms that have been applied in automatic traffic
surveillance systems in other countries; here they are analysed and improved to reach the
required accuracy and to better suit traffic conditions in Vietnam.
Lastly, in order to illustrate our research more precisely, a complete automatic traffic
surveillance application, tested with a large number of real traffic video sequences in Hanoi
city in both on-line and off-line mode, is shown. The results are quite promising: 90 to 95%
of vehicles are detected and counted, and the speed of 90 to 93% of vehicles is estimated
well, depending on the situation of the observed traffic flow.
1. Introduction
In order to solve the problem of traffic management, it is indispensable to have
information on the density and the speed of the various kinds of vehicles on the road. In
some countries, many kinds of sensor technology are applied to extract such information,
for example radar, microwaves, pneumatic tubes, loop detectors and, more recently,
computer vision. The computer vision approach is particularly attractive: it is non-intrusive
and is not affected by infrastructure factors such as the road surface or the sewerage.
Unfortunately, none of these sensor systems can be applied in the Vietnamese situation,
except at road toll stations. The reason lies in the characteristics of Vietnamese traffic:
vehicles do not move in exact lanes at specific distances, and the number of motorbikes and
bicycles is many times larger than the number of cars, buses or vans. This also explains
why some software from developed countries cannot be used in Vietnam.
The purpose of this paper is to propose building a vision-based traffic monitoring
system for Vietnam which, in the first place, opens an enormous potential for building
vehicle counting and identification equipment and, in the second place, helps solve the
problem of traffic management in Vietnam.
2. System Overview
A real-time computer vision application for traffic surveillance is based on a
computer-aided counting system that performs image processing algorithms to extract the
necessary information from traffic scenes acquired with cameras.
The up-to-date information extracted in real time will facilitate traffic management tasks
such as vehicle counting, vehicle speed, vehicle path, vehicle density, and vehicle
classification.
The general model of a vision-based traffic surveillance system is illustrated in Figure 1.
Figure 1. Model of vision-based traffic monitoring system
Such a traffic surveillance system includes several processing steps, as in the block
diagram below (Figure 2):
Camera → Digitizer → Pre-image processing → Detection → Segmentation → Classification → Tracking → Traffic information
Figure 2. Block diagram of the system
The CCD camera provides live video, which is digitized and fed into the computer; the
computer may well contain some special-purpose hardware to cope with the extremely high
data rate (~10 MBytes/s). Computer vision algorithms then perform vehicle detection,
segmentation, tracking and classification. A single camera is able to monitor more
than one lane of traffic along several hundred metres of road. A vision system could
theoretically have the same powers of observation as a human observer, but without the
detrimental effects of tiredness or boredom. In fact, a large number of cameras are already
installed on road networks for surveillance purposes in other developed countries.
In this paper, we focus on key stages of the system, namely vehicle segmentation,
vehicle classification and vehicle tracking.
3. Vehicle Detection and Segmentation
Many vision-based traffic monitoring systems rely on motion detection to segment
moving regions from the image. If the regions have suitable characteristics, they are
deemed to be vehicles and may then be counted or tracked as desired. There are several
well established techniques for motion detection, two of which are frequently used in road
monitoring systems: Frame differencing and Feature based motion detection.
In theory, there are three different kinds of efficient frame differencing algorithms to
extract moving points from image sequences: difference with background, two-frame
difference and three-frame difference (in other words, double-difference). The first
exploits object motion with respect to the background, whereas the second and
third compute object motion with respect to previous positions, based on the
hypothesis that some object points overlap in two consecutive frames.
Feature-based motion detection works by tracking prominent features from frame to
frame. The first step is to identify suitable features: areas of the image that are
distinguishable from their surroundings. Corners are a frequently used feature. After the
features have been identified in all frames, the second step is a matching procedure that
finds the correspondence between these points in consecutive frames. The search for the
correct correspondence is not trivial, and iterative techniques are often used.
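As a rough illustration of this matching step (a sketch only, not the method used in any of the systems discussed; the function name and search parameters are invented), a feature patch can be matched between consecutive frames by an exhaustive sum-of-squared-differences search:

```python
import numpy as np

def match_patch(patch, frame, top_left, search_radius):
    """Find the position in `frame` whose window best matches `patch`
    (minimum sum of squared differences), searching within
    `search_radius` pixels of `top_left`."""
    ph, pw = patch.shape
    fy, fx = top_left
    best, best_pos = None, top_left
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = fy + dy, fx + dx
            if y < 0 or x < 0 or y + ph > frame.shape[0] or x + pw > frame.shape[1]:
                continue  # window would fall outside the frame
            window = frame[y:y + ph, x:x + pw].astype(np.int32)
            ssd = np.sum((window - patch.astype(np.int32)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

Iterative refinement, as the text notes, would narrow the search around the best candidate instead of scanning the full window.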
We have considered these algorithms. As a result we have found two algorithms
realizable: difference with background (or method using image reference) and three-frame
difference. Both of them are based on the fact that the difference between two frames
captured at different times will reveal regions of motion.
In the difference with background method, a difference image, d(i,j) is generated by
calculating the absolute difference between two frame f1, f2, and then thresholding the
result.
d(i, j) = 1 if |f1(i, j) − f2(i, j)| > T (where T is a suitable threshold), and 0 otherwise.   (1)
Here, f1 is the incoming frame and f2 is a reference or background frame. The reference
frame is merely an image of scene with no vehicles. If the incoming frame contains no
vehicles then it will be identical to the reference frame and the difference frame will
contain only zeros. However, if the incoming frame does contain vehicles, then these will
be shown in the difference frame.
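Equation (1) can be sketched in a few lines of NumPy (a minimal illustration, not the authors' code; the default threshold value is an arbitrary assumption):

```python
import numpy as np

def background_difference(frame, background, threshold=25):
    """Binary motion mask d(i, j) from Eq. (1): 1 where the absolute
    difference between the frame and the reference exceeds T."""
    # cast to a signed type so the subtraction cannot wrap around
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```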
To explain the double-difference algorithm, consider the sequence of frames {In}. The
difference image Dn is defined as the absolute frame difference:
Dn(i, j) = | In(i, j) − In−1(i, j) |   (2)
The double-difference image is obtained by performing a logical AND between pixels
belonging to two subsequent difference images, each thresholded by a threshold T:
DDn(i, j) = 1 if (Dn+1(i, j) > T) ∧ (Dn(i, j) > T), and 0 otherwise.   (3)
The purpose of the threshold, T, is to reduce the effects of noise and changes in scene
illumination.
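Equations (2) and (3) together can be sketched as follows (again a minimal NumPy illustration with an assumed threshold, not the authors' implementation):

```python
import numpy as np

def double_difference(frame_prev, frame_curr, frame_next, threshold=25):
    """Double-difference image DD_n from Eqs. (2)-(3): a pixel is marked
    as moving only if it changed in BOTH consecutive frame pairs, which
    suppresses one-frame noise and illumination flicker."""
    d_n  = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    d_n1 = np.abs(frame_next.astype(np.int16) - frame_curr.astype(np.int16))
    return ((d_n > threshold) & (d_n1 > threshold)).astype(np.uint8)
```

Note that motion is detected on the central frame, as the text observes, because only pixels that differ from both the previous and the next frame survive the AND.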
We have realized that the method using a reference image is good for extracting entire
vehicles, but it is difficult to initialize the reference image. Furthermore, it needs a
method of dynamically updating the reference frame so that it adapts to changes in scene
illumination and other problems.
After testing, we have decided to choose the one based on the difference of three
consecutive frames because we have found the double-difference method particularly
robust to noise due to camera movements and changes in scene illumination. Moreover, it
detects motion on the central frame In, where the image luminance gradient will be
computed.
However, this algorithm does not yet detect the entire vehicle. Therefore, in order to
improve the accuracy of vehicle detection, we propose a small technique that combines
it with edge detection and a dilation operator, to recover vehicle borders more
completely and hence extract vehicles from the background more exactly.
To clarify, a pixel is assigned to a vehicle if it results from the double-difference
computation, or if it is obtained from edge detection and is a neighbour of
double-difference pixels. Lastly, the dilation operator is applied to enlarge the
boundaries of the regions of moving pixels.
We have tested this and achieved very good results: many small parts of a vehicle's
border cannot be detected by the double-difference alone, but with this improvement it
becomes quite possible to detect them.
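The combination described above can be sketched as follows. This is a simplified pure-NumPy illustration: a gradient-magnitude test stands in for whichever edge detector the authors used, and the thresholds and structuring-element size are assumptions.

```python
import numpy as np

def dilate(mask, size=3):
    """Binary dilation with a size x size square structuring element."""
    pad = size // 2
    padded = np.pad(mask, pad)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def refine_motion_mask(dd_mask, frame, edge_threshold=30):
    """Keep double-difference pixels, add edge pixels that neighbour
    them, then dilate to close small gaps in the vehicle borders."""
    gy, gx = np.gradient(frame.astype(np.float32))
    edges = (np.hypot(gx, gy) > edge_threshold).astype(np.uint8)
    near_dd = dilate(dd_mask)          # neighbourhood of motion pixels
    combined = dd_mask | (edges & near_dd)
    return dilate(combined)            # enlarge the moving regions
```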
4. Vehicle Classification
After obtaining an initial mask for a moving object from the previous steps, we may have to
pre-process the mask. The mask is normally affected by salt-and-pepper noise, which we
remove with a median filter.
The purpose of this stage is to identify the vehicle: to know whether it is a car, a
motorbike, a bicycle, or another kind of vehicle, before it is tracked. In this system,
we classify vehicles into only two classes: two-wheel vehicles (motorbike, bicycle, etc.)
and four-wheel vehicles (car, truck, bus, etc.). We use two main features for
classification: the area and the shape of the unknown blobs extracted in the previous
step. For example, if a blob's area lies between the lower and upper area thresholds for
a motorbike, its shape is then taken into consideration to decide whether it is a
motorbike or not.
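A minimal sketch of this two-feature rule (the area thresholds and aspect-ratio cut-off here are invented for illustration; the paper does not state its actual values):

```python
def classify_vehicle(area, width, height,
                     motorbike_area_range=(200, 1500)):
    """Classify a blob as two-wheel or four-wheel using its area and a
    simple shape cue (two-wheel vehicles appear tall and narrow)."""
    low, high = motorbike_area_range
    if low <= area <= high:
        # shape check: aspect ratio separates a motorbike from a
        # small or distant four-wheel vehicle of similar area
        if height / width > 1.2:
            return "two-wheel"
        return "four-wheel"
    return "four-wheel" if area > high else "noise"
```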
5. Vehicle Tracking
The aim of vehicle tracking in this system is to solve two problems. The first is
counting: tracking prevents one vehicle from being counted many times. The second is
vehicle speed, which is based on the total time a vehicle is tracked and the length of the
observed road section.
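The speed estimate follows directly: if a vehicle stays tracked for n frames over a road section of known length, its average speed is the length divided by the elapsed time. A sketch (the section length is hypothetical; 15 frames/s matches the experimental setup reported in Section 6):

```python
def estimate_speed_kmh(section_length_m, frames_tracked, fps=15):
    """Average speed over a road section of known length, from the
    number of frames the vehicle stayed inside the camera zone."""
    elapsed_s = frames_tracked / fps
    return section_length_m / elapsed_s * 3.6  # m/s -> km/h
```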
Whenever a vehicle is in the camera zone, it is detected, segmented and tracked until it
disappears, i.e. leaves the zone. In order to track an object, a relation is needed
between the vehicles in the previous frame and the vehicles in the current frame; in
other words, it must be determined whether the vehicles in two consecutive frames are
the same object or not.
The proposed algorithm for this problem is as follows:
Step 0: Create an empty database D which will contain the vehicles. Each vehicle
corresponds to an entity with characteristics such as position, shape, dimension, type, total
tracked time, etc.
Step 1: Extract the vehicles from the incoming frame.
Step 2: Perform vehicle classification.
Step 3: Compare each extracted vehicle to the vehicles in D. If there is no match, the
vehicle has just entered the camera zone, so it is saved in D with its characteristics. If a
match is found, the vehicle is moving within the camera zone, so it is flagged as tracked
and its information in D (position, time, etc.) is updated.
Step 4: If objects in D have not been updated, the current frame does not contain these
vehicles, and they can be concluded to have left the camera zone. At that point,
conclusions about these vehicles can be produced, and they are erased from D.
Step 5: Repeat from Step 1 for the next frame.
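The steps above can be sketched as a simple nearest-neighbour association loop. This is a simplification of the database D: real matching would also compare shape, dimension and type, and the identifier scheme and distance threshold are invented.

```python
import math

def update_tracks(tracks, detections, frame_idx, max_dist=30.0):
    """One iteration of Steps 1-4: match each detected centroid (x, y)
    to the nearest stored vehicle, create entries for new vehicles, and
    return the ids of vehicles that left the camera zone."""
    matched = set()
    for det in detections:
        best_id, best_d = None, max_dist
        for vid, v in tracks.items():
            d = math.dist(det, v["pos"])
            if d < best_d and vid not in matched:
                best_id, best_d = vid, d
        if best_id is None:
            # Step 3a: a new vehicle entered the camera zone
            vid = len(tracks)  # simplistic id scheme, for illustration
            tracks[vid] = {"pos": det, "first": frame_idx, "last": frame_idx}
            matched.add(vid)
        else:
            # Step 3b: known vehicle, update its entry in D
            tracks[best_id].update(pos=det, last=frame_idx)
            matched.add(best_id)
    # Step 4: vehicles not updated this frame have left the zone
    gone = [vid for vid, v in tracks.items() if v["last"] < frame_idx]
    for vid in gone:
        del tracks[vid]
    return gone
```

The "first" and "last" frame indices of a departed vehicle give exactly the tracked time needed for the speed estimate described above.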
6. Experimental Results
The experimental system includes a digital camera operating at 15 frames/s and a
computer with a Pentium(R) 4 CPU at 2.4 GHz and 128 MB of RAM. The frame size is
320x240 pixels. The camera is linked to the computer via the USB port.
Video data recorded on some streets in Hanoi city is transferred to the computer. Figure
4 illustrates the interface of the program as well as the result screen of extracted vehicles.
Comparing the results of the program after 20 minutes of operation with the results of
manual observation, we conclude that, in good light conditions, 90% to 95% of motorbikes
and cars are detected and counted, and vehicle speed is estimated with 90% to 93% accuracy.
7. Conclusion
The above presents some initial results of a vision-based traffic monitoring system. This
research direction is well suited to the Vietnamese situation. We will continue to
improve this system under different conditions of road, weather, and traffic intensity in
future work.