This document discusses the development of an efficient forward collision avoidance system for autonomous vehicles using a low-cost camera and embedded processor. It proposes algorithms for vehicle detection using HOG features and SVM classification optimized for embedded processors. It also details an approach for distance estimation using structure from motion that handles scale drift by estimating the ground plane and camera height. The algorithms were implemented on a Texas Instruments TDA2x processor by partitioning tasks between its DSP, ARM, and vision acceleration cores to achieve real-time performance for collision avoidance. Initial testing results indicate the system meets requirements for forward collision warning.
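The scale handling described above ultimately ties image measurements to metric distance through the estimated camera height; for the flat-road case this reduces to simple pinhole geometry. A minimal sketch under invented calibration values (the focal length, camera height, and principal point below are illustrative, not the paper's calibration):

```python
# Flat-road pinhole model for distance to a point on the ground plane.
# All parameters below are invented for illustration -- they are not the
# paper's calibration values.
FOCAL_PX = 800.0       # focal length in pixels (assumed)
CAM_HEIGHT_M = 1.4     # camera height above the road, metres (assumed)
CY = 360.0             # principal-point row (assumed image centre)

def ground_distance(y_row):
    """Distance Z = f * H / (y - cy) to a ground point imaged at row
    y_row; valid only below the horizon (y_row > CY)."""
    dy = y_row - CY
    if dy <= 0:
        raise ValueError("point is at or above the horizon")
    return FOCAL_PX * CAM_HEIGHT_M / dy

# A vehicle's road-contact point imaged 80 px below the principal point:
d = ground_distance(440.0)    # 800 * 1.4 / 80 = 14.0 m
```

The same relation also explains scale drift: if the assumed camera height is wrong by some factor, every distance estimate is wrong by the same factor, which is why the system re-estimates the ground plane and height continuously.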
International Journal of Research in Engineering and Science is an open access peer-reviewed international forum for scientists involved in research to publish quality and refereed papers. Papers reporting original research or experimentally proved review work are welcome. Papers for publication are selected through peer review to ensure originality, relevance, and readability.
Statistics indicate that many road accidents occur because drivers lack the time to react to sudden traffic events. This problem can be addressed by self-driving vehicles equipped with automated systems that detect such events. Autonomous vehicle navigation has become a standard component of Intelligent Transport Systems (ITS), and many Driver Assistance Systems (DAS) have been adopted to support autonomous vehicles. Developing recognition systems for automated self-driving cars requires monitoring and operating on traffic events in real time: the vehicle must correctly detect and respond to each event. This paper proposes such a system, applying image recognition to detect and respond to road blockers by means of real-time distance measurement. To evaluate the approach, experiments measuring the accuracy and precision of road-blocker detection and of distance calculation were conducted on the Shalom frame dataset; the approach achieved 99% accuracy and 100% precision for detection, and 97% accuracy and 99% precision for distance calculation.
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear... (ijtsrd)
This document summarizes a research paper that proposes a deep learning model for detecting and recognizing traffic lights using transfer learning. It begins with an introduction describing the challenges of autonomous vehicle perception and how deep learning can help overcome these challenges. It then reviews 9 other related works on traffic light detection using techniques like RFID, background subtraction, object tracking networks and HSV color modeling. Finally, it proposes a model using Faster R-CNN and Inception V2 for transfer learning to detect traffic lights in images and determine their state (red, yellow, or green). The model is trained on a dataset of Indian traffic signals.
Traffic jam detection using image processing (Malika Alix)
1. The document discusses using image processing techniques to detect traffic jams through analyzing video frames captured by road cameras.
2. Key steps include extracting frames from video, converting to grayscale and binary, applying morphological operations like erosion and dilation, and comparing frames to detect vehicle motion between frames and count vehicles to assess traffic levels.
3. A proposed system sends frame data from cameras to a server for processing, which analyzes frames to determine traffic status and shares this with a mobile app to help users choose alternative routes.
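The morphological and frame-comparison steps above can be sketched with plain NumPy; the structuring-element size and difference threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    p = np.pad(binary, pad)
    h, w = binary.shape
    out = np.zeros_like(binary)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(binary, k=3):
    """Binary erosion as the complement of dilating the complement."""
    return 1 - dilate(1 - binary, k)

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Threshold the absolute frame difference, then open (erode, then
    dilate) to suppress single-pixel noise."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    mask = (diff > thresh).astype(np.uint8)
    return dilate(erode(mask))

prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[2:6, 2:6] = 200     # a vehicle appears between frames
curr[8, 8] = 200         # single-pixel sensor noise
mask = motion_mask(prev, curr)
```

Opening removes the isolated noise pixel while restoring the vehicle-sized region, which is what makes the later vehicle counting reliable.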
A Method for Predicting Vehicles Motion Based on Road Scene Reconstruction an... (ITIIIndustries)
The suggested method helps predict vehicle movement in order to give the driver more time to react and avoid collisions on roads. The algorithm dynamically models the road scene around the vehicle based on data from the onboard camera. All moving objects are monitored and represented by the dynamic model on a 2D map, and after analysing each object's movement, the algorithm predicts its possible behaviour.
Automatic Vehicle Detection Using Pixelwise Classification Approach (IOSR Journals)
This document presents an automatic vehicle detection system using a pixelwise classification approach. The system first performs background color subtraction to remove background pixels. It then extracts features like edges, corners and color from each pixel. These features are used in a dynamic Bayesian network to classify each pixel as vehicle or non-vehicle. The network is trained on labeled image data. In detection, the Bayesian rule is used to calculate the probability of each pixel belonging to a vehicle. Post processing includes morphological operations and connected component analysis to obtain final vehicle detections. Experimental results on video data show the approach provides accurate vehicle detection across different scenes and camera heights.
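As a much-simplified stand-in for the paper's dynamic Bayesian network, per-pixel classification by Bayes' rule can be sketched with a single intensity feature and Gaussian class likelihoods; every distribution parameter below is invented for illustration:

```python
import numpy as np

# Simplified stand-in for the paper's dynamic Bayesian network: a naive
# Bayes decision over one per-pixel feature (intensity) with Gaussian
# class likelihoods. All numbers are illustrative, not trained values.

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def pixel_posterior(intensity, prior_vehicle=0.3,
                    mu_v=180.0, sig_v=30.0,    # "vehicle" pixels: bright
                    mu_b=60.0, sig_b=25.0):    # "background" pixels: dark
    """P(vehicle | intensity) by Bayes' rule over the two classes."""
    lv = gaussian_pdf(intensity, mu_v, sig_v) * prior_vehicle
    lb = gaussian_pdf(intensity, mu_b, sig_b) * (1 - prior_vehicle)
    return lv / (lv + lb)

p_bright = pixel_posterior(185.0)   # near the vehicle mode
p_dark = pixel_posterior(55.0)      # near the background mode
```

The real system combines several features (edges, corners, colour) and temporal dependencies, but the per-pixel posterior computation follows this pattern.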
An Overview of Traffic Accident Detection System using IoT (IRJET Journal)
This document discusses various technologies for automatic traffic accident detection using IoT (Internet of Things). It provides an overview of existing technologies such as the Gaussian mixture model, use of GPS and IoT, alcohol sensors with Arduino, support vector machines, MEMS, deep learning models, and image handling with machine learning. It then describes a proposed system that uses an Arduino microcontroller along with gyro sensors, GPS, and GSM to detect accidents and send location information and alerts to emergency services. The system aims to provide timely emergency response to save lives by automatically detecting accidents and notifying authorities.
The document describes a parking space counter project developed by a team of 4 students under the guidance of Mrs. Sujakumari N R. The project utilizes Python and OpenCV to create an automated system for counting available parking spaces in real-time by analyzing video feeds and detecting vehicles. The system is intended to improve parking management and enhance user experience by providing accurate information on parking availability. Key modules included video acquisition, vehicle detection and tracking, occupancy analysis and counting, and a user interface to display results.
A Smart Image Processing-Based System For Parking Space Vacancy Management (Mary Calkins)
This document describes a smart parking management system that uses image processing to detect vacant parking spots. The system works as follows:
1. Aerial cameras capture images of the parking lot and send them to a base station.
2. The base station processes the images using techniques like edge detection and morphological dilation to identify parking spots. It compares pixel counts in spots to detect occupied vs vacant spots.
3. The base station sends occupancy information to displays near the parking lot, helping drivers locate empty spots.
The system was tested on images of a parking lot. Image processing techniques accurately identified occupied spots based on increased pixel counts in bounding boxes around spots. This helps reduce time spent searching for parking.
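The pixel-count occupancy test can be sketched as follows; the spot coordinates, baseline counts, and threshold factor are invented for the example:

```python
import numpy as np

# Illustrative occupancy test: count foreground (edge) pixels inside each
# spot's bounding box and compare against an empty-spot baseline. Spot
# coordinates and the threshold factor are invented for this example.

SPOTS = {"A1": (0, 0, 4, 4), "A2": (0, 5, 4, 9)}   # (r0, c0, r1, c1), inclusive

def occupancy(edge_map, baseline_counts, factor=2.0):
    """A spot is 'occupied' when its pixel count exceeds factor x the
    empty-spot baseline count for that spot."""
    status = {}
    for name, (r0, c0, r1, c1) in SPOTS.items():
        count = int(edge_map[r0:r1 + 1, c0:c1 + 1].sum())
        status[name] = "occupied" if count > factor * baseline_counts[name] else "vacant"
    return status

edges = np.zeros((5, 10), dtype=np.uint8)
edges[1:4, 1:4] = 1                      # a car's edges fill spot A1
result = occupancy(edges, {"A1": 2, "A2": 2})
```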
COMPARATIVE STUDY ON VEHICLE DETECTION TECHNIQUES IN AERIAL SURVEILLANCE (IJCI JOURNAL)
Aerial surveillance has become a major trend over the past decades, and aerial vehicle-tracking techniques play a vital role in it, continuously giving rise to promising new methods. Such systems are useful in many applications, including policing, traffic monitoring, natural-disaster response, and the military; they often cover large areas and provide a better perspective on moving objects. Moving vehicles may be detected in dynamic aerial imagery, wide-area motion imagery, or low-resolution images, and targets may also be static. Identifying objects from an aerial view is difficult because of varying camera angles and the mix of moving and motionless objects. This paper presents a comparative study of vehicle detection and tracking approaches in aerial videos, with experimental results measuring working conditions, hit rate, and false-alarm rate.
Identification and classification of moving vehicles on road (Alexander Decker)
This document describes a system for automatically identifying and classifying vehicles in traffic video. The system uses image processing and machine learning techniques. Video frames are first processed to detect vehicles by subtracting background images. Features like width, length, area, and perimeter are then extracted from vehicle images. These features are input to a neural network classifier trained on sample vehicle data to classify vehicles as big or small. The system was tested on real traffic video from Saudi Arabia. It aims to automate traffic monitoring and reduce costs compared to manual methods.
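Extracting the width, length, area, and perimeter features from a detected vehicle blob can be sketched as follows (a hypothetical helper for illustration, not the paper's implementation):

```python
import numpy as np

def region_features(mask):
    """Width, length, area, and 4-connected perimeter of the single blob
    in a binary mask -- example shape features for the size classifier.
    (Illustrative helper; not the paper's implementation.)"""
    rows, cols = np.nonzero(mask)
    length = int(rows.max() - rows.min() + 1)
    width = int(cols.max() - cols.min() + 1)
    area = int(mask.sum())
    # A foreground pixel is interior when all four 4-neighbours are set;
    # the perimeter is every other foreground pixel.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    border = mask.astype(bool) & ~interior.astype(bool)
    return {"width": width, "length": length,
            "area": area, "perimeter": int(border.sum())}

blob = np.zeros((8, 8), dtype=np.uint8)
blob[2:6, 1:7] = 1                      # a 4-row x 6-column "vehicle"
feats = region_features(blob)
```

A feature vector like this is what gets fed to the neural-network classifier to separate big from small vehicles.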
An Analysis of Various Deep Learning Algorithms for Image Processing (vivatechijri)
The many applications of image processing have given it a wide scope in data analysis. Machine-learning algorithms provide a powerful environment for training models to identify the various entities in images and segment them accordingly. While classifiers such as Support Vector Machines (SVM) or Random Forests do justice to the task, deep-learning algorithms such as Artificial Neural Networks (ANN) and, above all, the well-known and extremely powerful Convolutional Neural Network (CNN) add a new dimension to the image-processing domain, with far higher accuracy and computational power for classifying images and segregating their entities as individual components of the working region. The major focus is on the Region-based Convolutional Neural Network (R-CNN) algorithm and how well its successors, the Fast, Faster, and Mask R-CNN variants, provide pixel-level segmentation.
Automatic Detection of Unexpected Accidents Monitoring Conditions in Tunnels (IRJET Journal)
The document describes a proposed system to automatically detect accidents and unexpected events in road tunnels using video footage from CCTV cameras. The system would use object detection and tracking technology, along with a Faster R-CNN deep learning model, to identify objects like vehicles, fires, and people in tunnel videos. It would monitor the movement and position of detected objects over time to identify accidents or other irregular events. If an accident is detected, a signal would be sent to alert authorities so they can respond quickly. The system aims to address the challenges of limited visibility and low-quality images from tunnel CCTV cameras.
This document discusses various methods that have been proposed for detecting vehicle accidents using video analysis and machine learning techniques. It reviews 17 different research papers that tested approaches using convolutional neural networks, bidirectional LSTM, Gaussian mixture models, and other algorithms to analyze frames from videos in order to identify accidents. The papers evaluated methods for detecting vehicle collisions, extracting vehicle trajectories, recognizing abnormal traffic events, and alerting emergency services when accidents were identified. Overall, the document provides an overview of the state-of-the-art in using computer vision and deep learning for automatic accident detection from video footage.
IRJET - Traffic Sign Detection, Recognition and Notification System using ... (IRJET Journal)
This document presents a traffic sign detection, recognition, and notification system using Faster R-CNN. The system takes video input containing traffic signs and converts it to frames. Faster R-CNN with ROI pooling and a classifier is used to detect traffic signs. Color and shape information are then used to refine detections. A CNN classifier recognizes the signs. The system notifies drivers of detected signs via audio messages, helping drivers comply with signs even if ignored visually. The proposed detector detects all sign categories, and recognition accuracy on the German Traffic Sign Detection Benchmark dataset exceeds 90% for 42 sign classes.
A Review: Machine vision and its Applications (IOSR Journals)
Abstract: Machine vision has been used in industrial machine design, for instance through intelligent character recognition. Its increasing use makes a significant contribution to ensuring competitiveness in modern development. This paper presents the state of the art in machine-vision inspection and a critical overview of applications in various industries. In its restricted sense, machine vision is also known as computer vision or robot vision. The first section gives an overview of machine-vision technology, followed by various industrial applications and future trends in machine vision. Keywords: CCD (charge-coupled devices), fruit harvesting systems, HSI (hue saturation intensity), image analysis, image enhancement, image feature extraction, image feature classification, intelligent vehicle tracking, isodiscrimination contour, machine vision.
Noise Removal in Traffic Sign Detection Systems (CSEIJJournal)
The application of traffic sign detection and recognition is growing in driver-assistance and automatic driving systems, helping drivers and autonomous systems detect and recognize traffic signs effectively. However, such systems can struggle in challenging environments involving rain, haze, hue shifts, and the like. To improve detection performance in conditions like rain and haze, we propose using a deep-learning technique based on a convolutional neural network to pre-process the visual data; the processed data can then be used for detection. Our architecture uses the NoiseNet model [11], a noise-reduction network, trained to enhance images in patches rather than as a whole. Training uses the Challenging Unreal and Real Environment Traffic Sign Detection dataset (CURE-TSD), which contains videos of different roads in various challenging situations.
Identifying Parking Spots from Surveillance Cameras using CNN (IRJET Journal)
This document describes a study that uses convolutional neural networks (CNNs) to detect empty parking spots from surveillance camera images. The researchers aim to investigate the potential of using CNNs for parking spot detection and evaluate their performance compared to other approaches. The document provides details on the CNN-based parking spot detection system's methodology, which involves collecting and labeling a dataset of parking spot images, preprocessing the data, defining the CNN architecture, and training the CNN. It also reviews related work on parking spot detection using traditional computer vision techniques, machine learning algorithms, and deep learning methods.
Monitoring traffic in urban areas is an important task for intelligent transport applications aiming to alleviate problems like traffic jams and long trip times. Urban traffic flow is more complicated than highway flow because of the slow movement of vehicles and crowded streets. This paper proposes a vehicle detection and classification system for intersections, consisting of three main phases: vehicle detection, vehicle tracking, and vehicle classification. Detection uses background subtraction with a mixture-of-Gaussians (MoG) algorithm, and a shadow-removal algorithm is developed to improve the detection phase by eliminating undesired detected regions (shadows). After detection, vehicles are tracked until they reach the classification line, where their dimensions are used to classify them into three classes (cars, bikes, and trucks). The system keeps one counter per class: when a vehicle is classified, the corresponding counter is incremented. The counts can be used to estimate traffic density at intersections and adjust the traffic-light timing for the next cycle. The system is applied to videos from stationary cameras, and the results demonstrate its robustness and accuracy.
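The dimension-based classification with per-class counters can be sketched as follows; the size thresholds below are invented, not the paper's calibrated values:

```python
# Dimension-based classification with one counter per class, in the
# spirit of the three-class (bike/car/truck) scheme applied at the
# classification line. The thresholds below are invented, not the
# paper's calibrated values.

COUNTS = {"bike": 0, "car": 0, "truck": 0}

def classify_and_count(width_m, length_m):
    """Assign a vehicle crossing the classification line to a class by
    its dimensions and increment that class's counter."""
    if length_m < 2.5 and width_m < 1.0:
        cls = "bike"
    elif length_m < 6.0:
        cls = "car"
    else:
        cls = "truck"
    COUNTS[cls] += 1
    return cls

# Four tracked vehicles reach the classification line:
for w, l in [(0.8, 2.0), (1.8, 4.5), (2.5, 12.0), (1.7, 4.2)]:
    classify_and_count(w, l)
```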
This document summarizes a research paper on traffic sign recognition using convolutional neural networks (CNNs). It discusses how a two-tier CNN architecture combined with YOLO networks can accurately detect and identify traffic signs, even in adverse weather conditions. The first part provides background on traffic sign recognition and related work using methods like support vector machines and HOG features. It then describes the current implementation which uses a two-tier CNN for sign detection and identification, and analyzes the results showing over 95% accuracy. In conclusion, the implementation proves effective for traffic sign recognition under varying conditions.
Vehicle License Plate Recognition (VLPR) is an important system for smooth traffic, and it is useful in many settings, such as private and public entrances, parking lots, border control, and theft control. This paper presents a new framework for a Sudanese VLPR system. The proposed framework uses Multi-Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate; horizontal and vertical projections are used for character segmentation, and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset containing samples of the current Sudanese license-plate design is used for training and testing the proposed framework.
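The projection-based segmentation step can be sketched for the vertical direction as follows (a simplified illustration, not the paper's implementation):

```python
import numpy as np

def segment_columns(plate):
    """Split a binary plate image into character column-spans using the
    vertical projection: all-zero columns are gaps between characters."""
    proj = plate.sum(axis=0)
    spans, start = [], None
    for c, v in enumerate(proj):
        if v > 0 and start is None:
            start = c                       # a character begins
        elif v == 0 and start is not None:
            spans.append((start, c - 1))    # a character ends
            start = None
    if start is not None:                   # character touching the edge
        spans.append((start, len(proj) - 1))
    return spans

plate = np.zeros((5, 12), dtype=np.uint8)
plate[1:4, 1:3] = 1     # character 1
plate[1:4, 5:8] = 1     # character 2
plate[1:4, 10:12] = 1   # character 3 (touches the right edge)
spans = segment_columns(plate)
```

The horizontal projection works the same way on row sums, trimming the plate to the text band before the column split.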
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA... (csandit)
In today's technological life, everyone is familiar with the importance of security measures, and researchers have made many attempts to address them; flying-robot technology is one of them. Perhaps the best-known use of flying robots is in security and surveillance, which makes these devices extremely practical, not only for their unmanned movement but also for their unique manoeuvrability over arbitrary areas. This research discusses the automatic landing of a flying robot. The system is based on frequent interrupts sent from the main microcontroller to the camera module to capture images; these images are analysed by an edge-detection-based image-processing system, which decides whether or not to land on the ground. Experiments show that this method performs well in terms of precision.
Real Time Object Identification for Intelligent Video Surveillance Applications (Editor IJCATR)
Intelligent video surveillance has emerged as an important research topic in computer vision in recent years. It suits a broad range of applications, such as monitoring activity at traffic intersections to detect congestion and predict traffic flow, and object classification is a key component of smart surveillance software. This paper proposes two robust methodologies for people and object classification in automated surveillance systems. The first method uses a background-subtraction model to detect object motion: background subtraction and image segmentation based on morphological transformations (erosion followed by dilation across frames) are used for tracking and object classification on highways. This algorithm segments the image while preserving important edges, which improves the adaptive background mixture model and makes the system learn faster and more accurately. The second method performs object detection without background subtraction, since the objects are static; segmentation is done by a bounding-box registration technique, and classification uses a multiclass SVM with edge histograms as features, computed for various bin values in different environments. The results demonstrate the effectiveness of the proposed approach.
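The edge-histogram feature used by the second method's multiclass SVM can be sketched as an orientation histogram weighted by gradient magnitude; the bin count and weighting scheme are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def edge_orientation_histogram(gray, bins=8):
    """Histogram of gradient orientations in [0, pi), weighted by
    gradient magnitude and normalised to sum to 1."""
    gy, gx = np.gradient(gray.astype(float))   # np.gradient: axis 0 first
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # fold directions into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# A pure horizontal intensity ramp: every gradient points along x, so
# all the weight falls into the first orientation bin.
img = np.tile(np.arange(16, dtype=float), (16, 1))
h = edge_orientation_histogram(img)
```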
This document proposes a real-time image processing (RTIP) algorithm to detect vehicle queues at traffic junctions using a low-cost system. The algorithm has two main operations: 1) motion detection using frame differencing and thresholding, and 2) vehicle detection using a Separable Morphological Edge Detector (SMED) operator and thresholding. The algorithm aims to accurately measure queue parameters such as queue length with 95% accuracy. It uses simple spatial-domain techniques for real-time processing with low computational requirements.
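Abstracting the per-zone motion and vehicle tests into a single occupancy ratio, the queue-length measurement can be sketched as follows (the zone layout and threshold are invented for the example):

```python
# The paper measures queue parameters over lane zones along the
# approach; here its two per-zone tests (motion and vehicle presence)
# are abstracted into one occupancy ratio per zone. The zone layout and
# threshold below are invented.

def queue_length(zone_occupancy, ratio=0.5):
    """Count consecutive occupied zones starting at the stop line; the
    queue ends at the first free zone."""
    n = 0
    for occ in zone_occupancy:
        if occ >= ratio:
            n += 1
        else:
            break
    return n

# Zones ordered from the stop line outwards; a gap ends the queue even
# if a later zone happens to be occupied.
q = queue_length([0.9, 0.8, 0.6, 0.2, 0.7])
```

Multiplying the zone count by the per-zone ground length then gives the queue length in metres.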
Implementation of Lane Line Detection using Hough Transformation and Gaussian ... (IRJET Journal)
This document summarizes a research paper that implements lane line detection in images and videos using the Hough transform and Gaussian smoothing. The methodology section outlines the steps taken, which include converting the image to grayscale, applying Gaussian smoothing for noise reduction, using Canny edge detection to extract edges, and applying the Hough transform to detect lane lines. Key algorithms discussed are Gaussian smoothing, Canny edge detection, Hough transformation, grayscale conversion, and defining a region of interest. The implementation section demonstrates applying these techniques to detect lane lines, including masking the image, edge detection, and identifying the lane lines.
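The first two pipeline stages, grayscale conversion and Gaussian smoothing, can be sketched as follows (standard luminosity weights and a separable 1-D kernel; a sketch, not the paper's exact code):

```python
import numpy as np

# Standard luminosity weights for grayscale conversion and a separable
# 1-D Gaussian kernel -- a sketch of the pipeline's first two stages.

def to_gray(rgb):
    """Weighted grayscale conversion (ITU-R BT.601 luma weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gaussian_kernel1d(sigma, radius):
    """Normalised 1-D Gaussian; smoothing convolves it along rows, then
    columns (separability makes the 2-D blur cheap)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

k = gaussian_kernel1d(sigma=1.0, radius=2)
rgb = np.full((4, 4, 3), 255.0)     # a pure-white test patch
gray = to_gray(rgb)                 # stays 255 everywhere
```

Canny edge detection and the Hough transform then operate on the smoothed grayscale image within the masked region of interest.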
Automatism System Using Faster R-CNN and SVM (IRJET Journal)
The document describes a proposed system to automatically manage vacant parking spaces using computer vision techniques. The system would use existing surveillance cameras installed in parking lots. It detects vehicles in images using a Faster R-CNN object detection model. This model uses a Region Proposal Network to quickly detect objects. An SVM classifier is then used to classify detected objects as free or occupied parking spaces. The goal is to assist drivers in finding available spaces more efficiently.
Applying Computer Vision to Traffic Monitoring System in Vietnam (Lê Anh)
This document summarizes research on applying computer vision algorithms to develop an automatic traffic monitoring system in Vietnam. Key aspects of the system include vehicle detection using differences between frames, vehicle segmentation using edge detection and dilation, vehicle classification based on area and shape, and vehicle tracking across frames to count vehicles and estimate speeds. Experimental results found the system could detect 90-95% of vehicles and estimate speeds accurately 90-93% of the time. The research aims to improve traffic management by providing real-time traffic information.
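The speed-estimation step can be sketched once the camera is calibrated so that one pixel along the lane maps to a known ground distance; the calibration constant and frame rate below are invented for the example:

```python
# Speed from tracked centroid displacement, assuming the camera has been
# calibrated so one pixel along the lane corresponds to a known metric
# distance. Both constants below are invented for the example.

METRES_PER_PIXEL = 0.05    # lane-direction ground sampling (assumed)
FPS = 25.0                 # video frame rate (assumed)

def speed_kmh(centroid_rows):
    """Average speed over a per-frame track of centroid row positions."""
    pixels = abs(centroid_rows[-1] - centroid_rows[0])
    seconds = (len(centroid_rows) - 1) / FPS
    return pixels * METRES_PER_PIXEL / seconds * 3.6

v = speed_kmh([100, 110, 120, 130, 140])   # 40 px over 4 frames
```

Averaging over the whole track rather than a single frame pair is what keeps the estimate stable against per-frame detection jitter.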
IRJET- Traffic Sign Detection, Recognition and Notification System using ...IRJET Journal
This document presents a traffic sign detection, recognition, and notification system using Faster R-CNN. The system takes video input containing traffic signs and converts it to frames. Faster R-CNN with ROI pooling and a classifier is used to detect traffic signs. Color and shape information are then used to refine detections. A CNN classifier recognizes the signs. The system notifies drivers of detected signs via audio messages, helping drivers comply with signs even if ignored visually. The proposed detector detects all sign categories, and recognition accuracy on the German Traffic Sign Detection Benchmark dataset exceeds 90% for 42 sign classes.
A Review: Machine vision and its ApplicationsIOSR Journals
Abstract:The machine vision has been used in the industrial machine designing by using the intelligent character recognition. Due to its increased use, it makes the significant contribution to ensure the competitiveness in modern development. The state of art in machine vision inspection and a critical overview of applications in various industries are presented in this paper. In its restricted sense it is also known as the computer vision or the robot vision. This paper gives the overview of Machine Vision Technology in the first section, followed by various industrial application and thefuture trends in Machine Vision. Keywords:CCD- charged coupled devices, Fruit harvesting system, HIS- Hue Saturation Intensity, Image analysis, Image enhancement, Image feature extraction, Image feature classification processing, Intelligent Vehicle tracking , Isodiscriminationn Contour, Machine Vision
Noise Removal in Traffic Sign Detection SystemsCSEIJJournal
The application of Traffic sign detection and recognition is growing in traffic assistant driving systems and
automatic driving systems. It helps drivers and automatic driving systems to detect and recognize the
traffic signs effectively. However, it is found that it may be difficult for these systems to work in challenging
environments like rain, haze, hue, etc. To help the detection systems to have better performance in
challenging conditions like rain and haze, we propose the use of a deep learning technique based on a
Convolutional Neural Network to process visual data. The processed data could be used in the detection.
We are using the NoiseNet model [11], a noise reduction network for our architecture. The model is
trained to enhance images in patches instead of as a whole. The training is done using the Challenging
Unreal and Real Environment - Traffic Sign Detection Dataset(CURE-TSD) which contains videos of
different roads in various challenging situations. The
Identifying Parking Spots from Surveillance Cameras using CNNIRJET Journal
This document describes a study that uses convolutional neural networks (CNNs) to detect empty parking spots from surveillance camera images. The researchers aim to investigate the potential of using CNNs for parking spot detection and evaluate their performance compared to other approaches. The document provides details on the CNN-based parking spot detection system's methodology, which involves collecting and labeling a dataset of parking spot images, preprocessing the data, defining the CNN architecture, and training the CNN. It also reviews related work on parking spot detection using traditional computer vision techniques, machine learning algorithms, and deep learning methods.
Monitoring traffic in urban areas is an important task for intelligent transport applications to alleviate the traffic problems like traffic jams and long trip times. The traffic flow in urban areas is more complicated than the traffic flow in highway, due to the slow movement of vehicles and crowded traffic flows in urban areas. In this paper, a vehicle detection and classification system at intersections is proposed. The system consists of three main phases: vehicle detection, vehicle tracking and vehicle classification. In the vehicle detection, the background subtraction is utilized to detect the moving vehicles by employing mixture of Gaussians (MoGs) algorithm, and then the removal shadow algorithm is developed to improve the detection phase and eliminate the undesired detected region (shadows). After the vehicle detection phase, the vehicles are tracked until they reach the classification line. Then the vehicle dimensions are utilized to classify the vehicles into three classes (cars, bikes, and trucks). In this system, there are three counters; one counter for each class. When the vehicle is classified to a specific class, the class counter is incremented by one. The counting results can be used to estimate the traffic density at intersections, and adjust the timing of traffic light for the next light cycle. The system is applied to videos obtained by stationary cameras. The results obtained demonstrate the robustness and accuracy of the proposed system.
This document summarizes a research paper on traffic sign recognition using convolutional neural networks (CNNs). It discusses how a two-tier CNN architecture combined with YOLO networks can accurately detect and identify traffic signs, even in adverse weather conditions. The first part provides background on traffic sign recognition and related work using methods like support vector machines and HOG features. It then describes the current implementation which uses a two-tier CNN for sign detection and identification, and analyzes the results showing over 95% accuracy. In conclusion, the implementation proves effective for traffic sign recognition under varying conditions.
Vehicle License Plate Recognition (VLPR) is an important system for harmonious traffic. Moreover this system is helpful in many fields and places as private and public entrances, parking lots, border control and theft control. This paper presents a new framework for Sudanese VLPR system. The proposed framework uses Multi Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projection will be used for character segmentation and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset that contains samples for the current shape of Sudanese license plates will be used for training and testing the proposes framework.
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...csandit
In today’s technological life, everyone is quite familiar with the importance of security
measures in our lives. So in this regard, many attempts have been made by researchers and one
of them is flying robots technology. One well-known usage of flying robot, perhaps, is its
capability in security and care measurements which made this device extremely practical, not
only for its unmanned movement, but also for the unique manoeuvre during flight over the
arbitrary areas. In this research, the automatic landing of a flying robot is discussed. The
system is based on the frequent interruptions that is sent from main microcontroller to camera
module in order to take images; these images have been distinguished by image processing
system based on edge detection, after analysing the image the system can tell whether or not to
land on the ground. This method shows better performance in terms of precision as well as
experimentally.
Real Time Object Identification for Intelligent Video Surveillance ApplicationsEditor IJCATR
Intelligent video surveillance system has emerged as a very important research topic in the computer vision field in the
recent years. It is well suited for a broad range of applications such as to monitor activities at traffic intersections for detecting
congestions and predict the traffic flow. Object classification in the field of video surveillance is a key component of smart
surveillance software. Two robust methodology and algorithms adopted for people and object classification for automated surveillance
systems is proposed in this paper. First method uses background subtraction model for detecting the object motion. The background
subtraction and image segmentation based on morphological transformation for tracking and object classification on highways is
proposed. This algorithm uses erosion followed by dilation on various frames. Proposed algorithm in first method, segments the image
by preserving important edges which improves the adaptive background mixture model and makes the system learn faster and more
accurately. The system used in second method adopts the object detection method without background subtraction because of the static
object detection. Segmentation is done by the bounding box registration technique. Then the classification is done with the multiclass
SVM using the edge histogram as features. The edge histograms are calculated for various bin values in different environment. The
result obtained demonstrates the effectiveness of the proposed approach.
This document proposes a real-time image processing (RTIP) algorithm to detect vehicle queues at traffic junctions using a low-cost system. The algorithm has two main operations: 1) motion detection using frame differencing and thresholding, and 2) vehicle detection using a Separable Morphological Edge Detector (SMED) operator and thresholding. The algorithm aims to accurately measure queue parameters such as queue length with 95% accuracy. It uses simple spatial-domain techniques for real-time processing with low computational requirements.
Implementation of Lane Line Detection using HoughTransformation and Gaussian ...IRJET Journal
This document summarizes a research paper that implements lane line detection in images and videos using the Hough transform and Gaussian smoothing. The methodology section outlines the steps taken, which include converting the image to grayscale, applying Gaussian smoothing for noise reduction, using Canny edge detection to extract edges, and applying the Hough transform to detect lane lines. Key algorithms discussed are Gaussian smoothing, Canny edge detection, Hough transformation, grayscale conversion, and defining a region of interest. The implementation section demonstrates applying these techniques to detect lane lines, including masking the image, edge detection, and identifying the lane lines.
Automatism System Using Faster R-CNN and SVMIRJET Journal
The document describes a proposed system to automatically manage vacant parking spaces using computer vision techniques. The system would use existing surveillance cameras installed in parking lots. It detects vehicles in images using a Faster R-CNN object detection model. This model uses a Region Proposal Network to quickly detect objects. An SVM classifier is then used to classify detected objects as free or occupied parking spaces. The goal is to assist drivers in finding available spaces more efficiently.
Applying Computer Vision to Traffic Monitoring System in Vietnam Lê Anh
This document summarizes research on applying computer vision algorithms to develop an automatic traffic monitoring system in Vietnam. Key aspects of the system include vehicle detection using differences between frames, vehicle segmentation using edge detection and dilation, vehicle classification based on area and shape, and vehicle tracking across frames to count vehicles and estimate speeds. Experimental results found the system could detect 90-95% of vehicles and estimate speeds accurately 90-93% of the time. The research aims to improve traffic management by providing real-time traffic information.
Similar to An Efficient System for Forward Collison Avoidance Using Low Cost Camera & Embedded Processor in Autonomous Vehicles (20)
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
5214-1693458878915-Unit 6 2023 to 2024 academic year assignment (AutoRecovere...
Advanced Computational Intelligence: An International Journal (ACII), Vol.4, No.1/2, April 2017
DOI: 10.5121/acii.2017.4201
AN EFFICIENT SYSTEM FOR FORWARD COLLISION AVOIDANCE USING LOW COST CAMERA & EMBEDDED PROCESSOR IN AUTONOMOUS VEHICLES
Manoj C R
Tata Consultancy Services Limited, Bangalore, India
ABSTRACT
Forward Collision Avoidance (FCA) systems in automobiles are an essential part of Advanced Driver Assistance Systems (ADAS) and autonomous vehicles. These systems currently use radar as the main sensor. The increasing resolution of camera sensors, the processing capability of hardware chipsets, and advances in image processing algorithms have recently been pushing camera-based features. Monocular cameras face the challenge of accurate scale estimation, which limits their use as a stand-alone sensor for this application. This paper proposes an efficient system which performs multi-scale object detection, for which a patent has been granted, and efficient 3D reconstruction using a structure from motion (SFM) framework. While the algorithms need to be accurate, they also need to operate in real time on low-cost embedded hardware. The focus of the paper is to discuss how the proposed algorithms are designed so that they can provide real-time performance on low-cost embedded chips that make use of only Digital Signal Processors (DSPs) and vector processing cores.
KEYWORDS
Advanced driver assistance (ADAS), HOG, SFM, FCA, collision avoidance, 3D reconstruction, object
detection, classification
1. INTRODUCTION: FORWARD COLLISION SYSTEM OVERVIEW
Forward Collision Warning is a feature that provides alerts intended to assist drivers in avoiding
or mitigating the harm caused by rear-end crashes [1]. The FCA system may alert the driver to an
approach (or closing) conflict a few seconds before the driver would have detected such a conflict
(e.g., if the driver's eyes were off the road) so that any necessary corrective action (e.g., hard
braking) can be taken. Key to driver acceptance of the FCA feature is appropriate crash alert
timing. The goal of the alert timing approach is to allow the driver enough time to avoid the
crash, while avoiding alerts perceived as occurring too early or unnecessarily.
2. SENSING ALGORITHMS
Most of today's Forward Collision Alert production systems use RADAR as the primary sensor.
The critical factor for the success of an FCA system is its ability to detect and track objects on
the road which can potentially cause an accident. At the same time, it should be able to suppress
false warnings reliably. Mono-camera based systems cannot detect distance as accurately as
RADAR and hence suffer in the computation of Time to Collide (TTC). The proposed approach
provides better accuracy in terms of localization of the object and computation of the distance to
the object of interest.
2.1. Camera Calibration
Camera is a device which maps the 3D scene in a 2D plane, Here the camera is considered as a
pin hole through which the light rays from the objects is passed and projected to the image plane
at the focal length of the camera, Camera intrinsic and extrinsic parameters can be determined for
a particular camera and lens combination by photographing a controlled scene. The picture shown
in figure 2, gives a relationship between the camera parameters and the real world distances.
Figure 1. Camera & Real world relationship
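As a minimal illustration of the pinhole model described above (this sketch and its intrinsic values are illustrative, not from the paper), a 3D point in the camera frame maps to a pixel via the focal lengths and principal point:

```python
# Pinhole projection sketch: a 3D point (X, Y, Z) in camera coordinates
# maps to pixel (u, v) via  u = fx * X / Z + cx,  v = fy * Y / Z + cy.
# fx, fy, cx, cy here are assumed example intrinsics, not calibrated values.
def project(point_3d, fx, fy, cx, cy):
    X, Y, Z = point_3d
    assert Z > 0, "point must lie in front of the camera"
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Example: VGA camera with assumed fx = fy = 500 px, principal point (320, 240).
# A point 20 m ahead, 2 m right, 1 m below the optical axis:
u, v = project((2.0, 1.0, 20.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(u, v)  # 370.0 265.0
```

The inverse of this mapping, combined with a known camera height, is what later lets the system recover metric distances from image measurements.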
2.2 Vehicle Detection
Detection of vehicles using camera images has been a well-researched topic. There are multiple
algorithms which are capable of doing this. The important aspect of a robust algorithm is its
capability to detect multiple kinds of vehicles under different environmental and lighting
conditions with high levels of accuracy. Histogram of Oriented Gradients (HOG) [3] is a popular
object detection algorithm for detection of vehicles along with a classifier like Support Vector
Machine (SVM). Classical HOG algorithm use multi scale based approach to detect objects at
varying size which is computationally too intensive as the same detection window passes through
multiple scale sizes. We propose a novel enhancement over the classical approach which uses
multiple window sizes in order to handle varying object size instead of rescaling the windows.
For example, with VGA resolution camera. a vehicle present at less than 20m of longitudinal
distance from the camera may be covered using a 128x128 detection window and from 20m to
40m using a 64x64 window and beyond this , with a 32x32 window. In order to ensure proper
detection of vehicles with varying colors and types (cars, trucks, buses etc.), multiple SVM
feature descriptors are trained each belonging to a typical variation of color and type of vehicle
.The modified classification equation could be depicted as below.
Where Xi is the feature vector of HOG, Yi is the feature vector of SVM trained data and N
number of samples per block, M1 is the number of blocks per window for 128x64 M2 for 64x64
and M3 for 32x32 windows.
The advantage of this method, as can be seen from the above equation, is that it consists mostly of
linear multiply and accumulate operations, and can therefore be parallelized or vectorised easily
on a Digital Signal Processor (DSP) or Single Instruction Multiple Data (SIMD) kind of
architecture, while ensuring that varying scales and orientations of the objects are detected.
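The multiply-accumulate structure described above can be sketched as follows; the function name, block counts, and feature values are illustrative, not the paper's actual equation:

```python
# Sketch of a linear SVM decision over HOG block features.
# Each window size would use its own trained weights and block count (M1/M2/M3);
# all numbers below are made up for illustration only.
def svm_score(hog_blocks, svm_weights, bias=0.0):
    """Sum of dot products between HOG block features and trained SVM weights."""
    score = bias
    for x_block, y_block in zip(hog_blocks, svm_weights):
        for xi, yi in zip(x_block, y_block):
            score += xi * yi  # plain multiply-accumulate: maps well to DSP/SIMD
    return score

# Two blocks of three features each:
hog = [[1.0, 0.0, 1.0], [0.5, 0.5, 0.0]]
w   = [[0.5, 1.0, -0.5], [1.0, 1.0, 1.0]]
print(svm_score(hog, w))  # 1.0
```

Because the inner loop is a single running sum of products, it vectorises directly onto SIMD lanes, which is the property the partitioning in section 2.4 exploits.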
The pictures below show the actual detection output of the algorithms (marked by bounding
boxes) at various distances, when subjected to multiple test videos collected on a test vehicle by
TCS.
Figure 2. Vehicle Detection at varying scales of object sizes
The statistics below show the performance of the algorithm under different environmental
conditions for different types of vehicles over long hours of real road data, verified against
ground truth.
Table 1: Vehicle detection algorithm statistics

Algorithm                        True Detection        Miss Detection        False Detection
                                 (frame by frame %)    (frame by frame %)    (out of total frames tested)
Vehicle Detection (up to 80 m)   92%                   8%                    2%

2.3. Distance Estimation Using Structure From Motion

Camera based structure from motion (SFM) is becoming a promising means for distance
estimation in ADAS applications. SFM is attractive due to its lower cost, as it works with a
monocular camera and has simpler calibration requirements. However, the main challenge in
achieving accuracy with SFM is that no fixed baseline is available as the vehicle moves. Robust
monocular SFM that effectively reduces scale drift in road scenarios has significant benefits for
autonomous driving systems.

We propose an SFM approach which handles scale drift by means of an approximate calculation
of the ground plane. This is made possible by a combination of approaches, as multiple cues are
used to extract the ground plane information. Other SFM approaches use only sparse feature
matching for this estimation [4, 5, 6].

Correction of scale drift is the most important factor in getting the correct distance to the object.
With the information available about the height (h) of the camera installation in the vehicle, the
bounding boxes provided by the object detection module are provided as an input to the SFM
module, in consecutive frames k and k+1 where the object is detected. Next, features are
extracted and matched using a feature matching algorithm called Oriented FAST and Rotated
BRIEF (ORB) [9]. More popular feature matching techniques like SIFT or SURF are generally
used; however, they are more computationally intensive. Since the object detection module
provides a smaller region in which most of the space is covered by the detected object, ORB by
itself provides good feature matching, with the advantage that it takes less time, making it
embedded-friendly. The feature matching is done using the Lucas-Kanade algorithm, which
works under the assumption that neighboring pixels have similar motion. Using the matched
feature correspondences, the fundamental matrix is estimated. The camera matrix obtained from
camera calibration is used to calculate the essential matrix from the fundamental matrix. Given
the projection matrices, 3-dimensional points are computed from their measured image positions
in two or more views using triangulation. Figure 4 shows the intersection of the lines joining the
camera centres and the image points x and x′, which is the 3D point X. From image features xij,
structure from motion gives an initial estimate of the projection matrices Pi and 3D points Xj.
Usually it is necessary to refine this estimate using iterative non-linear optimization to minimize
an appropriate cost function; this is bundle adjustment [11]. Bundle adjustment works by
minimizing a cost function that is related to a weighted sum of squared re-projection errors.
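As an illustrative aside (not from the paper), the brightness-constancy idea behind Lucas-Kanade can be sketched in one dimension, where a small shift d between two signals is recovered from the spatial gradient by least squares:

```python
# 1-D Lucas-Kanade sketch: estimate a small shift d between signals I1 and I2
# by solving the optical-flow constraint  I_x * d = -(I2 - I1)  in least squares.
# Purely illustrative; real trackers operate on 2-D patches in an image pyramid.
def lk_shift(i1, i2):
    num = 0.0
    den = 0.0
    for k in range(1, len(i1) - 1):
        ix = (i1[k + 1] - i1[k - 1]) / 2.0  # central-difference spatial gradient
        it = i2[k] - i1[k]                   # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den if den else 0.0

# A linear ramp shifted by 0.5 sample is recovered exactly.
ramp = [float(k) for k in range(10)]
shifted = [v - 0.5 for v in ramp]  # I2(x) = I1(x - 0.5) for a ramp
print(lk_shift(ramp, shifted))  # 0.5
```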
The final stage is resolving the scale ambiguity. As mentioned above, the scale drift is corrected
by taking into consideration that the camera is mounted above the road surface, and hence the
mean of the Y co-ordinate of the extracted 3D points should be greater than the camera height.
The scale is then computed as s = H/h, where s is the scale, H is the known height of the camera
mounted on the test vehicle, and h is the calculated mean height of the 3D points. The generated
3D points (X, Y, Z) are multiplied by this scale factor to obtain the absolute 3D point cloud.
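A minimal sketch of this scale-correction step (function and variable names are illustrative, not from the paper):

```python
# Resolve the SFM scale ambiguity using the known camera mounting height.
# camera_height_m: true height H of the camera above the road, in metres.
# points: up-to-scale (X, Y, Z) triples from triangulation.
def rescale_points(points, camera_height_m):
    # Mean Y of the reconstructed points gives the reconstruction's
    # arbitrary-scale estimate h of the camera height; s = H / h.
    mean_h = sum(y for _, y, _ in points) / len(points)
    s = camera_height_m / mean_h
    return [(s * x, s * y, s * z) for x, y, z in points]

# Camera mounted at 1.2 m; reconstruction came out at half scale (mean Y = 0.6),
# so every point is doubled to recover metric coordinates.
pts = [(1.0, 0.5, 10.0), (2.0, 0.7, 12.0)]
metric = rescale_points(pts, camera_height_m=1.2)
print(metric[0])  # (2.0, 1.0, 20.0)
```

The Z component of each rescaled point is then the metric longitudinal distance used for the TTC computation.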
The block diagram below shows the SFM pipeline.
Figure 3. SFM pipeline
The figure below shows the distance estimated by the algorithm.
Figure 4. Distance computed using SFM
Table 2: Distance Estimation Algorithm Statistics
2.4. Implementation & Optimization on Embedded Processor

Since the system is required to work in the vehicle, the algorithms and application need to be
implemented on hardware meeting real-time performance. The stringent real-time requirement
calls for processing the algorithms at a frame rate of 30 frames per second, i.e., a detection
latency of 33 ms. Most existing camera-based systems make use of Field Programmable Gate
Arrays (FPGAs) or dedicated application-specific systems on chip, which results in higher
overall cost. The algorithms considered above are designed in such a way that they can be
realized on a more generic hardware chip which makes use of programmable DSPs and
vectorised processing engines.

Evaluation of different popular hardware platforms was done, and the Texas Instruments TDA2x
was identified as the suitable one for the system. TI's TDA2x enables low-power, high-
performance vision-processing systems [5]. It features two C66x Digital Signal Processor (DSP)
cores and four programmable vision acceleration engines called EVEs. Making efficient use of
the different cores and creating a parallel processing framework is critical to achieving real-time
performance.

The following step-by-step iterative design and implementation methodology was adopted to
optimize the performance of the algorithms.
Figure 5. Optimization process
Detailed analysis was done on the strengths of the different types of cores, i.e., the DSP, the image processor (EVE), and the ARM Cortex. The following table gives the comparative performance of each core.

Table 3: Comparative speedup in different cores

Type of Operation      | Cortex A9 | C66x DSP | EVE
16-bit integer         | 1x        | 2.5x     | 8-12x
Single-precision float | 1x        | 5x       | -

As the table shows, EVE has a definite advantage for algorithms that involve fixed-point arithmetic, whereas the DSP offers the flexibility to support algorithms that require floating-point arithmetic to maintain the precision needed for higher accuracy.

Based on this analysis, the following partitioning was chosen for HOG vehicle detection. As described in section 2.2, the multiplication of the SVM weights with the extracted HOG features is a good candidate for the EVE core, since it is a linear multiply-accumulate loop. A floating-point to fixed-point conversion was performed, and the data was scaled to fit into 1 integer bit and 15 fractional bits (Q1.15) so that the multiplication and addition operations could run on the 16-bit EVE Vector Co-Processor (VCOP) engine. This improved the performance of this block by roughly 3x.
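As a hedged sketch of the Q1.15 conversion and multiply-accumulate step described above, emulated in plain NumPy rather than actual VCOP code (function names and the saturation handling are our assumptions):

```python
import numpy as np

Q = 15  # Q1.15: 1 integer/sign bit, 15 fractional bits

def to_q15(x):
    """Quantize floats in [-1, 1) to 16-bit Q1.15 integers, with saturation."""
    q = np.round(np.asarray(x, dtype=np.float64) * (1 << Q)).astype(np.int64)
    return np.clip(q, -(1 << Q), (1 << Q) - 1).astype(np.int16)

def q15_dot(weights_q, features_q):
    """Fixed-point SVM score: multiply-accumulate in a wide accumulator,
    then shift back to Q1.15, mirroring a MAC loop on a 16-bit vector engine."""
    acc = np.sum(weights_q.astype(np.int64) * features_q.astype(np.int64))
    return acc >> Q  # result back in Q1.15

# Compare the fixed-point score against the floating-point dot product
w = np.array([0.25, -0.5, 0.125])
f = np.array([0.5, 0.25, 0.75])
score_q15 = q15_dot(to_q15(w), to_q15(f))
score_f = float(np.dot(w, f))
```

Converting `score_q15` back with a divide by 2^15 and comparing it to `score_f` shows the quantization error staying small, which is why the Q1.15 representation preserves enough precision for the SVM decision while enabling the 16-bit vector engine.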
Figure 6. Partitioning of HOG in TI TDA2x processor
3. CONCLUSION
This paper discussed the development of, and initial test results from, an alternative forward collision avoidance (FCA) sensing approach that uses a forward-looking camera in place of a radar/lidar device as the sole Forward Collision Warning (FCW) sensing mechanism.
This paper also discussed state-of-the-art algorithms that detect and validate lead-vehicle candidates and compute the distances to those candidates in order to identify potential rear-end crash situations. An efficient implementation of these algorithms on low-cost embedded hardware was also described. Results from initial testing indicate that this system would be capable of meeting the New Car Assessment Program (NCAP) forward collision warning confirmation test requirements [7].
ACKNOWLEDGEMENTS
The author would like to thank Mr. Rajarama Nayak, head of TCS Innovation Labs, for his guidance and support.
REFERENCES
[1] Najm, W.G., Smith, J.D., and Yanagisawa, M., "Pre-Crash Scenario Typology for Crash Avoidance Research," National Highway Traffic Safety Administration.
[2] Kiefer, R., LeBlanc, D., Palmer, M., Salinger, J., Deering, R., and Shulman, M., "Development and Validation of Functional Definitions and Evaluation Procedures for Collision Warning/Avoidance Systems," National Highway Traffic Safety Administration.
[3] Dalal, N. and Triggs, B., "Histograms of Oriented Gradients (HOG) for Object Detection."
[4] Geiger, A., Ziegler, J., and Stiller, C., "StereoScan: Dense 3D Reconstruction in Real-time," IEEE Intelligent Vehicles Symposium, 2011.
[5] Scaramuzza, D. and Siegwart, R., "Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles," IEEE Transactions on Robotics, 24(5):1015-1026, 2008.
[6] Song, S., Chandraker, M., and Guest, C. C., "Parallel, Real-time Monocular Visual Odometry," ICRA, pp. 4698-4705, 2013.
[7] "Forward Collision Warning System Confirmation Test," US Department of Transportation, February 2013.
[8] "Empowering Automotive Vision with TI's Vision AccelerationPac," Texas Instruments.
[9] Rublee, E., Rabaud, V., Konolige, K., and Bradski, G., "ORB: An Efficient Alternative to SIFT or SURF," IEEE International Conference on Computer Vision (ICCV), 2011.
[10] Bardsley, D. and Li, B., "3D Reconstruction Using the Direct Linear Transform with a Gabor Wavelet Based Correspondence Measure," Technical Report.
[11] Kitt, B., Rehder, J., Chambers, A., Schonbein, M., Lategahn, H., and Singh, S., "Monocular Visual Odometry Using a Planar Road Model to Solve Scale Ambiguity."
Authors
MANOJ C R
Solution Architect
EIS Innovation Labs, Tata Consultancy Services
Email: manoj.cr@tcs.com
Manoj CR is currently working as a Solution Architect in the area of Advanced Driver Assistance Systems and Autonomous Driving. His primary responsibilities include conceptualizing and designing new ADAS features. Manoj has over 13 years of experience in new product development in video and imaging for embedded products in the automotive, consumer-devices, and industrial domains, and holds a Master's degree in Embedded Systems. He is an inventor on four patents in the area of vision-based detection systems.