Classifying and identifying individual vehicles within an accident image is considered a difficult problem. This research concentrates on accurately classifying and recognizing the vehicles involved in an accident. The aim is to provide a comparative analysis of vehicle accident detection. A number of network topologies are tested to arrive at convincing results, and a variety of metrics are used in the evaluation process to identify the best networks. The best two networks are then used with the faster region-based convolutional neural network (Faster R-CNN) and you only look once (YOLO) detectors to determine which network most reliably detects the location and type of the vehicle. In addition, two datasets are used in this research. The experimental results show that MobileNetV2 and ResNet50 achieved higher accuracy than the rest of the models, with 89.11% and 88.45% on the GAI dataset and 88.72% and 89.69% on the KAI dataset, respectively. The findings reveal that the ResNet50 base network for YOLO achieved higher accuracy than MobileNetV2 for YOLO and ResNet50 for Faster R-CNN, with 83%, 81%, and 79% on the GAI dataset and 79%, 78%, and 74% on the KAI dataset.
VEHICLE CLASSIFICATION USING THE CONVOLUTION NEURAL NETWORK APPROACH (Janak Trivedi)
We present vehicle detection and classification using the Convolution Neural Network (CNN) deep learning approach. Automatic vehicle classification for traffic surveillance video systems is a challenge for the Intelligent Transportation System (ITS) in building a smart city. In this article, three vehicle classes are considered: bike, car, and truck, with around 3,000 bike, 6,000 car, and 2,000 truck images. CNN can automatically learn and extract the distinguishing features of each vehicle class without manual feature selection. The accuracy of the CNN is measured in terms of the confidence values of the detected objects. The highest confidence value is about 0.99, obtained for the bike category. Automatic vehicle classification supports building electronic toll collection systems and identifying emergency vehicles in traffic.
Performance investigation of two-stage detection techniques using traffic lig... (IAES IJAI)
Using a camera to monitor an object or a group of objects over time is the process of object detection. It can be used for a variety of purposes, including security and surveillance, video communication, traffic light detection (TLD), and object detection from compressed video in public places. In recent times, object tracking has become a popular topic in computer science, particularly in the data science community, thanks to the use of deep learning (DL) in artificial intelligence (AI). DL, of which the convolutional neural network (CNN) is one technique, commonly relies on two-stage detection methods for TLD. Despite all the successes recorded in TLD through two-stage detection methods, no study has analyzed these methods experimentally to examine their strengths and weaknesses. Based on this need, this study analyses the applications of DL techniques in TLD. We implemented object detection for TLD using five two-stage detection methods with a traffic light dataset, a Jupyter notebook, and the sklearn libraries. We present the achievements of two-stage detection methods in TLD: by the standard performance metrics used, Faster R-CNN was the best in detection accuracy, F1-score, precision, and recall, with 0.89, 0.93, 0.83, and 0.90 respectively.
A hierarchical RCNN for vehicle and vehicle license plate detection and recog... (IJECE IAES)
Vehicle and vehicle license detection has achieved remarkable progress in recent years and is widely used in real traffic scenarios such as intelligent traffic monitoring systems, auto parking systems, and vehicle services. Computer vision has attracted much attention in vehicle and vehicle license detection, benefiting from image processing and machine learning technologies. However, the existing methods still have issues with vehicle and vehicle license plate recognition, especially in complex environments. In this paper, we propose a multi-vehicle detection and license plate recognition system based on a hierarchical region convolutional neural network (RCNN). Firstly, a higher-level RCNN is employed to extract vehicles from the original images or video frames. Secondly, the regions of the detected vehicles are input to a lower-level (smaller) RCNN to detect the license plate. Thirdly, the detected license plate is split into single numbers. Finally, the individual numbers are recognized by an even smaller RCNN. Experiments on a real traffic database validated the proposed method. Compared with the commonly used all-in-one deep learning structure, the proposed hierarchical method handles license plate recognition in multiple levels as sub-tasks, which enables the network size and structure to be adjusted according to the complexity of each sub-task. Therefore, the computation load is reduced.
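The cascade this abstract describes can be sketched as a chain of region detectors, each operating only on the crop produced by the previous stage. The stub detectors below are placeholders with fixed geometry, not the paper's trained RCNNs; they only illustrate how the hierarchy narrows the search region at each level.

```python
import numpy as np

def detect_vehicle(frame):
    """Stub for the top-level RCNN: return one vehicle box (x, y, w, h)."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)  # pretend the vehicle fills the centre

def detect_plate(vehicle_crop):
    """Stub for the mid-level RCNN: return a plate box inside the vehicle crop."""
    h, w = vehicle_crop.shape[:2]
    return (w // 4, 3 * h // 4, w // 2, h // 8)  # low, centred strip

def crop(img, box):
    x, y, bw, bh = box
    return img[y:y + bh, x:x + bw]

def pipeline(frame):
    vehicle = crop(frame, detect_vehicle(frame))
    plate = crop(vehicle, detect_plate(vehicle))
    # A real system would now split `plate` into single-character patches
    # and pass each one to the smallest RCNN for recognition.
    return vehicle.shape, plate.shape

frame = np.zeros((480, 640), dtype=np.uint8)
print(pipeline(frame))  # → ((240, 320), (30, 160))
```

Because each stage sees a much smaller input than the last, the later networks can be correspondingly smaller, which is the source of the computation savings the abstract claims.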
An Analysis of Various Deep Learning Algorithms for Image Processing (vivatechijri)
The various applications of image processing have given it a wide scope in data analysis. Machine learning algorithms provide a powerful environment for training models to identify the various entities in images and segment them accordingly. Although image classifiers like Support Vector Machines (SVM) or Random Forest algorithms do justice to the task, deep learning algorithms like Artificial Neural Networks (ANN) and their descendants, most notably the powerful Convolution Neural Network (CNN), can provide a new dimension to the image processing domain. CNNs offer far higher accuracy and computational power for classifying images and for segregating their entities into individual components of the image working region. The major focus is on the Region Convolution Neural Network (R-CNN) algorithm and how well it provides pixel-level segmentation through its improved successors, the Fast, Faster, and Mask R-CNN versions.
The document discusses six research papers related to vehicle detection using deep learning algorithms. The papers cover real-time vehicle detection using YOLOv5, object detection for autonomous vehicles using YOLOv4, an optimized YOLOv2 for tiny vehicle detection, vehicle detection from UAV imagery using Faster R-CNN and RetinaNet, instance segmentation using Mask R-CNN, and vehicle tracking with YOLO and DeepSORT. Accuracy rates between 67% and 94% are reported for different algorithms on datasets such as KITTI and BDD100K. Limitations mentioned include small datasets and shortcomings of the algorithms themselves.
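Detection accuracies like those quoted above are built on intersection-over-union (IoU): a predicted box counts as correct only if its IoU with a ground-truth box exceeds a threshold. A minimal sketch, with illustrative box values rather than any paper's data:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)           # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)       # union = sum - overlap

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # two partly overlapping boxes
```

With a typical threshold of 0.5, the pair above (IoU ≈ 0.143) would be scored as a miss even though the boxes overlap.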
Automatism System Using Faster R-CNN and SVM (IRJET Journal)
The document describes a proposed system to automatically manage vacant parking spaces using computer vision techniques. The system would use existing surveillance cameras installed in parking lots. It detects vehicles in images using a Faster R-CNN object detection model. This model uses a Region Proposal Network to quickly detect objects. An SVM classifier is then used to classify detected objects as free or occupied parking spaces. The goal is to assist drivers in finding available spaces more efficiently.
Autonomous driving system using proximal policy optimization in deep reinforc... (IAES IJAI)
Autonomous driving is one solution that can minimize and even prevent
accidents. In autonomous driving, the vehicle must know the surrounding
environment and act according to the prevailing conditions. We build an
autonomous driving system using proximal policy optimization (PPO) in
deep reinforcement learning, with PPO acting as an instinct for the agent to
choose an action. The instinct will be updated continuously until the agent
reaches the destination from the initial point. We use five sensory inputs for
the agent to accelerate, turn the steer, hit the brakes, avoid the walls, detect
the initial point, and reach the destination point. We evaluated our proposed
autonomous driving system in a simulation environment with several
branching tracks, reflecting a real-world setting. For our driving simulation
purpose in this research, we use the Unity3D engine to construct the dataset
(in the form of a road track) and the agent model (in the form of a car). Our
experimental results firmly indicate our agent can successfully control a
vehicle to navigate to the destination point.
Vision-Based Motorcycle Crash Detection and Reporting Using Deep Learning (IRJET Journal)
This document discusses developing a vision-based system to detect motorcycle crashes in real-time using deep learning. The researchers created a custom dataset of 398 images containing motorcycle accidents and used YOLOv4 for object detection. YOLOv4 was trained on the dataset and achieved 74% mAP and 60% precision, outperforming Faster R-CNN and YOLOv4-Tiny in accuracy and speed tests. The trained YOLOv4 model was then used to detect accidents in video streams and send alerts when crashes were identified. The system provides a potential real-time solution to detect motorcycle accidents using only vision.
Automatic Detection of Unexpected Accidents Monitoring Conditions in Tunnels (IRJET Journal)
The document describes a proposed system to automatically detect accidents and unexpected events in road tunnels using video footage from CCTV cameras. The system would use object detection and tracking technology, along with a Faster R-CNN deep learning model, to identify objects like vehicles, fires, and people in tunnel videos. It would monitor the movement and position of detected objects over time to identify accidents or other irregular events. If an accident is detected, a signal would be sent to alert authorities so they can respond quickly. The system aims to address the challenges of limited visibility and low-quality images from tunnel CCTV cameras.
Real time vehicle counting in complex scene for traffic flow estimation using... (Conference Papers)
This document proposes a multi-level convolutional neural network (mCNN) framework for real-time vehicle counting and classification in complex traffic scenes. The framework includes five modules: pre-processing, object detection using SSD and YOLO models, tracking using Kalman filters, vehicle classification using Inception network, and quantification to provide counting results. The framework is tested on over 585 minutes of highway video from four cameras, achieving 97.53% average counting accuracy and 90.2% weighted average accuracy for counting with classification.
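The tracking module above uses Kalman filters to keep vehicle identities stable between detections. A minimal 1-D constant-velocity Kalman filter shows the predict/update cycle; the matrices and measurements below are illustrative stand-ins (a real tracker would filter 2-D box centres), not the paper's configuration.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position += velocity
H = np.array([[1.0, 0.0]])               # we only observe position
Q = np.eye(2) * 1e-3                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([[0.0], [0.0]])             # state: [position, velocity]
P = np.eye(2)                            # state covariance

for z in [1.0, 2.1, 2.9, 4.2]:           # noisy positions of one vehicle per frame
    # predict step: advance the state with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # update step: blend the prediction with the new measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(float(x[0, 0]), float(x[1, 0]))
```

After four frames of measurements that advance by roughly one unit each, the filter's velocity estimate converges toward 1, which is what lets the tracker predict where a vehicle will be even when a detection is missed.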
VEHICLE DETECTION USING YOLO V3 FOR COUNTING THE VEHICLES AND TRAFFIC ANALYSIS (IRJET Journal)
This document discusses using YOLOv3 for vehicle detection and counting from video to analyze traffic. Video frames are used to identify moving vehicles, and background extraction is applied to each frame to detect and count them. YOLOv3 with a pre-trained model is used for object detection and for classifying vehicles into classes such as car, bus, and motorcycle. Counts are reported both overall and per vehicle type to analyze traffic levels, and the results are displayed using a pie chart.
Vehicle detection and classification are essential for advanced driver assistance systems (ADAS) and even traffic camera surveillance. Yet, it is challenging due to complex backgrounds, varying illumination intensities, occlusions, vehicle size, and type variations. This paper aims to apply you only look once (YOLO) since it has been proven to produce high object detection and classification accuracy. There are various versions of YOLO, and their performances differ. An investigation on the detection and classification performance of YOLOv3, YOLOv4, and YOLOv5 has been conducted. The training images were from common objects in context (COCO) and open image, two publicly available datasets. The testing input images were captured on a few highways in two main cities in Malaysia, namely Shah Alam and Kuala Lumpur. These images were captured using a mobile phone camera with different backgrounds during the day and night, representing different illuminations and varying types and sizes of vehicles. The accuracy and speed of detecting and classifying cars, trucks, buses, motorcycles, and bicycles have been evaluated. The experimental results show that YOLOv5 detects vehicles more accurately but slower than its predecessors, namely YOLOv4 and YOLOv3. Future work includes experimenting with newer versions of YOLO.
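Every YOLO version compared above relies on the same post-processing step, non-maximum suppression (NMS), to collapse overlapping detections of one vehicle into a single box. A greedy NumPy sketch with illustrative boxes and scores (not from the paper):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS. boxes: (N, 4) array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]          # highest-confidence box first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the kept box against all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]        # drop boxes overlapping the kept one
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the second box overlaps the first and is dropped
```

The IoU threshold is a key accuracy/recall trade-off: too low and nearby vehicles are merged; too high and duplicate boxes survive.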
The document describes the development of a traffic sign recognition system using machine learning techniques. It involves building a convolutional neural network (CNN) model to classify images of traffic signs into different categories. The front-end will utilize libraries like Pandas, NumPy, Matplotlib and OpenCV for data processing and visualization. Tkinter will be used for the graphical user interface. The back-end will use TensorFlow and Keras deep learning frameworks to develop the CNN model for traffic sign classification. The system aims to accurately detect and recognize traffic signs to help with autonomous driving.
Malaysian car plate localization using region-based convolutional neural network (journal BEEI)
Automatic car plate localization and recognition system is a system that identifies the car plate location and recognizes the characters on the car plate input images. Within the automated system, the car plate localization stage is the first stage and is the most crucial stage as the success rate of the whole system depends heavily on it. In this paper, a Malaysian car plate localization system using Region-based Convolutional Neural Network (R-CNN) is proposed. Using transfer learning on the AlexNet CNN, the localization was greatly improved achieving best precision and recall rate of 95.19% and 97.84% respectively. Besides, the proposed R-CNN was able to localize car plates in complex scenarios such as under occlusion.
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif... (CSCJournals)
In today's era of digitization and fast internet, many videos are uploaded to websites, so a mechanism is required to access these videos accurately and efficiently. Semantic concept detection achieves this task accurately and is used in many applications such as multimedia annotation, video summarization, indexing, and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from key-frames or shots of a video and their high-level interpretation as semantics. It automatically assigns labels to video from a predefined vocabulary, a task treated as a supervised machine learning problem. The support vector machine (SVM) emerged as the default classifier choice for this task, but recently the deep convolutional neural network (CNN) has shown exceptional performance in this area; CNNs, however, require large datasets for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features such as color moments, HSV histograms, wavelet transforms, grey-level co-occurrence matrices, and edge orientation histograms are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue. Two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using Mean Average Precision for the multi-label dataset. The performance of the proposed hybrid SVM-CNN framework is comparable to existing approaches.
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear... (ijtsrd)
This document summarizes a research paper that proposes a deep learning model for detecting and recognizing traffic lights using transfer learning. It begins with an introduction describing the challenges of autonomous vehicle perception and how deep learning can help overcome these challenges. It then reviews 9 other related works on traffic light detection using techniques like RFID, background subtraction, object tracking networks and HSV color modeling. Finally, it proposes a model using Faster R-CNN and Inception V2 for transfer learning to detect traffic lights in images and determine their state (red, yellow, or green). The model is trained on a dataset of Indian traffic signals.
Identification and classification of moving vehicles on road (Alexander Decker)
This document describes a system for automatically identifying and classifying vehicles in traffic video. The system uses image processing and machine learning techniques. Video frames are first processed to detect vehicles by subtracting background images. Features like width, length, area, and perimeter are then extracted from vehicle images. These features are input to a neural network classifier trained on sample vehicle data to classify vehicles as big or small. The system was tested on real traffic video from Saudi Arabia. It aims to automate traffic monitoring and reduce costs compared to manual methods.
A Transfer Learning Approach to Traffic Sign Recognition (IRJET Journal)
This document presents a study on traffic sign recognition using transfer learning with three pre-trained convolutional neural network models: InceptionV3, Xception, and ResNet50. The models were trained on the German Traffic Sign Recognition Benchmark dataset containing 43 classes of traffic signs. InceptionV3 achieved the highest test accuracy of 97.15% for traffic sign classification, followed by Xception at 96.79%, while ResNet50 performed poorly with only 60.69% accuracy. Transfer learning with InceptionV3 is shown to be an effective approach for traffic sign recognition tasks.
Classification and Detection of Vehicles using Deep Learning (ijtsrd)
Vehicle classification and license plate detection are important tasks in intelligent security and transportation systems. Traditional methods of vehicle classification and detection are highly complex and provide coarse-grained results because they suffer from limited viewpoints. Thanks to the latest achievements of deep learning, it has been successfully applied to image classification and object detection. This paper presents a method based on a convolutional neural network that consists of two steps: vehicle classification and vehicle license plate recognition. Several typical neural network modules have been applied in training and testing the vehicle classification and license plate detection model, such as CNN convolutional neural networks, TensorFlow, and Tesseract OCR. The proposed method can accurately identify the vehicle type, number plate, and other information, and it provides security and log details regarding vehicles using AI surveillance, guiding surveillance operators and assisting human resources. With the help of the original training dataset and an enriched testing dataset, the algorithm obtains an average accuracy of about 97.32% in the classification and detection of vehicles. By increasing the amount of data, the mean error and misclassification rate gradually decrease, so this deep learning algorithm has good superiority and adaptability. When compared to leading methods on challenging image datasets, our deep learning approach obtains highly competitive results. Finally, the paper proposes methods for improving the algorithm and discusses the development direction of deep learning in machine learning and artificial intelligence.
Madde Pavan Kumar | Dr. K. Manivel | N. Jayanthi, "Classification & Detection of Vehicles using Deep Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30353.pdf
Paper URL: https://www.ijtsrd.com/engineering/software-engineering/30353/classification-and-detection-of-vehicles-using-deep-learning/madde-pavan-kumar
ROAD SIGN DETECTION USING CONVOLUTIONAL NEURAL NETWORK (CNN) (IRJET Journal)
This document presents a method for detecting and recognizing road signs using convolutional neural networks (CNNs). The method uses the German Traffic Sign Recognition Benchmark (GTSRB) dataset to train and test a CNN model for classifying images into 43 sign categories. The images are preprocessed by resizing to 30x30 pixels and splitting the training set into train and validation portions. The CNN is implemented in TensorFlow and achieves over 95% accuracy on both the training and test sets. The document concludes the proposed method provides an accurate and robust approach for automatic road sign detection and recognition.
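The preprocessing this abstract describes, resizing each image to 30x30 and splitting the training set into train and validation portions, can be sketched in NumPy alone. The nearest-neighbour resize and random data below are self-contained stand-ins for what OpenCV/Keras utilities would normally do; they are not the paper's code.

```python
import numpy as np

def resize_nn(img, size=30):
    """Nearest-neighbour resize of a 2-D image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size     # source row for each output row
    cols = np.arange(size) * w // size     # source column for each output column
    return img[rows][:, cols]

def train_val_split(x, y, val_frac=0.2, seed=0):
    """Shuffle indices once, then carve off the first val_frac as validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_frac)
    val, train = idx[:n_val], idx[n_val:]
    return x[train], y[train], x[val], y[val]

images = np.random.rand(100, 48, 64)           # 100 dummy "sign" images
labels = np.random.randint(0, 43, size=100)    # 43 GTSRB classes
images = np.stack([resize_nn(im) for im in images])
x_tr, y_tr, x_va, y_va = train_val_split(images, labels)
print(x_tr.shape, x_va.shape)  # → (80, 30, 30) (20, 30, 30)
```

Shuffling before the split matters: GTSRB images are stored grouped by class, and a split without shuffling would leave some classes entirely out of the validation set.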
MINI TRAFFIC.pptx (rosava2497)
This document summarizes a mini-project presentation on traffic sign detection using deep learning. A team of three students used train-test splitting to divide a dataset and trained a sequential deep learning model for traffic sign recognition. Their goal was to develop an efficient system for automatically identifying and classifying traffic signs using convolutional neural networks, handling challenges such as occlusion and lighting conditions. They conducted a literature survey of past works using techniques like convolutional networks and deep neural networks that achieved state-of-the-art results on traffic sign datasets.
IRJET - Self Driving Car using Deep Q-Learning (IRJET Journal)
This document describes a study on developing a self-driving car prototype using deep reinforcement learning. The researchers used a deep Q-network (DQN) algorithm to train an agent to control a simulated car directly from sensor inputs like cameras. The DQN was able to successfully navigate the simulated environment and control the car without any knowledge of its dynamics. While the discrete state-space DQN achieved stable control, the researchers believe the work could be extended to use continuous action spaces to allow for varying speeds and improved reward functions. The document also provides background on deep learning, autonomous vehicles, and reviews related work applying reinforcement learning methods to autonomous driving tasks.
Virtual Contact Discovery using Facial Recognition (IRJET Journal)
The document describes a project that aims to use facial recognition as a means of contact discovery and metadata retrieval. The project seeks to optimize machine learning models for facial detection and verification in order to provide fast and accurate contact matching based on facial encodings. It outlines the objectives, scope, literature review, proposed system architecture and implementation details. The system would take facial landmarks and encodings to compare and rank the top 10 most similar encodings to identify matches from a database. The optimized model aims to reduce latency and improve accuracy for contact matching based on facial scans.
Traffic Congestion Detection Using Deep Learning (ijtsrd)
Although a huge amount of traffic surveillance video and images has been accumulated through daily monitoring, deep learning approaches have been underutilized in intelligent traffic management and control. Traffic images covering various illuminations, weather conditions, and a vast range of scenarios are considered and preprocessed to set up a proper training dataset. To detect traffic congestion, a network structure based on residual learning is proposed, pre-trained, and fine-tuned. The network is then transferred to the traffic application and retrained on the self-established training dataset to generate TrafficNet. The accuracy of TrafficNet in classifying congested and uncongested road states reaches 99% on the validation dataset and 95% on the testing dataset. The trained model is stored in cloud storage for easy access from anywhere. The proposed TrafficNet can be used for regional detection of traffic congestion in a large-scale surveillance system. Its effectiveness and efficiency are demonstrated by quick, highly accurate detection in the case study. The experimental trial could extend this successful application to traffic surveillance systems and has potential to enhance intelligent transport systems in the future.
Anusha C | Dr. J. Bhuvana, "Traffic Congestion Detection Using Deep Learning", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-2, February 2022. URL: https://www.ijtsrd.com/papers/ijtsrd49401.pdf
Paper URL: https://www.ijtsrd.com/computer-science/other/49401/traffic-congestion-detection-using-deep-learning/anusha-c
Satellite Image Classification with Deep Learning Surveyijtsrd
Satellite imagery is important for many applications, including disaster response, law enforcement, and environmental monitoring. These applications traditionally require manual identification of objects in the imagery; because the geographic area to be covered is very large and the analysts available to conduct the searches are few, automation is required. Yet traditional object detection and classification algorithms are too inaccurate and unreliable to solve the problem. Deep learning, part of a broader family of machine learning methods that have shown promise for automating such tasks, has achieved success in image understanding by means of convolutional neural networks. The problem of object and facility recognition in satellite imagery is considered. The system consists of an ensemble of convolutional neural networks and additional neural networks that integrate satellite metadata with image features. Roshni Rajendran | Liji Samuel "Satellite Image Classification with Deep Learning: Survey" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-2, February 2020,
URL: https://www.ijtsrd.com/papers/ijtsrd30031.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/30031/satellite-image-classification-with-deep-learning-survey/roshni-rajendran
1) The document proposes a vehicle and pedestrian detection method based on spatial pyramid pooling and attention mechanisms to improve YOLOv3.
2) It first uses spatial pyramid pooling to fuse local and global features to better detect targets of different sizes. It then adds an attention mechanism to enhance key features and remove redundant features.
3) Experimental results showed the proposed method achieved 91.4% mAP and 83.2% F1 score on the KITTI dataset, performing better than YOLOv3 in accuracy and speed.
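The spatial pyramid pooling step described above can be sketched in plain Python: one feature map is max-pooled over 1x1, 2x2, and 4x4 grids and the results are concatenated into a fixed-length vector, which is how local and global context get fused regardless of input size. This is a minimal illustrative sketch, not the paper's implementation.

```python
# Minimal spatial-pyramid-pooling sketch: max-pool a single 2D feature
# map at several grid sizes and concatenate into one fixed-length vector.
def spp(feature_map, levels=(1, 2, 4)):
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # cell bounds for an n x n pooling grid (at least 1 pixel each)
                r0, r1 = i * h // n, max((i + 1) * h // n, i * h // n + 1)
                c0, c1 = j * w // n, max((j + 1) * w // n, j * w // n + 1)
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1) for c in range(c0, c1)))
    return out
```

Whatever the input resolution, the output length is fixed (1 + 4 + 16 = 21 values here), which is what lets multi-scale features feed a fixed-size head.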
Vehicle Speed Estimation using Haar Classifier Algorithmijtsrd
An efficient traffic management system is needed on all kinds of roads, such as off-roads and highways. Though several laws exist and speed controllers have been attached to vehicles, speed limits vary from road to road, and traffic management still faces new challenges every day; it remains an active research area despite the number of proposals already made. Many methods have been proposed in computer vision and machine learning for object tracking. In this paper, vehicles are identified and detected in videos taken from a surveillance camera: identification is done using computer vision techniques, and detection uses a Haar cascade classifier. Detecting vehicles with machine learning and estimating their speed is a difficult but beneficial task. For the past few years, convolutional neural networks (CNNs) have been widely used in computer vision for vehicle detection and identification. The method tracks multiple objects in real time using dlib. P. Devi Mahalakshmi | Dr. M. Babu "Vehicle Speed Estimation using Haar Classifier Algorithm" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29482.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/29482/vehicle-speed-estimation-using-haar-classifier-algorithm/p-devi-mahalakshmi
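Once a vehicle is tracked across frames, the speed estimate itself reduces to simple arithmetic: pixel displacement times a calibrated metres-per-pixel factor times the frame rate. The function below is an illustrative sketch of that step; the calibration factor and frame rate are assumptions, not values from the paper.

```python
# Back-of-envelope speed estimate from tracked pixel displacement.
# metres_per_pixel must come from camera calibration for the scene.
def estimate_speed_kmh(pixel_displacement, metres_per_pixel, fps):
    metres_per_frame = pixel_displacement * metres_per_pixel
    return metres_per_frame * fps * 3.6  # m/s -> km/h
```

For example, a vehicle moving 10 pixels per frame at 0.05 m/pixel under a 25 fps camera is travelling 45 km/h.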
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...IRJET Journal
This document discusses implementing various machine learning algorithms for traffic sign detection and recognition. It compares the accuracies of KNN, multinomial logistic regression, CNN, and random forest algorithms on a German traffic sign dataset. For real-time traffic sign detection, it uses the YOLO v4 model. The document reviews several papers on traffic sign recognition using techniques like SVM, CNN, Capsule Networks and analyzes their reported accuracies. It then describes the proposed system for traffic sign recognition using two datasets and data preprocessing steps before applying the algorithms and evaluating their performance.
Development of depth map from stereo images using sum of absolute differences...nooriasukmaningtyas
This article proposes a framework for depth map reconstruction from stereo images. Fundamentally, this map provides important information commonly used in essential applications such as autonomous vehicle navigation, drone navigation, and 3D surface reconstruction. To develop an accurate depth map, the framework must be robust against the challenging regions of low texture, plain colour, and repetitive pattern in the input stereo images. Developing this map requires several stages, starting with matching cost calculation and followed by cost aggregation, optimization, and refinement. Hence, this work develops a framework combining the sum of absolute differences (SAD) with two edge-preserving filters to increase robustness against the challenging regions. The SAD is computed using a block matching technique to increase the efficiency of the matching process on low-texture and plain-colour regions, while the two edge-preserving filters increase accuracy on repetitive-pattern regions. Results on the Middlebury standard dataset show that the proposed method is accurate and capable of handling the challenging regions. The framework is also efficient, can be applied to 3D surface reconstruction, and is highly competitive with previously available methods.
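The SAD matching-cost stage can be sketched as follows: for each block in the left scanline, slide over candidate positions in the right scanline and pick the shift (disparity) with the lowest sum of absolute differences. This is a toy one-dimensional version with illustrative block size, shift range, and sample rows, not the paper's full pipeline.

```python
# Toy SAD block matching on a pair of scanlines.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left, right, block=3, max_disp=4):
    # left[i] is assumed to match right[i - d] for some shift d >= 0
    disps = []
    for i in range(0, len(left) - block + 1):
        patch = left[i:i + block]
        best = min(range(0, min(max_disp, i) + 1),
                   key=lambda d: sad(patch, right[i - d:i - d + block]))
        disps.append(best)
    return disps
```

On a synthetic pair where the left row is the right row shifted by two pixels, the recovered disparity is 2 wherever the block is unambiguous, which is exactly the failure mode (plain colour, repetition) the edge-preserving filters are meant to address.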
Model predictive controller for a retrofitted heat exchanger temperature cont...nooriasukmaningtyas
This paper aims to demonstrate the practical aspects of process control theory for undergraduate students at the Department of Chemical Engineering at the University of Bahrain. Both the ubiquitous proportional-integral-derivative (PID) controller and model predictive control (MPC), together with their auxiliaries, were designed and implemented in a real-time framework. This was realized by retrofitting an existing plate-and-frame heat exchanger unit that had been operated with an analog PID temperature controller. The upgraded control system consists of a personal computer (PC), a low-cost signal conditioning circuit, a National Instruments USB-6008 data acquisition card, and LabVIEW software. The LabVIEW control design and simulation modules were used to design and implement the PID and MPC controllers, whose performance was evaluated while controlling the outlet temperature of the retrofitted heat exchanger. The distinguishing feature of the MPC controller, its handling of input and output constraints, was observed in real time. From a pedagogical point of view, realizing process control theory through practical implementation was substantial in enhancing the students' learning and the instructor's teaching experience.
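The discrete PID loop at the heart of such a rig can be sketched in a few lines; the gains, setpoint, and first-order plant used below are illustrative stand-ins, not the heat exchanger's identified model.

```python
# Minimal discrete PID controller (the kind students implement in LabVIEW).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt          # accumulate integral term
        deriv = (err - self.prev_err) / self.dt  # finite-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Illustrative closed loop: drive a toy integrating plant to 60 degrees.
pid = PID(kp=2.0, ki=0.1, kd=0.0, dt=1.0)
temp = 20.0
for _ in range(200):
    temp += 0.1 * pid.step(60.0, temp)  # plant: temperature integrates input
```

After a couple of hundred steps the simulated outlet temperature settles at the setpoint, which is the behaviour the retrofitted rig lets students observe in real time.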
Similar to Accident vehicle types classification: a comparative study between different deep learning models
Automatic Detection of Unexpected Accidents Monitoring Conditions in TunnelsIRJET Journal
The document describes a proposed system to automatically detect accidents and unexpected events in road tunnels using video footage from CCTV cameras. The system would use object detection and tracking technology, along with a Faster R-CNN deep learning model, to identify objects like vehicles, fires, and people in tunnel videos. It would monitor the movement and position of detected objects over time to identify accidents or other irregular events. If an accident is detected, a signal would be sent to alert authorities so they can respond quickly. The system aims to address the challenges of limited visibility and low-quality images from tunnel CCTV cameras.
Real time vehicle counting in complex scene for traffic flow estimation using...Conference Papers
This document proposes a multi-level convolutional neural network (mCNN) framework for real-time vehicle counting and classification in complex traffic scenes. The framework includes five modules: pre-processing, object detection using SSD and YOLO models, tracking using Kalman filters, vehicle classification using Inception network, and quantification to provide counting results. The framework is tested on over 585 minutes of highway video from four cameras, achieving 97.53% average counting accuracy and 90.2% weighted average accuracy for counting with classification.
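The tracking-then-counting idea in the pipeline above can be reduced to a toy sketch: a detection that is not close to any live track starts a new track, and the number of tracks ever created is the vehicle count. This nearest-centroid matcher is a simplified stand-in for the framework's Kalman-filter tracking module; the distance threshold is illustrative.

```python
# Simplified tracking-by-nearest-centroid vehicle counter.
def count_vehicles(frames, max_dist=30.0):
    tracks = []  # last known centroid of each live track
    total = 0
    for detections in frames:
        unmatched = list(tracks)
        new_tracks = []
        for (x, y) in detections:
            best = min(unmatched, default=None,
                       key=lambda t: (x - t[0]) ** 2 + (y - t[1]) ** 2)
            if best is not None and \
                    (x - best[0]) ** 2 + (y - best[1]) ** 2 <= max_dist ** 2:
                unmatched.remove(best)  # continue an existing track
            else:
                total += 1              # vehicle seen for the first time
            new_tracks.append((x, y))
        tracks = new_tracks             # tracks not re-detected are dropped
    return total
```

Two vehicles moving smoothly through three frames are counted exactly twice, however many detections they generate.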
VEHICLE DETECTION USING YOLO V3 FOR COUNTING THE VEHICLES AND TRAFFIC ANALYSISIRJET Journal
This document discusses using YOLOv3 for vehicle detection and counting from video to analyze traffic. Moving vehicles are identified from video frames, with background extraction applied to each frame to detect and count vehicles. YOLOv3 with a pre-trained model is used for object detection and classification of vehicles into classes such as car, bus, and motorcycle. Counts are produced both overall and per vehicle type to analyze traffic levels, and the resulting breakdown is displayed as a pie chart.
Vehicle detection and classification are essential for advanced driver assistance systems (ADAS) and even traffic camera surveillance. Yet, it is challenging due to complex backgrounds, varying illumination intensities, occlusions, vehicle size, and type variations. This paper aims to apply you only look once (YOLO) since it has been proven to produce high object detection and classification accuracy. There are various versions of YOLO, and their performances differ. An investigation on the detection and classification performance of YOLOv3, YOLOv4, and YOLOv5 has been conducted. The training images were from common objects in context (COCO) and open image, two publicly available datasets. The testing input images were captured on a few highways in two main cities in Malaysia, namely Shah Alam and Kuala Lumpur. These images were captured using a mobile phone camera with different backgrounds during the day and night, representing different illuminations and varying types and sizes of vehicles. The accuracy and speed of detecting and classifying cars, trucks, buses, motorcycles, and bicycles have been evaluated. The experimental results show that YOLOv5 detects vehicles more accurately but slower than its predecessors, namely YOLOv4 and YOLOv3. Future work includes experimenting with newer versions of YOLO.
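Accuracy comparisons like the one above hinge on intersection-over-union (IoU): a detection counts as correct only if its box overlaps the ground truth enough. A minimal IoU for axis-aligned boxes given as (x1, y1, x2, y2):

```python
# Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

Identical boxes score 1.0, disjoint boxes 0.0; a typical evaluation accepts a detection when IoU exceeds a threshold such as 0.5 (the exact threshold is an evaluation choice, not stated in this abstract).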
The document describes the development of a traffic sign recognition system using machine learning techniques. It involves building a convolutional neural network (CNN) model to classify images of traffic signs into different categories. The front-end will utilize libraries like Pandas, NumPy, Matplotlib and OpenCV for data processing and visualization. Tkinter will be used for the graphical user interface. The back-end will use TensorFlow and Keras deep learning frameworks to develop the CNN model for traffic sign classification. The system aims to accurately detect and recognize traffic signs to help with autonomous driving.
Malaysian car plate localization using region-based convolutional neural networkjournalBEEI
An automatic car plate localization and recognition system identifies the car plate location and recognizes the characters on the plate in input images. Within the automated system, the localization stage comes first and is the most crucial, as the success rate of the whole system depends heavily on it. In this paper, a Malaysian car plate localization system using a Region-based Convolutional Neural Network (R-CNN) is proposed. Using transfer learning on the AlexNet CNN, localization was greatly improved, achieving best precision and recall rates of 95.19% and 97.84% respectively. The proposed R-CNN was also able to localize car plates in complex scenarios, such as under occlusion.
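The precision and recall figures quoted above are computed from true positives (correctly localized plates), false positives (proposed regions with no plate), and false negatives (missed plates):

```python
# Precision: of the regions the model proposed, how many were real plates.
# Recall: of the real plates, how many were found.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With illustrative counts of 90 true positives, 10 false positives, and 5 misses, precision is 0.90 and recall about 0.947.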
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif...CSCJournals
In today's era of digitization and fast internet, many videos are uploaded to websites, so a mechanism is required to access these videos accurately and efficiently. Semantic concept detection achieves this task and is used in many applications such as multimedia annotation, video summarization, indexing, and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area: semantic concept detection bridges the semantic gap between the low-level features extracted from a key-frame or shot of video and the high-level interpretation of the same as semantics, automatically assigning labels to video from a predefined vocabulary. The task is treated as a supervised machine learning problem. The support vector machine (SVM) emerged as the default classifier choice, but recently deep convolutional neural networks (CNNs) have shown exceptional performance in this area; CNNs, however, require large datasets for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features such as colour moments, HSV histogram, wavelet transform, grey-level co-occurrence matrix, and edge orientation histogram are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data-imbalance issue. The two classifiers are trained separately on all segments, and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using mean average precision on the multi-label dataset, and the performance of the proposed hybrid SVM-CNN framework is comparable to existing approaches.
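The score-fusion step can be sketched as a weighted average of the two branches' per-concept scores followed by a threshold; the equal weighting and 0.5 threshold below are illustrative assumptions, not the paper's tuned values.

```python
# Late fusion in the spirit of the hybrid model: average the per-concept
# scores of the SVM and CNN branches, keep concepts above a threshold.
def fuse(svm_scores, cnn_scores, w=0.5, threshold=0.5):
    fused = {c: w * svm_scores[c] + (1 - w) * cnn_scores[c]
             for c in svm_scores}
    return {c for c, s in fused.items() if s >= threshold}
```

A concept that only one weak branch fires on is suppressed, while a concept both branches agree on survives, which is the usual motivation for late fusion.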
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear...ijtsrd
This document summarizes a research paper that proposes a deep learning model for detecting and recognizing traffic lights using transfer learning. It begins with an introduction describing the challenges of autonomous vehicle perception and how deep learning can help overcome these challenges. It then reviews 9 other related works on traffic light detection using techniques like RFID, background subtraction, object tracking networks and HSV color modeling. Finally, it proposes a model using Faster R-CNN and Inception V2 for transfer learning to detect traffic lights in images and determine their state (red, yellow, or green). The model is trained on a dataset of Indian traffic signals.
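The colour-state step of traffic-light recognition (which some of the reviewed works handle via HSV modelling) can be caricatured as a hue threshold on the mean colour of the detected lamp region; the hue ranges below are illustrative assumptions, and a real system would operate on the detector's cropped region, not a single RGB triple.

```python
import colorsys

# Crude stand-in for the colour step: classify a mean RGB patch colour
# as red/yellow/green by its hue angle (thresholds are illustrative).
def light_state(r, g, b):
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    deg = h * 360
    if deg < 20 or deg > 340:
        return "red"
    if 20 <= deg <= 70:
        return "yellow"
    if 80 <= deg <= 170:
        return "green"
    return "unknown"
```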
Identification and classification of moving vehicles on roadAlexander Decker
This document describes a system for automatically identifying and classifying vehicles in traffic video. The system uses image processing and machine learning techniques. Video frames are first processed to detect vehicles by subtracting background images. Features like width, length, area, and perimeter are then extracted from vehicle images. These features are input to a neural network classifier trained on sample vehicle data to classify vehicles as big or small. The system was tested on real traffic video from Saudi Arabia. It aims to automate traffic monitoring and reduce costs compared to manual methods.
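The big/small decision on extracted features can be caricatured as a threshold on vehicle area; the paper trains a neural network on several features, so the single-feature rule and the threshold below are purely an illustrative stand-in (units and cutoff are assumptions).

```python
# Illustrative stand-in for the classifier: threshold on the area
# feature (width x length, in arbitrary calibrated units).
def classify_by_area(width, length, big_area=12.0):
    return "big" if width * length >= big_area else "small"
```

A trained classifier effectively learns such a decision boundary, but over all extracted features (width, length, area, perimeter) rather than area alone.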
A Transfer Learning Approach to Traffic Sign RecognitionIRJET Journal
This document presents a study on traffic sign recognition using transfer learning with three pre-trained convolutional neural network models: InceptionV3, Xception, and ResNet50. The models were trained on the German Traffic Sign Recognition Benchmark dataset containing 43 classes of traffic signs. InceptionV3 achieved the highest test accuracy of 97.15% for traffic sign classification, followed by Xception at 96.79%, while ResNet50 performed poorly with only 60.69% accuracy. Transfer learning with InceptionV3 is shown to be an effective approach for traffic sign recognition tasks.
Classification and Detection of Vehicles using Deep Learningijtsrd
Vehicle classification and license plate detection are important tasks in intelligent security and transportation systems. Traditional methods of vehicle classification and detection are highly complex and provide coarse-grained results because they suffer from limited viewpoints. Thanks to recent achievements of deep learning, it has been successfully applied to image classification and object detection. This paper presents a method based on a convolutional neural network, consisting of two steps: vehicle classification and vehicle license plate recognition. Several typical neural network modules have been applied in training and testing the vehicle classification and license plate detection model, such as convolutional neural networks (CNNs), TensorFlow, and Tesseract OCR. The proposed method can identify the vehicle type, number plate, and other information accurately. The model provides security and log details regarding vehicles through AI surveillance, guiding surveillance operators and assisting human resources. With the original training dataset and an enriched testing dataset, the algorithm obtains an average accuracy of about 97.32% in the classification and detection of vehicles. As the amount of data increases, the mean error and misclassification rate gradually decrease, so this deep learning based algorithm has good superiority and adaptability. Compared to leading methods on challenging image datasets, the approach obtains highly competitive results. Finally, the paper proposes methods for improving the algorithm and discusses the development direction of deep learning in machine learning and artificial intelligence. Madde Pavan Kumar | Dr. K. Manivel | N. Jayanthi "Classification & Detection of Vehicles using Deep Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30353.pdf Paper URL: https://www.ijtsrd.com/engineering/software-engineering/30353/classification-and-detection-of-vehicles-using-deep-learning/madde-pavan-kumar
ROAD SIGN DETECTION USING CONVOLUTIONAL NEURAL NETWORK (CNN)IRJET Journal
This document presents a method for detecting and recognizing road signs using convolutional neural networks (CNNs). The method uses the German Traffic Sign Recognition Benchmark (GTSRB) dataset to train and test a CNN model for classifying images into 43 sign categories. The images are preprocessed by resizing to 30x30 pixels and splitting the training set into train and validation portions. The CNN is implemented in TensorFlow and achieves over 95% accuracy on both the training and test sets. The document concludes the proposed method provides an accurate and robust approach for automatic road sign detection and recognition.
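The train/validation split step described above is the standard shuffle-then-partition routine; a stdlib-only sketch follows (the 80/20 ratio and fixed seed are illustrative choices, not necessarily the paper's).

```python
import random

# Shuffle the labelled samples reproducibly, then hold out a fraction
# for validation.
def train_val_split(samples, val_fraction=0.2, seed=42):
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]
```

Every sample lands in exactly one of the two partitions, and the fixed seed keeps the split reproducible across runs.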
MINI TRAFFIC.pptxrosava2497
This document summarizes a mini-project presentation on traffic sign detection using deep learning. A team of three students used deep learning methods, including sequential models and train-test splitting, to divide a dataset for training and testing a sequential model for traffic sign recognition. Their goal was to develop an efficient system for automatically identifying and classifying traffic signs using convolutional neural networks, handling challenges such as occlusions and lighting conditions. They conducted a literature survey of past work using techniques such as convolutional and deep neural networks that achieved state-of-the-art results on traffic sign datasets.
IRJET- Self Driving Car using Deep Q-LearningIRJET Journal
This document describes a study on developing a self-driving car prototype using deep reinforcement learning. The researchers used a deep Q-network (DQN) algorithm to train an agent to control a simulated car directly from sensor inputs like cameras. The DQN was able to successfully navigate the simulated environment and control the car without any knowledge of its dynamics. While the discrete state-space DQN achieved stable control, the researchers believe the work could be extended to use continuous action spaces to allow for varying speeds and improved reward functions. The document also provides background on deep learning, autonomous vehicles, and reviews related work applying reinforcement learning methods to autonomous driving tasks.
Control of a servo-hydraulic system utilizing an extended wavelet functional ...nooriasukmaningtyas
Servo-hydraulic systems have been extensively employed in various industrial applications. However, these systems are characterized by highly complex and nonlinear dynamics, which complicates the control design stage. In this paper, an extended wavelet functional link neural network (EWFLNN) is proposed to control the displacement response of the servo-hydraulic system. To optimize the controller's parameters, a recently developed optimization technique called the modified sine cosine algorithm (M-SCA) is exploited as the training method. The proposed controller has achieved remarkable results in tracking two different displacement signals and handling external disturbances. In a comparative study, the proposed EWFLNN controller attained the best control precision compared with other controllers, namely a proportional-integral-derivative (PID) controller, an artificial neural network (ANN) controller, a wavelet neural network (WNN) controller, and the original wavelet functional link neural network (WFLNN) controller. Moreover, compared to the genetic algorithm (GA) and the original sine cosine algorithm (SCA), the M-SCA showed better optimization results in finding the optimal values of the controller's parameters.
Decentralised optimal deployment of mobile underwater sensors for covering la...nooriasukmaningtyas
This paper presents the problem of sensing coverage of layers of the ocean in three-dimensional underwater environments. We propose distributed control laws to drive mobile underwater sensors to optimally cover a given confined layer of the ocean. Applying this algorithm, the mobile underwater sensors first adjust their depth to the specified level, then form a triangular grid across a given area, and afterwards randomly move to spread across that grid. These control laws rely only on local information; they are easily implemented and computationally effective, as they use simple consensus rules. Exchanging information only among neighbouring mobile sensors keeps communication across the whole network to a minimum and makes the algorithm a practicable option for undersea use. The efficiency of the presented control laws is confirmed via mathematical proof and numerical simulations.
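The triangular grid the sensors converge onto is a standard triangular lattice: alternate rows are offset by half the spacing, with a vertical step of spacing x sqrt(3)/2. A sketch of generating the target points over a rectangular patch (the area bounds and spacing are illustrative):

```python
import math

# Target points of a triangular lattice covering a width x height patch.
def triangular_grid(width, height, spacing):
    pts = []
    dy = spacing * math.sqrt(3) / 2  # vertical step between rows
    row = 0
    y = 0.0
    while y <= height:
        # odd rows shifted by half a spacing to form triangles
        x = (spacing / 2) if row % 2 else 0.0
        while x <= width:
            pts.append((x, y))
            x += spacing
        y += dy
        row += 1
    return pts
```

Each interior point then has six equidistant neighbours, which is why this arrangement gives optimal coverage density for a fixed sensing radius.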
Evaluation quality of service for internet of things based on fuzzy logic: a ...nooriasukmaningtyas
The development of the internet of thing (IoT) technology has become a major concern in sustainability of quality of service (SQoS) in terms of efficiency, measurement, and evaluation of services, such as our smart home case study. Based on several ambiguous linguistic and standard criteria, this article deals with quality of service (QoS). We used fuzzy logic to select the most appropriate and efficient services. For this reason, we have introduced a new paradigmatic approach to assess QoS. In this regard, to measure SQoS, linguistic terms were collected for identification of ambiguous criteria. This paper collects the results of other work to compare the traditional assessment methods and techniques in IoT. It has been proven that the comparison that traditional valuation methods and techniques could not effectively deal with these metrics. Therefore, fuzzy logic is a worthy method to provide a good measure of QoS with ambiguous linguistic and criteria. The proposed model addresses with constantly being improved, all the main axes of the QoS for a smart home. The results obtained also indicate that the model with its fuzzy performance importance index (FPII) has efficiently evaluate the multiple services of SQoS.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Smart monitoring system using NodeMCU for maintenance of production machinesnooriasukmaningtyas
Maintenance is an activity that helps to reduce risk, increase productivity, improve quality, and minimize production costs. The necessity for maintenance actions will increase efficiency and enhance the safety and quality of products and processes. On getting these conditions, it is necessary to implement a monitoring system used to observe machines' conditions from time to time, especially the machine parts that often experience problems. This paper presents a low-cost intelligent monitoring system using NodeMCU to continuously monitor machine conditions and provide warnings in the case of machine failure. Not only does it provide alerts, but this monitoring system also generates historical data on machine conditions to the Google Cloud (Google Sheet), includes which machines were down, downtime, issues occurred, repairs made, and technician handling. The results obtained are machine operators do not need to lose a relatively long time to call the technician. Likewise, the technicians assisted in carrying out machine maintenance activities and online reports so that errors that often occur due to human error do not happen again. The system succeeded in reducing the technician-calling time and maintenance workreporting time up to 50%. The availability of online and real-time maintenance historical data will support further maintenance strategy.
Design and simulation of a software defined networkingenabled smart switch, f...nooriasukmaningtyas
Using sustainable energy is the future of our planet earth, this became not only economically efficient but also a necessity for the preservation of life on earth. Because of such necessity, smart grids became a very important issue to be researched. Many literatures discussed this topic and with the development of internet of things (IoT) and smart sensors, smart grids are developed even further. On the other hand, software defined networking is a technology that separates the control plane from the data plan of the network. It centralizes the management and the orchestration of the network tasks by using a network controller. The network controller is the heart of the SDN-enabled network, and it can control other networking devices using software defined networking (SDN) protocols such as OpenFlow. A smart switching mechanism called (SDN-smgrid-sw) for the smart grid will be modeled and controlled using SDN. We modeled the environment that interact with the sensors, for the sun and the wind elements. The Algorithm is modeled and programmed for smart efficient power sharing that is managed centrally and monitored using SDN controller. Also, all if the smart grid elements (power sources) are connected to the IP network using IoT protocols.
Efficient wireless power transmission to remote the sensor in restenosis coro...nooriasukmaningtyas
In this study, the researchers have proposed an alternative technique for designing an asymmetric 4 coil-resonance coupling module based on the series-to-parallel topology at 27 MHz industrial scientific medical (ISM) band to avoid the tissue damage, for the constant monitoring of the in-stent restenosis coronary artery. This design consisted of 2 components, i.e., the external part that included 3 planar coils that were placed outside the body and an internal helical coil (stent) that was implanted into the coronary artery in the human tissue. This technique considered the output power and the transfer efficiency of the overall system, coil geometry like the number of coils per turn, and coil size. The results indicated that this design showed an 82% efficiency in the air if the transmission distance was maintained as 20 mm, which allowed the wireless power supply system to monitor the pressure within the coronary artery when the implanted load resistance was 400 Ω.
Grid reactive voltage regulation and cost optimization for electric vehicle p...nooriasukmaningtyas
Expecting large electric vehicle (EV) usage in the future due to environmental issues, state subsidies, and incentives, the impact of EV charging on the power grid is required to be closely analyzed and studied for power quality, stability, and planning of infrastructure. When a large number of energy storage batteries are connected to the grid as a capacitive load the power factor of the power grid is inevitably reduced, causing power losses and voltage instability. In this work large-scale 18K EV charging model is implemented on IEEE 33 network. Optimization methods are described to search for the location of nodes that are affected most due to EV charging in terms of power losses and voltage instability of the network. Followed by optimized reactive power injection magnitude and time duration of reactive power at the identified nodes. It is shown that power losses are reduced and voltage stability is improved in the grid, which also complements the reduction in EV charging cost. The result will be useful for EV charging stations infrastructure planning, grid stabilization, and reducing EV charging costs.
Topology network effects for double synchronized switch harvesting circuit on...nooriasukmaningtyas
Energy extraction takes place using several different technologies, depending on the type of energy and how it is used. The objective of this paper is to study topology influence for a smart network based on piezoelectric materials using the double synchronized switch harvesting (DSSH). In this work, has been presented network topology for circuit DSSH (DSSH Standard, Independent DSSH, DSSH in parallel, mono DSSH, and DSSH in series). Using simulation-based on a structure with embedded piezoelectric system harvesters, then compare different topology of circuit DSSH for knowledge is how to connect the circuit DSSH together and how to implement accurately this circuit strategy for maximizing the total output power. The network topology DSSH extracted power a technique allows again up to in terms of maximal power output compared with network topology standard extracted at the resonant frequency. The simulation results show that by using the same input parameters the maximum efficiency for topology DSSH in parallel produces 120% more energy than topology DSSH-series. In addition, the energy harvesting by mono-DSSH is more than DSSH-series by 650% and it has exceeded DSSHind by 240%.
Improving the design of super-lift Luo converter using hybrid switching capac...nooriasukmaningtyas
In this article, an improvement to the positive output super-lift Luo converter (POSLC) has been proposed to get high gain at a low duty cycle. Also, reduce the stress on the switch and diodes, reduce the current through the inductors to reduce loss, and increase efficiency. Using a hybrid switch unit composed of four inductors and two capacitors it is replaced by the main inductor in the elementary circuit. It’s charged in parallel with the same input voltage and discharged in series. The output voltage is increased according to the number of components. The gain equation is modeled. The boundary condition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM) has been derived. Passive components are designed to get high output voltage (8 times at D=0.5) and low ripple about (0.004). The circuit is simulated and analyzed using MATLAB/Simulink. Maximum power point tracker (MPPT) controls the converter to provide the most interest from solar energy.
Third harmonic current minimization using third harmonic blocking transformernooriasukmaningtyas
Zero sequence blocking transformers (ZSBTs) are used to suppress third harmonic currents in 3-phase systems. Three-phase systems where singlephase loading is present, there is every chance that the load is not balanced. If there is zero-sequence current due to unequal load current, then the ZSBT will impose high impedance and the supply voltage at the load end will be varied which is not desired. This paper presents Third harmonic blocking transformer (THBT) which suppresses only higher harmonic zero sequences. The constructional features using all windings in single-core and construction using three single-phase transformers explained. The paper discusses the constructional features, full details of circuit usage, design considerations, and simulation results for different supply and load conditions. A comparison of THBT with ZSBT is made with simulation results by considering four different cases
Power quality improvement of distribution systems asymmetry caused by power d...nooriasukmaningtyas
With an increase of non-linear load in today’s electrical power systems, the rate of power quality drops and the voltage source and frequency deteriorate if not properly compensated with an appropriate device. Filters are most common techniques that employed to overcome this problem and improving power quality. In this paper an improved optimization technique of filter applies to the power system is based on a particle swarm optimization with using artificial neural network technique applied to the unified power flow quality conditioner (PSO-ANN UPQC). Design particle swarm optimization and artificial neural network together result in a very high performance of flexible AC transmission lines (FACTs) controller and it implements to the system to compensate all types of power quality disturbances. This technique is very powerful for minimization of total harmonic distortion of source voltages and currents as a limit permitted by IEEE-519. The work creates a power system model in MATLAB/Simulink program to investigate our proposed optimization technique for improving control circuit of filters. The work also has measured all power quality disturbances of the electrical arc furnace of steel factory and suggests this technique of filter to improve the power quality.
Studies enhancement of transient stability by single machine infinite bus sys...nooriasukmaningtyas
Maintaining network synchronization is important to customer service. Low fluctuations cause voltage instability, non-synchronization in the power system or the problems in the electrical system disturbances, harmonics current and voltages inflation and contraction voltage. Proper tunning of the parameters of stabilizer is prime for validation of stabilizer. To overcome instability issues and get reinforcement found a lot of the techniques are developed to overcome instability problems and improve performance of power system. Genetic algorithm was applied to optimize parameters and suppress oscillation. The simulation of the robust composite capacitance system of an infinite single-machine bus was studied using MATLAB was used for optimization purpose. The critical time is an indication of the maximum possible time during which the error can pass in the system to obtain stability through the simulation. The effectiveness improvement has been shown in the system
Renewable energy based dynamic tariff system for domestic load managementnooriasukmaningtyas
To deal with the present power-scenario, this paper proposes a model of an advanced energy management system, which tries to achieve peak clipping, peak to average ratio reduction and cost reduction based on effective utilization of distributed generations. This helps to manage conventional loads based on flexible tariff system. The main contribution of this work is the development of three-part dynamic tariff system on the basis of time of utilizing power, available renewable energy sources (RES) and consumers’ load profile. This incorporates consumers’ choice to suitably select for either consuming power from conventional energy sources and/or renewable energy sources during peak or off-peak hours. To validate the efficiency of the proposed model we have comparatively evaluated the model performance with existing optimization techniques using genetic algorithm and particle swarm optimization. A new optimization technique, hybrid greedy particle swarm optimization has been proposed which is based on the two aforementioned techniques. It is found that the proposed model is superior with the improved tariff scheme when subjected to load management and consumers’ financial benefit. This work leads to maintain a healthy relationship between the utility sectors and the consumers, thereby making the existing grid more reliable, robust, flexible yet cost effective.
Energy harvesting maximization by integration of distributed generation based...nooriasukmaningtyas
The purpose of distributed generation systems (DGS) is to enhance the distribution system (DS) performance to be better known with its benefits in the power sector as installing distributed generation (DG) units into the DS can introduce economic, environmental and technical benefits. Those benefits can be obtained if the DG units' site and size is properly determined. The aim of this paper is studying and reviewing the effect of connecting DG units in the DS on transmission efficiency, reactive power loss and voltage deviation in addition to the economical point of view and considering the interest and inflation rate. Whale optimization algorithm (WOA) is introduced to find the best solution to the distributed generation penetration problem in the DS. The result of WOA is compared with the genetic algorithm (GA), particle swarm optimization (PSO), and grey wolf optimizer (GWO). The proposed solutions methodologies have been tested using MATLAB software on IEEE 33 standard bus system
Intelligent fault diagnosis for power distribution systemcomparative studiesnooriasukmaningtyas
Short circuit is one of the most popular types of permanent fault in power distribution system. Thus, fast and accuracy diagnosis of short circuit failure is very important so that the power system works more effectively. In this paper, a newly enhanced support vector machine (SVM) classifier has been investigated to identify ten short-circuit fault types, including single line-toground faults (XG, YG, ZG), line-to-line faults (XY, XZ, YZ), double lineto-ground faults (XYG, XZG, YZG) and three-line faults (XYZ). The performance of this enhanced SVM model has been improved by using three different versions of particle swarm optimization (PSO), namely: classical PSO (C-PSO), time varying acceleration coefficients PSO (T-PSO) and constriction factor PSO (K-PSO). Further, utilizing pseudo-random binary sequence (PRBS)-based time domain reflectometry (TDR) method allows to obtain a reliable dataset for SVM classifier. The experimental results performed on a two-branch distribution line show the most optimal variant of PSO for short fault diagnosis.
A deep learning approach based on stochastic gradient descent and least absol...nooriasukmaningtyas
More than eighty-five to ninety percentage of the diabetic patients are affected with diabetic retinopathy (DR) which is an eye disorder that leads to blindness. The computational techniques can support to detect the DR by using the retinal images. However, it is hard to measure the DR with the raw retinal image. This paper proposes an effective method for identification of DR from the retinal images. In this research work, initially the Weiner filter is used for preprocessing the raw retinal image. Then the preprocessed image is segmented using fuzzy c-mean technique. Then from the segmented image, the features are extracted using grey level co-occurrence matrix (GLCM). After extracting the fundus image, the feature selection is performed stochastic gradient descent, and least absolute shrinkage and selection operator (LASSO) for accurate identification during the classification process. Then the inception v3-convolutional neural network (IV3-CNN) model is used in the classification process to classify the image as DR image or non-DR image. By applying the proposed method, the classification performance of IV3-CNN model in identifying DR is studied. Using the proposed method, the DR is identified with the accuracy of about 95%, and the processed retinal image is identified as mild DR.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Accident vehicle types classification: a comparative study between different deep learning models
Indonesian Journal of Electrical Engineering and Computer Science
Vol. 21, No. 3, March 2021, pp. 1474~1484
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v21.i3.pp1474-1484
Journal homepage: http://ijeecs.iaescore.com
Accident vehicle types classification: a comparative study
between different deep learning models
Mardin Abdullah Anwer, Shareef M. Shareef, Abbas M. Ali
Software and Informatics Department, College of Engineering, Salahaddin University-Erbil, Iraq
Article Info
Article history: Received Sep 24, 2020; Revised Nov 27, 2020; Accepted Dec 16, 2020

ABSTRACT
Classifying and identifying the types of individual vehicles within an accident image are considered difficult problems. This research concentrates on accurately classifying and recognizing the vehicle accidents in question, with the aim of providing a comparative analysis of vehicle accident classification models. A number of network topologies are tested to arrive at convincing results, and a variety of metrics are used in the evaluation process to identify the best networks. The best two networks are then used with the faster region-based convolutional neural network (Faster RCNN) and you only look once (YOLO) detectors to determine which network most reliably detects the location and type of the vehicle. In addition, two datasets are used in this research. The experimental results show that MobileNetV2 and ResNet50 accomplished higher accuracy than the rest of the models, with 89.11% and 88.45% on the GAI dataset and 88.72% and 89.69% on the KAI dataset, respectively. The findings also reveal that ResNet50 as the base network for YOLO achieved higher accuracy than MobileNetV2 for YOLO and ResNet50 for Faster RCNN, with 83%, 81%, and 79% on the GAI dataset and 79%, 78%, and 74% on the KAI dataset, respectively.
Keywords: Accident recognition; Deep learning; GAI and KAI datasets; Transfer learning; Vehicle accident image classification
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mardin Abdullah Anwer
Software and Informatics Department, College of Engineering
Salahaddin University-Erbil, Iraq
Email: mardin.anwer@su.edu.krd
1. INTRODUCTION
The intelligent transportation system has become one of the civilizational developments that the world is witnessing. Monitoring and analyzing roads automatically, rather than relying solely on human observers, has become quite necessary; this breakthrough has brought into vision what is known as the smart city. An automatic accident recognition and reporting system is a crucial topic; however, it involves difficult problems such as multiple object tracking, object detection, and video surveillance, in addition to other issues related to real-time monitoring.
The main objective of this research is to find the best deep learning network for describing vehicle accident images. Since the use of roads by vehicles is on the increase, numerous studies on accident detection and classification have been conducted utilizing signals such as acoustic signals, where cross-correlation processing of radiated tire noise is used [1]. Radar [2] and ultrasonic signals [3] have been used to solve the problem of computational complexity in vehicle detection. Infrared thermal images taken from infrared thermal cameras are used to analyse road traffic flow in various circumstances, including conditions of poor visibility [4]. Many approaches using convolutional neural networks (CNNs) based on deep learning have been applied for vehicle classification and detection [5, 6]. At the time of an accident, vehicle data such as tag, picture, time, area, volume, and type are reported in written form by a police officer, while in developed countries such information is passed on using intelligent transportation systems [7]. Research in this area has
used machine learning technology for image classification. Five pre-trained convolutional neural networks have been employed to classify vehicle accident images, and the results achieved are compared. Other accomplished research has used machine vision for detecting damaged vehicles. This research examines several CNN detectors to see which one best classifies images of the accident.
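Comparing classifiers in this way typically relies on metrics computed from a confusion matrix. The following pure-Python sketch shows how overall accuracy and per-class precision and recall can be derived; the labels and predictions are illustrative toy values, not results from the paper's datasets:

```python
from collections import defaultdict

def confusion_counts(y_true, y_pred):
    """Count (true_label, predicted_label) pairs."""
    counts = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        counts[(t, p)] += 1
    return counts

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(counts, label):
    """Per-class precision and recall from the pair counts."""
    tp = counts[(label, label)]
    fp = sum(c for (t, p), c in counts.items() if p == label and t != label)
    fn = sum(c for (t, p), c in counts.items() if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative labels for two hypothetical accident categories
y_true = ["car-car", "car-bus", "car-car", "car-bus", "car-car"]
y_pred = ["car-car", "car-car", "car-car", "car-bus", "car-bus"]

counts = confusion_counts(y_true, y_pred)
acc = accuracy(y_true, y_pred)            # 0.6 for this toy example
p, r = precision_recall(counts, "car-car")
```

The same per-class computation, repeated over each accident category and averaged, gives the kind of summary figures reported in the abstract.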
The key contributions of this research are: 1) identifying the best pre-trained CNN, from a deep learning perspective, for classifying the categories of accident images; 2) identifying the best network for detecting the vehicle location and its type, whether it is a car, a motorcycle, a bus, or a truck; 3) creating two datasets of vehicle accident images in which the images are categorized according to the accident type. The categories can be car-car, car-bus, car-motorcycle, bus-motorcycle, and bus-truck accidents. This paper consists of five sections. Section one is the literature review, and section two covers the classification of accidents and vehicle detection CNNs. Section three sets up the experiment, whereas section four tackles the evaluation metrics and discusses the results. The last section is the conclusion, which sums up the main findings of the research and includes recommendations for further research.

It is common for vehicle accident detection and recognition systems to analyse and retrieve accident images incorrectly during accident image investigation. The problem may worsen if preprocessing stages such as conversion, scaling, and rotation are applied to the images.
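The preprocessing stages just mentioned can be sketched with basic array operations. The snippet below is a minimal NumPy illustration of grayscale conversion, nearest-neighbour scaling, and a 90-degree rotation on a toy image; the image size and transform parameters are arbitrary choices for demonstration:

```python
import numpy as np

def to_grayscale(img):
    """Convert an H x W x 3 RGB image to grayscale (luminosity weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def scale_nearest(img, factor):
    """Nearest-neighbour scaling by the given factor."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return img[rows][:, cols]

def rotate90(img):
    """Rotate the image 90 degrees counter-clockwise."""
    return np.rot90(img)

rgb = np.random.rand(4, 6, 3)      # toy 4x6 RGB image
gray = to_grayscale(rgb)           # shape (4, 6)
scaled = scale_nearest(gray, 2)    # shape (8, 12)
rotated = rotate90(gray)           # shape (6, 4)
```

In practice a library such as OpenCV or Pillow would supply interpolated scaling and arbitrary-angle rotation, but the shape bookkeeping is the same.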
Many algorithms are used to process and extract the significant features of vehicle accident images or to filter the background for easier analysis. This process is beneficial for low-quality images or for transfer learning. The preprocessing depends on whether the accident image is coloured or grey, as each case requires different methods and algorithms. The main methods used to enhance low-quality images are histogram equalization [8], wavelet transform [9], and homomorphic filtering [10]. The efficiency of these methods is limited under certain conditions, since they require a large number of parameters and thresholds to be adjusted. Accident image classification using deep learning has become an actively debated topic. The research in [11] offers a novel deep learning-based vehicle detection approach built on 2D deep belief networks (2D-DBN). The 2D-DBN design utilizes second-order planes rather than first-order vectors as input and uses bilinear projection to retain discriminative information when deciding the size of the deep architecture, which improves the success rate of vehicle detection. The work in [12] proposes a different, efficient vehicle detection and classification approach based on a convolutional neural network; the features extracted by this approach outperform those produced by conventional methods. Yi applies a pre-trained AlexNet model for detecting vehicles in image patches in wide area motion imagery (WAMI) investigation [13], while Kumeda proposes a deep convolutional neural network (CNN) which consists of four convolutional layers and three fully connected layers [14].
They conclude that their CNN model was effective for image classification and performed the task
with an accuracy of 94.4% on four target accident classes (Traffic-Net dataset). However, they did not apply
their CNN to different datasets or compare their results with other research. In [15], a single 2D end-to-end
fully convolutional network is introduced to predict the vehicle and the bounding boxes simultaneously.
Object detection takes images or video frames as inputs, while the outputs are lists where each element
represents the location and category of a candidate object [16]. Generally, object detection aims to
extract discriminative features that help distinguish the classes. The process of object recognition using a
convolutional neural network is accomplished in two phases: feature extraction and training [17]. In the first
phase, the common features are extracted using a base network which can be any pre-trained CNN. The
second phase is achieved by using models such as support vector machines [18], Neural Networks [19],
AdaBoost [20], histogram of flow gradient (HFG), hidden Markov model (HMM), and Gaussian mixture
model (GMM). Nevertheless, they did not compare their results with any deep learning models. Nowadays,
convolutional neural networks (CNNs) are used to extract features from area proposals or end-to-end object
detection without specifying the unique features of a certain class. The well-performed deep learning-based
approaches of object detection include region proposals (R-CNN) [21], Fast R-CNN [22], Faster R-CNN
[23], single shot multi-box detector (SSD) [24], and you only look once (YOLO) [25]. Mudumbi et al. combine
Faster R-CNN and MobileNet to detect different kinds of logos. They improve the region proposal network
(RPN) by selecting appropriate anchors, achieving 92.4% detection accuracy. Nonetheless, the
authors did not test a suitable resolution for generating proposals, which might have improved the
performance of the RPN considerably [26].
2. METHODOLOGY
2.1. CNN
Pre-trained deep CNNs for image-based classification were trained on a large number of images and can
classify images over a wide range of objects [27]; the features of many images have therefore already been
learned by the pre-trained CNN. The networks used in this research are GoogleNet, which contains 22 layers,
ISSN: 2502-4752. Indonesian J Elec Eng & Comp Sci, Vol. 21, No. 3, March 2021: 1474-1484
ResNet50, containing 50 layers, MobileNetV2, containing 53 layers, AlexNet, containing 25 layers, and
SqueezeNet with 18 layers. To classify the kind of vehicle in the images of the accident, these CNN models
have been fine-tuned using transfer learning. Thus, the five used CNNs have been retrained with images from
the SCGAI and SCKAI datasets. The deep learning toolbox in MATLAB was used to develop and train the
networks. For both datasets, 66% of the images have been employed for training and 34% for testing. As the
input image dimension of each pre-trained CNN is different, the analyzeNetwork function has been used to
examine the network architecture and change the input layer dimension to match the architecture of the pre-trained
CNN. For example, SqueezeNet and AlexNet require a 227-by-227 image size, while the MobileNetV2 and
ResNet50 input layer is 224-by-224. An augmented image datastore has been used to resize the images
automatically and to avoid overfitting of the network. The preprocessing step is shown in
the workflow of CNN image-based systems in Figure 1.
Figure 1. Workflow of CNN image-based systems
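As a concrete illustration of the resizing step (the paper used MATLAB's augmented image datastore; this is an equivalent sketch, not the authors' code), each network expects a fixed input size, so every image must be resized before training or inference. A minimal nearest-neighbour resize might look like:

```python
import numpy as np

# Per-network input dimensions reported in the paper
INPUT_SIZES = {
    "SqueezeNet": (227, 227),
    "AlexNet": (227, 227),
    "MobileNetV2": (224, 224),
    "ResNet50": (224, 224),
}

def resize_nearest(img, size):
    """Nearest-neighbour resize of an HxWxC image array (illustrative only;
    a real pipeline would use proper interpolation and augmentation)."""
    h, w = img.shape[:2]
    th, tw = size
    rows = (np.arange(th) * h) // th  # source row for each target row
    cols = (np.arange(tw) * w) // tw  # source column for each target column
    return img[rows][:, cols]
```

In practice the augmented datastore also applies label-preserving transformations (flips, translations) on the fly, which is what helps avoid overfitting.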
2.2. Accident classification
The final classification and learnable layers use the features extracted by the convolutional
layers for classification. These layers embrace the details of how to combine the features into a loss value,
class probabilities, and a predicted label [28]. To retrain the pre-trained deep CNNs, these layers were replaced
to be able to classify the new images. Another function that was helpful at this stage was the
findLayersToReplace function. This function finds the last layer that holds learnable weights and replaces it
with two new output layers, which are fully connected. No changes have been made to the parameters of
the rest of the layers, as the main goal of this research is to compare the accuracy of the networks; therefore,
those layers were frozen. Before training the network using the trainNetwork function, MATLAB gives the
freedom of changing the training options using the trainingOptions function. The main parameters which need to
be changed are InitialLearnRate, MiniBatchSize, and MaxEpochs. In transfer learning, InitialLearnRate is
preferably set to a very low value to slow down learning in the non-frozen transferred layers. Besides, there is no
need to perform the learning process for many epochs. The mini-batch size determines how many observations are
used at each iteration to evaluate the gradient of the loss function and update the weights.
One of the main contributions of this research is evaluating the performance of the networks and
comparing the accuracy results. The value of accuracy is between 0 and 1. The two most efficient classifier
networks will be used as base networks for the second stage, that is, detecting the type of the vehicle in the
accident image.
2.3. Vehicle detection and recognition
YOLO V2 [29] and faster region-based convolutional neural network (Faster R-CNN) [26] models have been used for
detecting multi-class vehicles in the images. The vehicles in the accident image datasets are trucks, buses,
motorcycles, and cars. The network architecture of YOLO consists of 24 convolutional layers; the last one outputs a
tensor with shape (7,7,1024), which is flattened and followed by two fully connected layers. The output size of the
fully connected layers is changed to four because we have four classes in the GAI dataset, and to three for the KAI
dataset. The final loss adds localization, confidence, and classification losses together. The function of the
two fully connected layers is to perform a linear regression to create boundary box predictions, and a final
prediction is made using a threshold on the box confidence scores. YOLO is an end-to-end regression problem and uses a
single convolutional network to predict the boundary boxes and the corresponding class probabilities. By
Accident vehicle types classification - a comparative study between different dee… (Mardin A. Anwer)
splitting the image into an S×S grid and taking m boundary boxes from each grid cell, the convolutional
neural network outputs a class probability and offset values for each boundary box. It then selects the boundary
boxes that have a class probability above a threshold value and uses them to locate the object within the
image [16]. However, YOLO does not perform well if the objects involved in the image are very small.
Figure 2 illustrates the architecture of the YOLO detection system.
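The thresholding step described above can be sketched as follows (a toy illustration of the mechanics, not the authors' code): each box keeps the product of its box confidence and class probability, and low-scoring boxes are discarded.

```python
def filter_boxes(predictions, threshold=0.5):
    """predictions: list of (box, box_confidence, class_probability) tuples.
    Keeps boxes whose combined score clears the threshold."""
    kept = []
    for box, confidence, class_prob in predictions:
        score = confidence * class_prob  # combined box score
        if score >= threshold:
            kept.append((box, score))
    return kept
```

A box with confidence 0.9 and class probability 0.8 survives (score 0.72), while one with confidence 0.4 and class probability 0.3 is dropped (score 0.12); in a full pipeline, non-maximal suppression would then merge overlapping survivors.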
Instead of the grid-of-boxes approach, the Faster R-CNN detector adds a region proposal network (RPN) to
generate regions of interest directly from the network. The RPN works by moving a sliding window over the
CNN feature map and, at each window, generating k potential boundary boxes and scores associated with how
good each of those boxes is expected to be. This k represents the common aspect ratios that candidate
objects could fit, called anchor boxes. For each anchor box, the RPN outputs a boundary box and a score per
position in the image. Consequently, the speed of the detection process is increased by using this method.
Figure 3 demonstrates the implementation of the Faster RCNN detection algorithm.
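The k anchors per sliding-window position can be sketched as below (a hypothetical generator; the scales and ratios are illustrative choices, and three scales times three aspect ratios give the nine candidates typically used):

```python
def make_anchors(cx, cy, scales, ratios):
    """Generate k = len(scales) * len(ratios) anchor boxes centred on
    (cx, cy), each as (x1, y1, x2, y2) with area scale**2 and
    width/height aspect ratio r."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * r ** 0.5
            h = s / r ** 0.5
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

The RPN then scores each of these k candidates at every feature-map position and regresses offsets from the anchor to the final boundary box.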
Figure 2. YOLOV2 detection system; the input is an accident image of the GAI dataset and the output is the same
image with a class probability and offset values for the bounding box
Figure 3. Faster-RCNN object detection system
As can be seen from the architecture, the region proposal network and the classifier use
convolutional layers that are trained jointly. The region proposal network behaves as an attention director,
determining the optimal boundary boxes across a wide range of scales and using nine candidate aspect ratios
to be evaluated for object classification. In other words, the RPN tells the unified network where to look [30].
The detector identified two vehicles in the accident image, a truck and a car, with confidence scores of 0.95
and 0.90, respectively.
3. EXPERIMENTAL SETUP
The experiments were conducted on a computer with the following specification: Intel(R) Core(TM) i7-8700T
CPU @ 2.40 GHz, 6 cores, 8 GB RAM, using MATLAB R2019a.
3.1. Dataset
In this research, vehicle accident images have been collected from different resources since there are
no public datasets for vehicle accidents that would align with our research methodology. The images have
generated two datasets. The first dataset is named general accident images (GAI) where images are taken
from DCD [31], the accident dataset [32], and online Google images. The second dataset is Kurdish accident
images (KAI), where images are taken directly from the Ministry of Interior of the Iraqi Kurdistan Regional
Government (KRG). The images are of accidents that occurred from 2015-2020 in different locations and on
roads such as highways and intersections. The raw dataset contains 2000 images as well as 100 videos of
accidents. Frames were captured from specific parts of the videos to be used as images of the accidents. After creating
the datasets, the procedure of preprocessing and categorizing the images according to the accidents and
vehicles in the images begins. In the GAI dataset, six categories are formed, which are: cars, car-bus, car-
motorcycle, bus-motorcycle, bus-truck, and car-truck. To maintain the balance of the image numbers in each
category, 118 images in each category are selected to have a total of 708 images. Most of the accidents in the
KAI dataset involve a single car, with a few that occurred as bus-bus, bus-truck, bus-car, and bus-motorcycle.
This could be due to the transportation system in place in the Kurdistan Region. As an example, bus
transportation between the cities does not exist in Kurdistan and buses operate locally only during the
daytime. As a result, only three categories are created in KAI, which are cars, car-motorcycle, and car-truck.
There are 130 images in each category, for a total of 390 images. Image augmentation has been used to augment
the datasets. Regarding the number of vehicles in each image, images of accidents between vehicles are
selected. Although the number of vehicles is not balanced in both datasets, this does not affect the purpose of
the research. Detecting the type of vehicle will enrich the classification process in terms of description. Table
1 illustrates the number of vehicles in each dataset. All the images are preprocessed, filtered, and scaled from
their original sizes to 224×224 pixels. Figure 4 illustrates some images and their categories.
Figure 4. Images from GAI and KAI after preprocessing: (a) bus-truck, (b) cars accidents, (c) car-bus, (d)
car-motorcycle, (e) bus-motorcycle, (f) car-truck accident images
Table 1. Number of vehicles in each dataset
Dataset   Cars   Buses   Motorcycles   Trucks
GAI       590    354     236           236
KAI       520    -       130           130
3.2. Classification results
After processing the images and adding them to the categories they each belong to, deep learning
was utilized for the classification stage. In computer vision, adopting deep pre-trained CNNs for
classification proves the ability to accomplish high precision on low-quality images and in real-time
applications. To assess the classification process of vehicle accidents, five pre-trained convolutional neural
network models are used: AlexNet, GoogleNet, ResNet50, MobileNetV2, and SqueezeNet.
Each of the listed models has a different architecture and parameters. The selection of the models depends on
the ones most used in image-based vehicle classification and detection research. Remarkable accuracies have
been achieved by MobileNetV2 and ResNet50: 89.11% and 88.45% on the GAI dataset, and 88.72% and 89.69% on
the KAI dataset, respectively. This results from the architecture and number of layers
discussed in section 2. Figure 5 compares the training process of the listed pre-trained CNNs on
the datasets. Figure 6 demonstrates examples of the prediction results for the classification of the datasets.
Table 1 illustrates the number of vehicles in each dataset and Table 2 illustrates the number of images used
for the experimental results.
Table 2. The number of images used for experimental results (CC: cars, CB: car-bus, CM: car-motorcycle,
BM: bus-motorcycle, BT: bus-truck, CT: car-truck accident images)
         Training dataset                    Testing dataset
Dataset  CC   CB   CM   BM   BT   CT  Total  CC   CB   CM   BM   BT   CT  Total
GAI      78   78   78   78   78   78  468    40   40   40   40   40   40  240
KAI      97   -    80   -    -    80  257    53   -    40   -    -    40  133
6. Indonesian J Elec Eng & Comp Sci ISSN: 2502-4752
Accident vehicle types classification - a comparative study between different dee… (Mardin A. Anwer)
1479
Figure 5. Comparison of the accuracy of training process between different pre-trained CNN using (a) GAI
dataset (b) KAI dataset
Figure 6. Prediction results for classification using (a) GoogleNet, (b) MobileNetV2, and (c) GoogleNet
predefined CNNs on the GAI and KAI datasets
3.3. Vehicle detection results
This stage locates the site of the accident and accurately recognizes the type of vehicle in the
accident images. The results from the deep-learning-based classification stage show that ResNet50 and
MobileNetV2 have higher accuracy than the rest of the models. Therefore, ResNet50 and MobileNetV2 are used
as base networks for both the Faster RCNN and YOLOV2 object detection networks. YOLO and RCNN
depend on the object annotation technique, and annotating object boundary boxes is necessary for the detection
process. In this research, a total of 2,196 boundary boxes were annotated using the Image Labeler app in
MATLAB for the GAI and KAI datasets.
a) Vehicle detection experiment with YOLO: YOLO is composed of two subnetworks: a feature extraction
network followed by a detection network. The feature extraction network is typically a convolutional neural
network, here ResNet50 or MobileNetV2 from section 3. Conversely, the detection subnetwork is a
small CNN compared to the feature extraction network and consists of a few convolutional layers and
layers specific to YOLO. The subnetwork divides the image into a grid and predicts the class of any
detected object by separating boundary boxes and associating class
probabilities. It utilizes a custom network based on the GoogLeNet architecture, using 8.52 billion
operations for a forward pass [33]. The advantage of this design is that predictions are made in a single
evaluation of the network. However, the disadvantage is the difficulty of detecting the
smallest objects as well as objects that are overlapping. Figure 7 displays the detection and prediction
results using the YOLOV2 network.
Figure 7. YOLOV2 detection and prediction score using (a) ResNet50: 2 vehicles available, 2 correctly
detected, 2 true positives; (b) MobileNetV2: 2 vehicles available, 1 correctly detected and 1 false negative
b) Vehicle detection experiment with Faster RCNN: The Faster R-CNN was the first detector tested for the
detection of apples and also obtained the highest precision on PASCAL VOC 2007 and 2012. In the COCO
challenge 2015 [23] and the ImageNet detection and localization tasks at ILSVRC 2016 [34], Faster R-CNN was
the basis of more than 125 proposed entries and was the foundation of the winners in several categories,
by sharing the fully convolutional layers between the region proposal network and the object classifier [35].
These layers are trained jointly. The region proposal network (RPN) behaves as an attention director,
determining the optimal boundary boxes across a wide range of scales and using nine candidate aspect
ratios to be evaluated for object classification [30]. In other words, the RPN employs anchor boxes for
object detection. Producing region proposals in the network is faster and better tuned to the
data. Figure 8 shows the detection and prediction scores using Faster RCNN.
Figure 8. Vehicle detection using (a) Faster RCNN ResNet50 on a GAI test image: 2 vehicles available,
1 correctly detected, 1 true positive and 1 false negative; (b) Faster RCNN ResNet50 on a KAI test image:
2 vehicles available, 1 correctly detected and 1 false negative
4. DISCUSSION
This research is about accident image classification and vehicle type detection processes. The
images used in this experimental research are of modest quality and possess good resolution. This is
consistent with the aim of the research, which is to compare the classification and detection performance of
the selected networks. In order to evaluate the performance of the detectors and compare their results, the
performance metrics listed in [36, 37] have been used. However, these metrics have been
modified for multiclass instead of binary classification. The common benchmarks used are boundary box
detection, mAP, log average miss rate, F1-measure, and accuracy.
8. Indonesian J Elec Eng & Comp Sci ISSN: 2502-4752
Accident vehicle types classification - a comparative study between different dee… (Mardin A. Anwer)
1481
a) Intersection over union (IoU): Detection takes the image as input to the network and outputs detected
boundary boxes. A detected box (BBdt) and a ground truth box (BBgt) form a potential match if they overlap
sufficiently, after multiscale detection and any necessary non-maximal suppression (NMS) has merged
nearby detections. The IoU score ranges from 0 to 1, and the closer the two boxes, the higher the
IoU score [37]. Figure 9 shows the estimated anchor box for the class bus.
Figure 9. (a) box area versus aspect ratio, (b) k-medoids with 4 clusters, (c) number of anchors versus mean
IoU for the vehicle bus
IoU = area(BBdt ∩ BBgt) / area(BBdt ∪ BBgt) (1)
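Equation (1) translates directly into code; a minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(bb_dt, bb_gt):
    """Intersection over union of a detected and a ground-truth box,
    each given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(bb_dt[0], bb_gt[0])
    y1 = max(bb_dt[1], bb_gt[1])
    x2 = min(bb_dt[2], bb_gt[2])
    y2 = min(bb_dt[3], bb_gt[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes are disjoint
    area_dt = (bb_dt[2] - bb_dt[0]) * (bb_dt[3] - bb_dt[1])
    area_gt = (bb_gt[2] - bb_gt[0]) * (bb_gt[3] - bb_gt[1])
    return inter / (area_dt + area_gt - inter)  # union = sum - intersection
```

Identical boxes give 1.0 and disjoint boxes give 0.0; a match threshold (commonly 0.5) decides whether a detection counts as a true positive.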
To compare the detection results of the models, we applied them to the same accident image, as
shown in Figure 10, which clearly shows that the YOLO ResNet50 BBgt and BBdt are closer than in the other
models. The red box represents the BBgt and the green box represents the BBdt with the confidence score. Figures
10 and 11 show the BBgt and BBdt on a car and truck accident after applying YOLOV2 MobileNetV2.
b) mAP: to evaluate the performance of object detectors, AP is the metric utilized, as it is a single-number
metric that captures both precision and recall and summarizes the precision-recall curve by averaging
precision over recall values from 0 to 1. Mean average precision (mAP) is the average of AP over the N
classes. It measures how well the model generates a bounding box [36].
mAP = (1/N) Σ_{i=1}^{N} AP_i (2)
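Equation (2) reduces to a simple mean over the per-class APs; for instance, the four GAI AP values reported for YOLOV2/ResNet50 in Table 3 average to 0.9725:

```python
def mean_average_precision(ap_per_class):
    """mAP: mean of the per-class average-precision values, as in (2)."""
    return sum(ap_per_class) / len(ap_per_class)

# Table 3, GAI, YOLOV2/ResNet50: bus, motorcycle, car, truck
gai_yolo_resnet50 = [0.99, 0.93, 0.98, 0.99]
```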
As shown in Figure 10, the precision of a given class in classification, known as the positive predictive
value, is given as the ratio of true positives (TP) to the total number of predicted positives (true positives +
false positives) [37].
Figure 10. BBgt and BBdt on a random image of GAI using (a) YOLO ResNet50, (b) Faster RCNN ResNet50,
(c) YOLOV2 MobileNetV2
Figure 11. (a) Car and truck accident image in KAI dataset (b) the BBgt of the vehicles (c) BBdt detection
and prediction score using YOLOV2 MobileNetV2 network
Precision = TP / (TP + FP) (3)
Similarly, the recall, also called the true positive rate or sensitivity, of a given class in classification
is defined as the ratio of TP to the total ground truth positives [36].
Recall (TPR) = TP / (TP + FN) (4)
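Both metrics follow directly from the TP/FP/FN counts; a minimal sketch (with toy counts, not the paper's data):

```python
def precision(tp, fp):
    """Positive predictive value, equation (3)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """True positive rate (sensitivity), equation (4)."""
    return tp / (tp + fn)
```

For example, 8 true positives with 2 false positives and 2 false negatives give a precision and recall of 0.8 each.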
c) Log average miss rate: this is used to summarize detector performance. It is computed by averaging the
miss rate at nine FPPI rates evenly spaced in log-space in the range 10^-2 to 10^0, because there is a limit on
the acceptable false positives per image (FPPI) [37]. Table 3 illustrates the results of average precision and
log average miss rate for all the classes in the datasets.
FPPI = FP/number of tested images (5)
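A sketch of this computation (one common implementation, assumed rather than taken from the paper): sample the miss rate at nine reference FPPI points evenly spaced in log-space between 10^-2 and 10^0, then average in log-space.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """fppi and miss_rate: arrays sampled along the detector's score sweep,
    with fppi in increasing order. Returns the log-average miss rate."""
    refs = np.logspace(-2.0, 0.0, num=9)  # nine points from 1e-2 to 1e0
    sampled = []
    for r in refs:
        below = np.where(fppi <= r)[0]
        # miss rate at the largest sampled FPPI not exceeding the reference
        sampled.append(miss_rate[below[-1]] if below.size else miss_rate[0])
    # geometric (log-space) mean, clamped to avoid log(0)
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))
```

A detector with a constant miss rate of 0.1 across the sweep has a log-average miss rate of 0.1, matching the intuition that lower is better.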
Table 3. Experimental results of average precision and log average miss rate for each vehicle in the datasets
                                      Average precision                 Log average miss rate
Dataset  CNN          Backbone CNN    Bus   Motorcycle  Car   Truck     Bus   Motorcycle  Car   Truck
GAI      YOLOV2       ResNet50        0.99  0.93        0.98  0.99      0.01  0.07        0.02  0.01
GAI      YOLOV2       MobileNetV2     0.93  0.61        0.89  0.84      0.07  0.33        0.13  0.16
GAI      Faster RCNN  ResNet50        0.71  0.58        0.65  0.84      0.30  0.39        0.42  0.17
KAI      YOLOV2       ResNet50        -     0.90        0.95  0.93      -     0.10        0.05  0.07
KAI      YOLOV2       MobileNetV2     -     0.63        0.87  0.73      -     0.37        0.13  0.27
KAI      Faster RCNN  ResNet50        -     0.59        0.67  0.78      -     0.41        0.33  0.22
d) Accuracy: the percentage of correctly predicted examples out of all predictions [32]. Table 4 shows
the number of correct detections of each vehicle in the test datasets, while Table 5 shows the accuracy results
on the testing image datasets.
Accuracy = (TP + TN) / (TP + FP + TN + FN) (6)
Table 4. Occurrence of correctly detected vehicles in the test images, shown as detected/available (NCDC:
number of correct detections-car, NCDB: number of correct detections-bus, NCDM: number of correct
detections-motorcycle, NCDT: number of correct detections-truck)
         YOLOV2/ResNet50              YOLOV2/MobileNetV2           Faster RCNN/ResNet50
Dataset  NCDC   NCDB   NCDM   NCDT    NCDC   NCDB   NCDM   NCDT    NCDC   NCDB   NCDM   NCDT
GAI      84/93  42/45  37/52  37/50   78/93  40/45  33/52  45/50   73/93  40/45  37/52  34/50
KAI      69/78  -      18/30  19/25   65/78  -      20/30  20/25   60/78  -      19/30  20/25
Table 5. Accuracy of Faster RCNN and YOLO on the test image datasets
Dataset  CNN Model    Backbone CNN  Total vehicles in test images  Vehicles detected  Accuracy
GAI      YOLOV2       ResNet50      240                            200                0.83
GAI      YOLOV2       MobileNetV2   240                            196                0.81
GAI      Faster RCNN  ResNet50      240                            184                0.76
KAI      YOLOV2       ResNet50      133                            106                0.79
KAI      YOLOV2       MobileNetV2   133                            105                0.78
KAI      Faster RCNN  ResNet50      133                            99                 0.74
5. CONCLUSION AND FUTURE WORK
This research has demonstrated that convolutional neural networks (CNNs) can achieve accurate
classification and detection results. As there are different kinds of CNN used for recognizing different images,
several CNNs were chosen to recognize images of vehicle accidents. To accomplish this goal, the KAI and GAI
datasets were proposed. Following analysis of the results, these datasets can now be recommended to fill a gap in
intelligent transportation systems, especially when it comes to images of vehicle accidents. To classify the images,
five deep learning models, GoogleNet, ResNet50, MobileNetV2, AlexNet, and SqueezeNet, have been compared.
The selected networks provided different classification rates (MobileNetV2 and ResNet50 achieved about 4% more
than the remaining networks). Many pre-trained network topologies are used for image classification, but for space
reasons, this research has chosen only five. The research findings revealed that using detector networks with deep
CNN topologies increases the accuracy of accident classification by determining the location and vehicle type in
the accident image. Although the training time of YOLOV2 was much less than that of Faster RCNN, the experiment
results also show that pre-trained ResNet50, used as a feature extractor with the YOLOV2 model, performs
much better at detecting the type of vehicles (by more than 6%) in both datasets, according to the accuracy equation.
Nonetheless, these results fall short of expectations, as the research aims to discover a network or method that can
describe accident images precisely and detect the vehicle type with full accuracy. Therefore, at the present time,
deep semantic segmentation is under research to help with the precise classification of object types or classes and
the checking of results. Hence our future research will focus on this issue.
REFERENCES
[1] J. F. Forren, D. Jaarsma, “Traffic monitoring by tire noise”, in : Proceeding of Conference on Intelligent
Transportation Systems, Boston, MA, USA, 1997, pp. 177-182, doi: 10.1109/ITSC.1997.660471.
[2] Duzdar, G. Kompa, “Applications using a low-cost baseband pulsed microwave radar sensor," IMTC 2001, in:
Proceedings of the 18th IEEE Instrumentation and Measurement Technology Conference. Rediscovering
Measurement in the Age of Informatics (Cat. No.01CH 37188), Budapest, vol. 1, pp. 239-243, 2001. Doi:
10.1109/IMTC.2001.928819.
[3] Y. Jo, I. Jung, “Analysis of vehicle detection with WSN-based ultra-sonic sensors”, Sensors. vol. 14, no. 8, pp.
14050-14069, 2014.
[4] Y. Iwasaki, M. Misumi, and T. Nakamiya, “Robust vehicle detection under various environments to realize road
traffic flow surveillance using an infrared thermal camera”, The Scientific World Journal, 947272, 2015.
[5] Y. Hans, T. Jiang, Y. Ma and C. Xu, “Pretraining convolutional neural networks for image-based vehicle
classification”. Advances in Multimedia, 2018.
[6] H. Leung, X. Chen, C. Yu, H. Liang, J. Wu, Y. Chen, “A Deep-Learning-Based Vehicle Detection Approach for
Insufficient and Nighttime Illumination Conditions”, Applied Sciences, vol. 9, no. 22, p. 4769, 2019.
[7] M. Anwer, S. Shareef, A. Ali, “Smart traffic incident reporting system in e-government,” in: Proceedings of the of
the ECIAIR 2019 European Conference on the Impact of Artificial Intelligence and Robotic, 2019. DOI:
10.34190/ECIAIR.19.061.
[8] Y. Wang, Z. Pan, “Image contrast enhancement using adjacent-blocks-based modification for local histogram
equalization,” Infrared Physics and Technology, vol. 86, pp. 59-65, 2017.
[9] W. He, Q. Wu, and S. Li, “Medical X-Ray image enhancement based on wavelet domain homomorphic filtering
and CLAHE,” in: Proceeding of the 2016 International Conference on Robots & Intelligent System (ICRIS), IEEE,
Zhangjiajie, China, pp. 249-254, 2016.
[10] B. Shalchian, H. Rajabi, and H. Soltanian-Zadeh, “Assessment of the wavelet transform in reduction of noise from
simulated PET images,” Journal of Nuclear Medicine Technology, vol. 37, no. 4, pp. 223-228, 2009.
[11] H. Wang, Y. Cai, and L. Chen, “A vehicle detection algorithm based on deep belief network,” The Scientific
World Journal, p. 647380, 2014.
[12] D. He, C. Lang, S. Feng, X. Du, and C. Zhang, “Vehicle detection and classification based on convolutional neural
network,” in: Proceedings of the 7th International Conference on Internet Multimedia Computing and Service,
pp. 1-5, 2015.
[13] M. Yi, F. Yang, E. Blasch, “Vehicle classification in WAMI imagery using deep network,” in: Proceedings of
the SPIE 9838: Sensors and Systems for Space Applications IX, 2016.
[14] B. Kumeda, Z. Fengli, A. Oluwasanmi, F. Owusu, M. Assefa and T. Amenu, “Vehicle accident and traffic
classification using deep convolutional neural networks,” in: Proceeding 2019 16th International Computer
Conference on Wavelet Active Media Technology and Information Processing, Chengdu, China, pp. 323-328, 2019,
doi: 10.1109/ICCWAMTIP47768.2019.9067530.
[15] B. Li, T. Zhang, and T. Xia, “Vehicle detection from 3D lidar using fully convolutional network,” in: Proceedings
of Robotics: Science and Systems, 2016.
[16] X. Huang, P. He, A. Rangarajan and S. Ranka, “Intelligent intersection: two-stream convolutional networks for
real-time near-accident detection in traffic video,” ACM Transactions on Spatial Algorithms and Systems
(TSAS), vol. 6, no. 2, pp. 1-28, 2020.
[17] X. Wen, L. Shao, W. Fang, and Y. Xue, “Efficient feature selection and classification for vehicle detection,”
IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 3, pp. 508-517, 2015.
[18] Y. Tang. “Deep learning using linear support vector machines,” In Workshop on Representation Learning, ICML,
Atlanta, USA, 2013.
[19] M. Egmont-Petersen, D. de Ridder, and H. Handels, “Image processing with neural networks: a review,”
Pattern Recognition, vol. 35, no. 10, pp. 2279-2301, 2002.
[20] P.A. Viola and M.J. Jones, “Rapid object detection using a boosted cascade of simple features,” Proc. CVPR,
no. 1, pp. 511-518, 2001.
[21] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and
semantic segmentation,” in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pp. 580-587, 2014.
[22] R. Girshick, “Fast R-CNN,” in: Proceedings of the IEEE International Conference on Computer Vision,
pp. 1440-1448, 2015.
[23] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal
networks,” in: Advances in Neural Information Processing Systems, pp. 91-99, 2015.
[24] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, “SSD: single shot multibox
detector,” 2016, doi: 10.1007/978-3-319-46448-0_2.
[25] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” arXiv preprint, 2017.
[26] T. Mudumbi, N. Bian, Y. Zhangand, F. Hazoum, “An Approach Combined the Faster RCNN and Mobilenet for
Logo Detection,” In Journal of Physics: Conference Series, vol. 1284, no. 1, p. 012072, 2019. IOP Publishing.
[27] Image category classification using deep learning,
https://www.mathworks.com/help/vision/examples/image-category-classification-using-deep-learning.html
(accessed 20 June 2020).
[28] H. El-Khatib, D. Popescu and, L. Ichim, “Deep Learning-Based Methods for Automatic Diagnosis of Skin
Lesions,” Sensors (Basel, Switzerland), vol. 20, no. 6, p. 1753, 2020, https://doi.org/10.3390/s20061753.
[29] Real time vehicle detection using YOLO, W. Mengxi, 2017,
https://medium.com/@xslittlegrass/almost-real-time-vehicle-detection-using-yolo-da0f016b43de
(accessed 20 June 2020).
[30] J. Espinosa, S. Velastin, J. Branch, “Vehicle detection using alex net and faster R-CNN deep learning models: a
comparative research,” In International Visual Informatics Conference 2017, pp. 3-15. Springer, Cham. DOI:
https://doi.org/10.1007/978-3-319-70010-6_1
[31] R. Vaishnavi, V. Lavanya and R. Shanta, “A novel approach to automatic road-accident detection using
machine vision techniques,” International Journal of Advanced Computer Science and Applications, vol. 7,
2016, doi: 10.14569/IJACSA.2016.071130. https://github.com/vaishnavi29/DCD
[32] Accident Data. https://www.kaggle.com/parthmehta15/accidentdata, 2020 (accessed 20 June 2020).
[33] YOLO_v2 and YOLO9000 Part 2, Divakar Kapil, 2018,
https://medium.com/adventures-with-deep-learning/yolo-v2-and-yolo9000-part-2-5aa24b6d32e2
(accessed 1 January 2021).
[34] ILSVRC2016, http://imagenet.org/challenges/LSVRC/2016/results, 2017 (accessed 20 June 2020).
[35] J. Espinosa, S. Velastin, J. Branch, “Motorcycle detection and classification in urban Scenarios using a model based
on Faster R-CNN,” 2018, 1808.02299.
[36] Evaluating performance of an object detection model: what is mAP? How to evaluate the performance of an
object detection model?,
https://towardsdatascience.com/evaluating-performance-of-an-object-detection-model-137a349c517b,
2020 (accessed 20 June 2020).
[37] Evaluating object detection models: guide to performance metrics,
https://manalelaidouni.github.io/manalelaidouni.github.io/Evaluating-Object-Detection-Models-Guide-to-Performance-Metrics.html,
2019 (accessed 20 June 2020).