Recent years have witnessed enormous growth in the amount of textual information available both on the web and in institutional document repositories. As a result, text mining has become highly prevalent, and the processing of textual information from such repositories is a focus of current research. Numerous cutting-edge text-mining applications are available, and classification-oriented text mining in particular has been gaining attention because it targets measures such as coverage and accuracy. Alongside the huge volume of data, user expectations have grown far beyond human capacity, so automated, competitive intelligent systems are essential for reliable text analysis. Toward this end, the present paper proposes an Intelligent Text Data Classification System (ITDCS), designed in the light of the biological nature of the genetic approach and able to acquire computational intelligence accurately. ITDCS first prepares structured data from the huge volume of unstructured data through its procedural steps and filter methods. It then classifies the text data into labelled classes using a KNN classifier operating on the best features selected by a genetic algorithm. In this process it focuses on adding intelligence to the classifier through the biological components of the genetic algorithm: the encoding strategy, the fitness function, and the genetic operators. Integrating all of these biological components in ITDCS significantly improves accuracy and reduces the misclassification rate in classifying text data.
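The encoding, fitness, selection, crossover, and mutation components described above can be sketched as follows. This is a minimal, self-contained illustration on synthetic data, not the paper's ITDCS implementation: the binary chromosome encodes a feature subset, the fitness is leave-one-out 1-NN accuracy (a simple stand-in for the paper's KNN classifier), and all data and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "document-term" data: 40 samples, 10 features; only features 0-2 are informative.
X = rng.normal(size=(40, 10))
y = (X[:, 0] + X[:, 1] + X[:, 2] > 0).astype(int)

def knn_accuracy(mask):
    """Fitness: leave-one-out 1-NN accuracy using only the features selected by `mask`."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask.astype(bool)]
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    return float((y[d.argmin(axis=1)] == y).mean())

# GA loop: binary encoding, tournament selection, one-point crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(30):
    fit = np.array([knn_accuracy(ind) for ind in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])   # tournaments
    cut = rng.integers(1, X.shape[1], len(pop) // 2)                 # crossover points
    children = parents.copy()
    for k, c in enumerate(cut):
        children[2*k, c:], children[2*k+1, c:] = parents[2*k+1, c:].copy(), parents[2*k, c:].copy()
    flip = rng.random(children.shape) < 0.05                         # mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([knn_accuracy(ind) for ind in pop])]
```

In a real text-classification setting the feature matrix would be TF-IDF weights over a vocabulary, and the fitness would use a held-out validation split rather than leave-one-out.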
Object tracking using motion flow projection for pan-tilt configuration (IJECE, IAES)
We propose a new object tracking model for a two-degrees-of-freedom mechanism. Our model uses a reverse projection from the camera plane to a world plane. The model takes advantage of the optic flow technique by re-projecting the flow vectors from image space into world space. A pan-tilt (PT) mounting system is used to verify the performance of our model and maintain the tracked object within a region of interest (ROI). This system contains two servo motors that enable a webcam to rotate along the PT axes. The PT rotation angles are estimated from a rigid transformation of the optic flow vectors, in which an idealized translation matrix followed by two rotation matrices around the PT axes is used. Our model was tested and evaluated using different objects with different motions. The results reveal that our model can keep the target object within a certain region in the camera view.
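The angle-estimation step can be sketched with a pinhole model: a pixel offset `d` at focal length `f` (in pixels) corresponds to an angle `atan(d / f)`, and the two rotation matrices about the pan and tilt axes re-centre the target ray. This is a hedged illustration under assumed sign conventions and an assumed focal length, not the paper's exact formulation.

```python
import numpy as np

def pan_tilt_update(flow_px, focal_px):
    """Pan/tilt angle increments (radians) that re-centre a target, given the
    mean optic-flow vector (dx, dy) in pixels and the focal length in pixels."""
    dx, dy = flow_px
    pan  = np.arctan2(dx, focal_px)   # rotation about the vertical (pan) axis
    tilt = np.arctan2(dy, focal_px)   # rotation about the horizontal (tilt) axis
    return pan, tilt

def rot_pan(a):   # rotation about the y (pan) axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_tilt(b):  # rotation about the x (tilt) axis
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Sanity check: the ray through an offset pixel, rotated back by pan then tilt,
# should align with the optical axis (0, 0, 1) up to small-angle error.
pan, tilt = pan_tilt_update((40.0, -25.0), focal_px=800.0)
ray = np.array([40.0, -25.0, 800.0])
ray /= np.linalg.norm(ray)
recentred = rot_tilt(tilt) @ rot_pan(-pan) @ ray
```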
Vehicle detection using background subtraction and clustering algorithms (TELKOMNIKA Journal)
Traffic congestion has risen worldwide as a result of growing motorization, urbanization, and population. Congestion reduces the efficiency of transportation infrastructure and increases travel time, air pollution, and fuel consumption. Intelligent Transportation Systems (ITS) address this problem by applying information technology and communication networks. One classical ITS option is video camera technology; in particular, video systems have been applied to collect traffic data, including vehicle detection and analysis. However, this application still has limitations when dealing with complex traffic and environmental conditions. This research therefore applies the Otsu, FCM, and K-means methods to video image processing and compares them. Otsu is a classical image-segmentation algorithm that separates pixels into foreground and background, whereas FCM (Fuzzy C-Means) and K-means cluster pixels without supervision. These methods are therefore promising for generating the MSE values that define a clearer threshold for background subtraction on moving objects under varying environmental conditions. The methods are compared using MSE and PSNR values: K-means gives the best MSE, and FCM gives a good PSNR. The application of clustering algorithms to detecting moving objects under various conditions is thus promising.
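The K-means variant of the pipeline, together with the MSE/PSNR comparison, can be sketched as follows. This is a hedged, NumPy-only illustration on a synthetic frame: two-cluster Lloyd's K-means separates bright "vehicle" pixels from the dark background, and the resulting mask is scored against ground truth. The frame, intensities, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic grey-level frame: dark background (~40) with a bright "vehicle" patch (~200).
frame = rng.normal(40, 5, size=(60, 80))
frame[20:40, 30:55] = rng.normal(200, 5, size=(20, 25))
truth = np.zeros((60, 80))
truth[20:40, 30:55] = 1

def kmeans_threshold(img, iters=10):
    """Two-cluster Lloyd's K-means on pixel intensities; returns a foreground mask."""
    c = np.array([img.min(), img.max()], dtype=float)    # initial cluster centres
    for _ in range(iters):
        lab = np.abs(img[..., None] - c).argmin(-1)      # assign pixels to centres
        c = np.array([img[lab == k].mean() for k in (0, 1)])
    return (lab == np.argmax(c)).astype(float)           # brighter cluster = foreground

mask = kmeans_threshold(frame)
mse = float(((mask - truth) ** 2).mean())
psnr = float(10 * np.log10(1.0 / mse)) if mse > 0 else np.inf
```

A real system would run this per frame against a background model; Otsu's method would replace the K-means step with a threshold maximizing between-class variance of the histogram.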
EVALUATION OF THE VISUAL ODOMETRY METHODS FOR SEMI-DENSE REAL-TIME (ACIJ journal)
Recent decades have witnessed a significant increase in the use of visual odometry (VO) in computer vision. It has also been used in a variety of robotic applications, for example on the Mars Exploration Rovers.
This paper first discusses two popular existing visual odometry approaches, LSD-SLAM and ORB-SLAM2, and improves the performance metrics of visual SLAM systems using the Umeyama method. We carefully evaluate both methods on three well-known datasets (KITTI, the EuRoC MAV dataset, and the TUM RGB-D dataset) to obtain the best results, and we graphically compare the results against evaluation metrics from different visual odometry approaches.
Second, we propose an approach that runs in real time with a stereo camera, combining an existing feature-based (indirect) method with an existing featureless (direct) method, performing accurate semi-dense direct image alignment and reconstructing an accurate 3-D environment directly from pixels that have an image gradient.
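The Umeyama alignment step used in trajectory evaluation has a standard closed form (Umeyama, 1991): the scale, rotation, and translation that best map an estimated trajectory onto ground truth in the least-squares sense. Below is a sketch of that closed form verified on a synthetic trajectory; the trajectory and transform values are assumptions, not results from the paper.

```python
import numpy as np

def umeyama(src, dst):
    """Closed-form similarity transform (s, R, t) minimising
    ||dst - (s R src + t)||^2 over corresponding 3-D points (N x 3 arrays)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Sanity check: recover a known scale, rotation, and translation.
rng = np.random.default_rng(2)
traj = rng.normal(size=(50, 3))
ang = 0.3
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
gt = 1.7 * traj @ Rz.T + np.array([1.0, -2.0, 0.5])
s, R, t = umeyama(traj, gt)
ate = np.sqrt((((s * traj @ R.T + t) - gt) ** 2).sum(1)).mean()
```

After alignment, the mean residual (the absolute trajectory error, ATE) is the usual metric reported when comparing VO systems across the KITTI, EuRoC, and TUM benchmarks.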
Lane and Object Detection for Autonomous Vehicle using Advanced Computer Vision (Yogesh, IJTSRD)
The vision of this project is to develop lane and object detection for an autonomous vehicle system that runs efficiently in normal road conditions, replacing the high-cost light-based LiDAR system with high-resolution cameras, advanced computer vision, and deep learning to provide an Advanced Driver Assistance System (ADAS). Detecting lane lines is a crucial task for any self-driving autonomous vehicle, so this project focuses on identifying lane lines on the road using OpenCV. OpenCV tools such as colour selection, region-of-interest selection, grey-scaling, Canny edge detection, and perspective transformation are employed. The project is modelled as an integration of two systems to solve the real-time implementation problem in autonomous vehicles. The first part is lane detection: advanced computer vision techniques detect the lane lines and command the vehicle to stay inside the lane markings. The second part is object detection and tracking: vehicles and pedestrians on the road are detected and tracked to build a clear understanding of the environment, so that a trajectory can be planned and generated to navigate the autonomous vehicle safely to its destination without any crashes. This is done by transfer learning with the Single Shot multibox Detection (SSD) algorithm and the MobileNet architecture. G. Monika | S. Bhavani | L. Azim Jahan Siana | N. Meenakshi, "Lane and Object Detection for Autonomous Vehicle using Advanced Computer Vision", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 5, Issue 3, April 2021. URL: https://www.ijtsrd.com/papers/ijtsrd39952.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/39952/lane-and-object-detection-for-autonomous-vehicle-using-advanced-computer-vision/g-monika
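The lane-detection stages listed above (grey-scaling, edge detection, region-of-interest masking, line fitting) can be sketched end to end. The snippet below is a hedged, NumPy-only stand-in on a synthetic image: a real ADAS pipeline would use OpenCV's cv2.Canny and cv2.HoughLinesP; the Sobel-style gradient, lower-half ROI, and polynomial line fit here are illustrative assumptions.

```python
import numpy as np

def edges(grey):
    """Sobel-style gradient magnitude, thresholded to a binary edge map."""
    gx = np.abs(np.diff(grey, axis=1, prepend=grey[:, :1]))
    gy = np.abs(np.diff(grey, axis=0, prepend=grey[:1]))
    g = gx + gy
    return g > 0.5 * g.max()

def roi_mask(shape):
    """Keep only the lower half of the frame, where lane lines appear."""
    m = np.zeros(shape, dtype=bool)
    m[shape[0] // 2:, :] = True
    return m

# Synthetic 100x100 "road" image with a bright lane line of slope 1.
img = np.zeros((100, 100))
rows = np.arange(100)
img[rows, rows] = 1.0

e = edges(img) & roi_mask(img.shape)
ys, xs = np.nonzero(e)
slope, intercept = np.polyfit(xs, ys, 1)   # fit the lane line x -> y
```

In a deployed system, separate fits for the left and right lane markings (often second-order polynomials after a perspective transform to a bird's-eye view) give the lane boundaries used for steering commands.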
Simulation for autonomous driving at Uber ATG (Yu Huang)
Testing Safety of SDVs by Simulating Perception and Prediction
LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World
Recovering and Simulating Pedestrians in the Wild
S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
SceneGen: Learning to Generate Realistic Traffic Scenes
TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors
GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving
AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles
Appendix: (Waymo)
SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio... (CSCJournals)
This paper presents the implementation of a novel technique for sensor-based path planning of autonomous mobile robots. The proposed method is based on finding free-configuration eigenspaces (FCE) in the robot actuation area. In using the FCE technique to find optimal paths for autonomous mobile robots, the underlying hypothesis is that in the low-dimensional manifolds of laser scanning data there lies an eigenvector which corresponds to the free-configuration space of the higher-order geometric representation of the environment. The vectorial combination of these eigenvectors at discrete scan frames manifests a trajectory, whose sum can be treated as a robot path. The proposed algorithm was tested on two different test-bed datasets: real data obtained from Navlab SLAMMOT, and data obtained from the real-time robotics simulation program Player/Stage. Performance analysis of the FCE technique was carried out against four existing path planning algorithms under several working parameters, namely the computation time needed to find a solution, the distance travelled, and the amount of turning required by the autonomous mobile robot. This study will enable readers to identify the suitability of a path planning algorithm under the working parameters that need to be optimized. All the techniques were tested in the real-time robotics software Player/Stage, and further analysis was done using the MATLAB mathematical computation software.
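The eigen-space idea can be illustrated on a single laser scan: treat the scan endpoints as a point cloud, eigen-decompose its covariance, and read the dominant eigenvector as the locally free direction. This is only a hedged interpretation of the eigenvector step; the actual FCE algorithm's construction over discrete scan frames differs, and the scan geometry below is an assumption.

```python
import numpy as np

# One synthetic 2-D laser scan over a 180-degree field of view:
# walls 2 m away everywhere except an open corridor straight ahead.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 2.0)
ranges[80:101] = 10.0                       # open corridor around 0 rad

# Scan endpoints in Cartesian coordinates (x forward, y lateral).
pts = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])

# Eigen-decompose the point-cloud covariance; the eigenvector of the largest
# eigenvalue points along the direction of greatest spread, i.e. the corridor.
cov = np.cov(pts.T)
w, v = np.linalg.eigh(cov)
free_dir = v[:, np.argmax(w)]
```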
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc... (Waqas Tariq)
For navigation in complex environments, a robot needs to reach a compromise between the need for efficient, optimized trajectories and the need to react to unexpected events. This paper presents a new sensor-based path planner which yields fast local or global motion planning able to incorporate new obstacle information. In the first step, the safest areas in the environment are extracted by means of a Voronoi diagram. In the second step, the Fast Marching Method is applied to the extracted Voronoi areas in order to obtain the path. The method combines map-based and sensor-based planning operations to provide a reliable motion plan, while operating at the sensor frequency. Its main characteristics are speed and reliability, since the map dimensions are reduced to an almost one-dimensional map representing the safest areas in the environment for moving the robot. In addition, the Voronoi diagram can be calculated in open areas and with obstacles of any shape, which allows the proposed planning method to be applied in complex environments where other Voronoi-based planning methods do not work.
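The first step (safe areas via a Voronoi diagram) can be sketched as follows. Obstacle points generate a Voronoi diagram whose edges are, by construction, maximally far from the obstacles; a shortest path is then computed along those edges. Note the substitution: plain Dijkstra stands in here for the paper's Fast Marching step, and the obstacle layout, workspace bounds, and start choice are assumptions.

```python
import heapq
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)
obstacles = rng.uniform(0, 10, size=(40, 2))
vor = Voronoi(obstacles)

# Adjacency list over finite Voronoi ridges that stay inside the workspace.
adj = {}
for a, b in vor.ridge_vertices:
    if a == -1 or b == -1:
        continue                                  # skip ridges going to infinity
    pa, pb = vor.vertices[a], vor.vertices[b]
    if ((0 <= pa) & (pa <= 10)).all() and ((0 <= pb) & (pb <= 10)).all():
        w = float(np.linalg.norm(pa - pb))
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))

def nearest_vertex(p):
    ids = list(adj)
    return ids[int(np.argmin([np.linalg.norm(vor.vertices[i] - p) for i in ids]))]

def reachable_from(src):
    seen, stack = {src}, [src]
    while stack:
        for v, _ in adj.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def dijkstra(src, dst):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, np.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

src = nearest_vertex(np.array([5.0, 5.0]))
goal = max(reachable_from(src),
           key=lambda i: np.linalg.norm(vor.vertices[i] - vor.vertices[src]))
path = dijkstra(src, goal)
```

Fast Marching would instead propagate a wavefront over the reduced map, giving smoother speed-aware paths, but the graph construction over Voronoi edges is the same.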
Motion planning and controlling algorithm for grasping and manipulating movin... (ijscai)
Much robotic grasping research has focused on stationary objects; for dynamic moving objects, researchers have used images captured in real time to locate the objects. However, this way of controlling the grasping process is quite costly, requiring substantial resources and image processing, so it is worthwhile to seek a simpler method. In this paper, we detail the requirements for manipulating a humanoid robot arm with 7 degrees of freedom to grasp and handle moving objects in a 3-D environment, with or without obstacles and without using cameras. We use the OpenRAVE simulation environment and a robot arm instrumented with the Barrett hand. We also describe a randomized planning algorithm, an extension of RRT-JT that combines exploration, using a Rapidly-exploring Random Tree, with exploitation, using Jacobian-based gradient descent, to instruct a 7-DoF WAM robotic arm to grasp a moving target while avoiding obstacles encountered along the way. We present a simulated scenario that starts by tracking a moving mug, then grasps it, and finally places it in a determined position, ensuring a maximum success rate in a reasonable time.
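The exploration half of RRT-JT can be sketched in a 2-D configuration space: random samples grow a tree around an obstacle toward a goal region. This is a hedged, minimal RRT, not the paper's planner: the exploitation step (Jacobian-based gradient descent in the 7-DoF arm's joint space) is approximated by simple goal-biased sampling, and all geometry and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
obstacle_c, obstacle_r = np.array([5.0, 5.0]), 1.5   # one circular obstacle
step = 0.5

nodes, parent = [start], {0: None}

def collision_free(p):
    return np.linalg.norm(p - obstacle_c) > obstacle_r

for _ in range(3000):
    # 10% goal bias stands in for the Jacobian-descent exploitation step.
    sample = goal if rng.random() < 0.1 else rng.uniform(0, 10, 2)
    i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))  # nearest node
    new = nodes[i] + step * (sample - nodes[i]) / (np.linalg.norm(sample - nodes[i]) + 1e-9)
    if collision_free(new):
        parent[len(nodes)] = i
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:    # reached the goal region
            break

# Trace the path back from the last node to the start.
path, k = [], len(nodes) - 1
while k is not None:
    path.append(nodes[k])
    k = parent[k]
path.reverse()
```

In the full RRT-JT setting the samples live in 7-D joint space and the goal test is a workspace condition (end-effector near the moving mug), with the Jacobian pulling extensions toward the target pose.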
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING (sipij)
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectory of a target or object of interest, i.e. to accurately locate a moving target in a video sequence and discriminate the target from non-targets in the sequence's feature space, so feature descriptors can have significant effects on such discrimination. In this paper, we use the basic structure of many trackers, which consists of three main components: object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our fourth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While the spatiogram stores moments of the pixel coordinates, the r-spatiogram computes region-based compactness of the distribution of the given feature in the image, capturing richer features to represent the objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGB-D tracking dataset, considering sequences with different challenges, and the obtained results demonstrate its effectiveness.
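A classical (second-order) spatiogram, the structure the r-spatiogram builds on, can be sketched directly: for each intensity bin it stores the pixel count plus the spatial mean and covariance of the pixels falling in that bin. The paper's r-spatiogram additionally measures region-based compactness per bin; that extension is not reproduced here, and the test image is an assumption.

```python
import numpy as np

def spatiogram(img, bins=8):
    """Second-order spatiogram of a grey image with values in [0, 1]:
    per bin, (pixel count, spatial mean, spatial covariance)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    idx = np.minimum((img * bins).astype(int), bins - 1)   # bin of each pixel
    out = []
    for b in range(bins):
        m = idx == b
        n = int(m.sum())
        if n == 0:
            out.append((0, np.zeros(2), np.zeros((2, 2))))
            continue
        pts = np.column_stack([ys[m], xs[m]]).astype(float)
        mu = pts.mean(0)
        cov = np.cov(pts.T) if n > 1 else np.zeros((2, 2))
        out.append((n, mu, cov))
    return out

# Bright square in the top-left corner of a dark image.
img = np.zeros((40, 40))
img[5:15, 5:15] = 0.9
sg = spatiogram(img)
n_bright, mu_bright, _ = sg[7]   # 0.9 falls in the last of 8 bins
```

Comparing two spatiograms (e.g. with a similarity that weights histogram overlap by the Gaussians defined per bin) is what lets the tracker distinguish candidates with identical histograms but different spatial layouts, which is the property the occlusion-handling component exploits.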
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
New approach to the identification of the easy expression recognition system ... (TELKOMNIKA Journal)
In recent years, facial recognition has been a major problem in computer vision, attracting great interest because of its use in many application domains and in image analysis. The extraction of facial descriptors is a very important step in facial recognition. In this article, we compare robust methods (SIFT, PCA-SIFT, ASIFT, and SURF) for extracting relevant facial information under different facial posture variations (open and closed mouth, glasses and no glasses, open and closed eyes). The simulation results show that the SURF detector outperforms the others in finding similar descriptors and in computation time. Our method is based on the normalization of descriptor vectors, combined with the RANSAC algorithm to discard outliers when computing the Hessian matrix, with the objective of reducing computation time. To validate the approach, we tested four facial image databases containing several modifications. The simulation results show that our method is more efficient than other detectors in speed of recognition and in determining similar points between two images of the same face, one belonging to the reference set and the other derived from it by different modifications. The method can be applied on a mobile platform to analyze the content of simple images, for example for driver fatigue detection, human-machine interaction, or human-robot interaction, using descriptors with properties important for good accuracy and real-time response.
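The RANSAC outlier-rejection step can be sketched independently of the descriptor choice. Given putative matches between two face images (synthesised here, since a full system would first obtain them from SURF or SIFT descriptors), RANSAC estimates a 2-D translation from minimal samples and discards matches inconsistent with it. The match geometry, noise levels, and thresholds below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, size=(60, 2))               # keypoints in image 1
true_t = np.array([7.0, -3.0])
dst = src + true_t + rng.normal(0, 0.3, src.shape)    # matched keypoints in image 2
dst[:15] = rng.uniform(0, 100, size=(15, 2))          # 15 gross mismatches (outliers)

def ransac_translation(a, b, iters=200, tol=1.5):
    """RANSAC for a pure translation model: one match is a minimal sample."""
    best_inliers = np.zeros(len(a), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(a))                      # draw a minimal sample
        t = b[k] - a[k]                               # hypothesised translation
        inl = np.linalg.norm(b - (a + t), axis=1) < tol
        if inl.sum() > best_inliers.sum():
            best_inliers = inl
    # Refine the translation on the consensus set.
    return (b[best_inliers] - a[best_inliers]).mean(0), best_inliers

t_est, inliers = ransac_translation(src, dst)
```

With a richer motion model (similarity or homography), the minimal sample grows to 2 or 4 matches, but the hypothesise-score-refine loop is identical.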
Digital Heritage Documentation Via TLS And Photogrammetry Case Study (theijes)
In the last decade, several traditional manual measurement techniques were used to document heritage buildings around the world; however, some of these techniques take a long time, often lack completeness, and may sometimes give unreliable information. In contrast, terrestrial laser scanning (TLS) surveys and photogrammetry have already been undertaken at several heritage sites in the United Kingdom and other European countries as a new method of documenting heritage sites. This paper focuses on using TLS and photogrammetry to document one of the important houses in Historic Jeddah, Saudi Arabia, the Nasif Historical House, as an example of Digital Heritage Documentation (DHD).
Data management in the service of industrial energy performance (Tanguy Mathon)
Pollutec 2016 conference session, presented by Tanguy Mathon, Director of blu.e by ENGIE.
Born of the ENGIE group's innovation programme, blu.e offers digital solutions to manufacturers for managing and optimizing energy.
blu.e continuously collects and analyses thousands of data points to propose precise settings to factories, supported by its network of experts, with the aim of maximizing their energy performance.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Path Planning Technique For Autonomous Mobile Robot Using Free-Configuratio...CSCJournals
This paper presents the implementation of a novel technique for sensor based path planning of autonomous mobile robots. The proposed method is based on finding free-configuration eigen spaces (FCE) in the robot actuation area. Using the FCE technique to find optimal paths for autonomous mobile robots, the underlying hypothesis is that in the low-dimensional manifolds of laser scanning data, there lies an eigenvector which corresponds to the free-configuration space of the higher order geometric representation of the environment. The vectorial combination of all these eigenvectors at discrete time scan frames manifests a trajectory, whose sum can be treated as a robot path or trajectory. The proposed algorithm was tested on two different test bed data, real data obtained from Navlab SLAMMOT and data obtained from the real-time robotics simulation program Player/Stage. Performance analysis of FCE technique was done with existing four path planning algorithms under certain working parameters, namely computation time needed to find a solution, the distance travelled and the amount of turning required by the autonomous mobile robot. This study will enable readers to identify the suitability of path planning algorithm under the working parameters, which needed to be optimized. All the techniques were tested in the real-time robotic software Player/Stage. Further analysis was done using MATLAB mathematical computation software.
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc...Waqas Tariq
For navigation in complex environments, a robot needs to reach a compromise between the need for having efficient and optimized trajectories and the need for reacting to unexpected events. This paper presents a new sensor-based Path Planner which results in a fast local or global motion planning able to incorporate the new obstacle information. In the first step the safest areas in the environment are extracted by means of a Voronoi Diagram. In the second step the Fast Marching Method is applied to the Voronoi extracted areas in order to obtain the path. The method combines map-based and sensor-based planning operations to provide a reliable motion plan, while it operates at the sensor frequency. The main characteristics are speed and reliability, since the map dimensions are reduced to an almost unidimensional map and this map represents the safest areas in the environment for moving the robot. In addition, the Voronoi Diagram can be calculated in open areas, and with all kind of shaped obstacles, which allows to apply the proposed planning method in complex environments where other methods of planning based on Voronoi do not work.
Motion planning and controlling algorithm for grasping and manipulating movin...ijscai
Many of the robotic grasping researches have been focusing on stationary objects. And for dynamic moving
objects, researchers have been using real time captured images to locate objects dynamically. However,
this approach of controlling the grasping process is quite costly, implying a lot of resources and image
processing.Therefore, it is indispensable to seek other method of simpler handling… In this paper, we are
going to detail the requirements to manipulate a humanoid robot arm with 7 degree-of-freedom to grasp
and handle any moving objects in the 3-D environment in presence or not of obstacles and without using
the cameras. We use the OpenRAVE simulation environment, as well as, a robot arm instrumented with the
Barrett hand. We also describe a randomized planning algorithm capable of planning. This algorithm is an
extent of RRT-JT that combines exploration, using a Rapidly-exploring Random Tree, with exploitation,
using Jacobian-based gradient descent, to instruct a 7-DoF WAM robotic arm, in order to grasp a moving
target, while avoiding possible encountered obstacles . We present a simulation of a scenario that starts
with tracking a moving mug then grasping it and finally placing the mug in a determined position, assuring
a maximum rate of success in a reasonable time.
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLINGsipij
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectories of a target or object of interest, i.e. accurately locate a moving target in a video sequence and discriminate target from non-targets in the feature space of the sequence. So, feature descriptors can have significant effects on such discrimination. In this paper, we use the basic idea of many trackers which consists of three main components of the reference model, i.e., object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our forth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While spatiogram contains some moments upon the coordinates of the pixels, r-spatiogram computes region-based compactness on the distribution of the given feature in the image that captures richer features to represent the objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The proposed method is evaluated on the Princeton RGBD tracking dataset considering sequences with different challenges and the obtained results demonstrate the effectiveness of the proposed method.
A Novel Background Subtraction Algorithm for Dynamic Texture ScenesIJMER
New approach to the identification of the easy expression recognition system ...TELKOMNIKA JOURNAL
In recent years, facial recognition has been a major problem in the field of computer vision, attracting a great deal of interest because of its use in different applications and domains of image analysis. The extraction of facial descriptors is a very important step in facial recognition. In this article, we compared robust methods (SIFT, PCA-SIFT, ASIFT and SURF) for extracting relevant facial information under different facial posture variations (open and closed mouth, glasses and no glasses, open and closed eyes). The simulation results show that the SURF detector outperforms the others in finding similar descriptors and in calculation time. Our method is based on the normalization of vector descriptors, combined with the RANSAC algorithm to discard outliers, in order to calculate the Hessian matrix with the objective of reducing calculation time. To validate our approach, we tested four facial image databases containing several modifications. The simulation results show that our method is more efficient than the other detectors in terms of recognition speed and in determining similar points between two images of the same face, one belonging to the test set and the other to a set altered by different modifications. This method can be applied on a mobile platform to analyze the content of simple images, for example to detect driver fatigue or to support human-machine and human-robot interaction, using descriptors with properties important for good accuracy and real-time response.
Digital Heritage Documentation Via TLS And Photogrammetry Case Studytheijes
In the last decade, several traditional manual measurement techniques were used to document heritage buildings around the world; however, some of these techniques take a long time, often lack completeness, and may sometimes give unreliable information. In contrast, terrestrial laser scanning (TLS) surveys and photogrammetry have already been undertaken at several heritage sites in the United Kingdom and other European countries as a new method of documenting heritage sites. This paper focuses on using TLS and photogrammetry to document one of the important houses in Historic Jeddah, Saudi Arabia, the Nasif Historical House, as an example of Digital Heritage Documentation (DHD).
Data management in the service of industrial energy performanceTanguy Mathon
Pollutec 2016 conference talk, presented by Tanguy Mathon, Director of blu.e by ENGIE.
Born of the ENGIE group's innovation programme, blu.e offers digital solutions to manufacturers for the management and optimisation of energy.
blu.e continuously collects and analyses thousands of data points to propose precise settings to factories, with the aim of maximising their energy performance, supported by its network of experts.
The leadership map - keynote presentationlarssudmann
Slides from the keynote presentation on the Leadership Map, a concept that draws on the ideas of customer mapping and leadership metrics for your leadership success.
An Analysis of Various Deep Learning Algorithms for Image Processingvivatechijri
The various applications of image processing have given it a wide scope in data analysis.
Machine learning algorithms provide a powerful environment for training models to
identify the various entities of images and segment them accordingly. While
classifiers such as Support Vector Machines (SVM) or Random Forests do justice to the task,
deep learning algorithms such as Artificial Neural Networks (ANN) and their descendants, most notably
the extremely powerful Convolutional Neural Network (CNN), can give a new dimension to the
image processing domain. They offer far higher accuracy and computational power for classifying images
and for segregating their various entities as individual components of the image working region. The major
focus will be on the Region-based Convolutional Neural Network (R-CNN) algorithm and how well it
provides pixel-level segmentation through its improved successors, the Fast, Faster and Mask R-CNN variants.
Speed Determination of Moving Vehicles using Lucas-Kanade AlgorithmEditor IJCATR
This paper presents a novel velocity estimation method for ground vehicles. The task is to automatically estimate
vehicle speed from video sequences acquired with a fixed mounted camera. The vehicle motion is detected and tracked across
frames using the Lucas-Kanade algorithm. The distance traveled by the vehicle is calculated from the movement of its centroid
over the frames, from which the speed of the vehicle is estimated. The average speed of cars is determined over multiple
frames. The application is developed using MATLAB and Simulink.
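Although the paper's implementation is in MATLAB/Simulink, the centroid-to-speed step it describes can be sketched in Python: pixel displacements of the tracked centroid are summed across frames and converted to a speed via the frame rate and an assumed metres-per-pixel calibration factor (the function and parameter names here are illustrative, not the paper's):

```python
import numpy as np

def estimate_speed(centroids, fps, metres_per_pixel):
    """Average speed in km/h from per-frame centroid positions (pixels).

    `metres_per_pixel` is an assumed calibration constant for the
    fixed camera; a real system would derive it from camera geometry."""
    centroids = np.asarray(centroids, dtype=float)
    # Pixel displacement between consecutive frames
    steps = np.linalg.norm(np.diff(centroids, axis=0), axis=1)
    distance_m = steps.sum() * metres_per_pixel
    elapsed_s = (len(centroids) - 1) / fps
    return (distance_m / elapsed_s) * 3.6  # m/s -> km/h
```

For example, a centroid moving 10 px per frame at 10 fps with 0.1 m/px works out to 10 m/s, i.e. 36 km/h.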
Vehicle License Plate Recognition (VLPR) is an important system for harmonious traffic. Moreover, this system is helpful in many fields and places, such as private and public entrances, parking lots, border control and theft control. This paper presents a new framework for a Sudanese VLPR system. The proposed framework uses Multi-Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projections are used for character segmentation, and the final recognition stage is based on an Artificial Immune System (AIS). A new dataset that contains samples of the current shape of Sudanese license plates is used for training and testing the proposed framework.
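The connected component analysis (CCA) stage mentioned in the abstract above can be sketched as a plain 4-connected labelling pass over a binary mask; a real VLPR pipeline would then filter the resulting components by plate-like size and aspect ratio. This is a minimal illustrative sketch, not the paper's implementation:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling of a binary mask via BFS.

    Returns a label image (0 = background) and the component count."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current
```

Library implementations (e.g. two-pass union-find labelling) are faster, but the output contract is the same: one integer label per connected blob.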
COMPARATIVE STUDY ON VEHICLE DETECTION TECHNIQUES IN AERIAL SURVEILLANCEIJCI JOURNAL
Aerial surveillance systems have become a major trend over the past decades, and aerial vehicle tracking techniques play a vital role, continually giving rise to promising methods. Such systems can be very handy in various applications, such as policing, traffic monitoring, natural disaster response and military use. They often cover large areas and provide a better perspective of moving objects. Moving vehicles may be detected from dynamic aerial imagery, wide-area motion imagery, or low-resolution and static images. Identifying objects from an aerial view is a difficult problem, given the camera angles and the mix of moving and motionless objects. This paper presents a comparative study of various vehicle detection and tracking approaches in aerial videos, with their experimental results and measures of working conditions, hit rate and false alarm rate.
An Efficient System for Forward Collision Avoidance Using Low Cost Camera & Em...aciijournal
Forward Collision Avoidance (FCA) systems in automobiles are an essential part of Advanced Driver
Assistance Systems (ADAS) and autonomous vehicles. These systems currently use radar as the main
sensor. The increasing resolution of camera sensors, the processing capability of hardware chipsets and
advances in image processing algorithms have recently been pushing camera-based features.
Monocular cameras face the challenge of accurate scale estimation, which limits their use as a stand-alone
sensor for this application. This paper proposes an efficient system that performs multi-scale object
detection, for which a patent has been granted, and efficient 3D reconstruction using a structure-from-motion (SFM)
framework. While the algorithms need to be accurate, they also need to operate in real time on low-cost
embedded hardware. The focus of the paper is to discuss how the proposed algorithms are designed in such
a way that they can provide real-time performance on low-cost embedded CPUs that make use of only
Digital Signal Processors (DSP) and vector processing cores.
Vehicle detection and tracking techniques a concise reviewsipij
Vehicle detection and tracking applications play an important role in civilian and military applications
such as highway traffic surveillance control, management and urban traffic planning. Vehicle detection
on roads is used for vehicle tracking, counting, measuring the average speed of each individual vehicle, traffic
analysis and vehicle categorization, and may be implemented under changing environments.
In this review, we present a concise overview of the image processing methods and analysis tools
used in building the previously mentioned applications involved in developing traffic surveillance
systems. More precisely, and in contrast with other reviews, we classify the processing methods into
three categories for greater clarity in explaining the traffic systems.
Vehicle detection is an important issue in driver assistance systems and self-guided vehicles, and includes
the two stages of hypothesis generation and verification. In the first stage, potential vehicles are hypothesized,
and in the second stage, all hypotheses are verified. The focus of this work is on the second stage. We
extract Pyramid Histogram of Oriented Gradients (PHOG) features from a traffic image as candidate
feature vectors for detecting vehicles. Principal Component Analysis (PCA) and Linear Discriminant Analysis
(LDA) are applied to these PHOG feature vectors in parallel as dimension reduction and feature selection tools.
After feature fusion, we use a Genetic Algorithm (GA) and cosine similarity-based K-Nearest
Neighbor (KNN) classification to improve the performance and generalization of the features. Our tests
show good classification accuracy of more than 97% correct classification on realistic on-road vehicle
images.
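The cosine similarity-based KNN classifier referred to in the abstract above can be sketched with NumPy; the paper pairs it with GA-selected PHOG features, whereas the data shapes and names here are purely illustrative:

```python
import numpy as np

def cosine_knn_predict(train_X, train_y, query, k=3):
    """Predict a label for `query` by majority vote among the k training
    vectors with the highest cosine similarity to it."""
    train_X = np.asarray(train_X, dtype=float)
    query = np.asarray(query, dtype=float)
    # Cosine similarity = dot product of L2-normalised vectors
    sims = train_X @ query / (
        np.linalg.norm(train_X, axis=1) * np.linalg.norm(query) + 1e-12
    )
    nearest = np.argsort(-sims)[:k]
    votes = np.asarray(train_y)[nearest]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

Cosine similarity ignores vector magnitude, which is often a sensible choice for histogram-style descriptors such as PHOG.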
Projection Profile Based Number Plate Localization and Recognition csandit
This paper proposes algorithms to localize vehicle number plates in natural background images, to segment the characters from the localized number plates, and to recognize the segmented characters. The reported system is tested on a dataset of 560 sample images captured against different backgrounds under various illuminations. The performance accuracy of the proposed system has been calculated at each stage, and is 97.1%, 95.4% and 95.72% for localisation & extraction, character segmentation and character recognition respectively. The proposed method is also capable of localising and recognising multiple number plates in images.
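The projection-profile idea behind the segmentation stage described above can be sketched briefly: summing foreground pixels down each column of a binarised plate gives a vertical profile, and runs of zero-valued columns act as separators between characters. This is a minimal illustrative sketch, with a hard zero threshold assumed:

```python
import numpy as np

def segment_characters(binary_plate):
    """Split a binarised plate image (1 = ink) into character column
    ranges using the vertical projection profile."""
    profile = binary_plate.sum(axis=0)  # foreground pixel count per column
    segments, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                    # a character run begins
        elif v == 0 and start is not None:
            segments.append((start, x))  # run ended at an empty column
            start = None
    if start is not None:                # run reaches the right edge
        segments.append((start, binary_plate.shape[1]))
    return segments
```

Real plates usually need a small noise threshold instead of strict zero, since binarisation rarely leaves columns perfectly empty.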
AUTOMATED MANAGEMENT OF POTHOLE RELATED DISASTERS USING IMAGE PROCESSING AND ...ijcsit
Potholes, though they seem inconsequential, may cause accidents resulting in loss of human life. In this paper, we present an automated system to efficiently manage the potholes in a ward by deploying geotagging and image processing techniques, overcoming the drawbacks associated with existing
survey-oriented systems. Image processing is used to identify target pothole regions in 2D
images using edge detection and morphological image processing operations. A method is developed to
accurately estimate the dimensions of the potholes from their images, analyze their area and depth, and estimate the quantity of filling material required, thereby enabling pothole attendance on a priority basis. This will further enable government officials to have a fully automated system for effectively managing pothole-related disasters.
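The area-and-volume estimation step described above reduces, once a binary pothole mask has been obtained, to scaling the foreground pixel count by a calibration factor and an assumed depth. The function and parameter names below are illustrative, not the paper's; in particular a uniform depth is assumed, whereas the paper analyzes depth per pothole:

```python
import numpy as np

def pothole_fill_estimate(mask, metres_per_pixel, depth_m):
    """Estimate pothole surface area (m^2) and fill-material volume (m^3)
    from a binary mask, a pixel-size calibration, and a measured depth."""
    area_m2 = mask.sum() * metres_per_pixel ** 2
    return area_m2, area_m2 * depth_m
```

With, say, a 10x10-pixel pothole at 1 cm/pixel and 5 cm depth, this yields 0.01 m^2 of surface and 0.0005 m^3 of material.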
AN INNOVATIVE RESEARCH FRAMEWORK ON INTELLIGENT TEXT DATA CLASSIFICATION SYSTEM USING GENETIC ALGORITHM
International Journal of Artificial Intelligence and Applications (IJAIA), Vol. 7, No. 6, November 2016
DOI: 10.5121/ijaia.2016.7604
CENTROG FEATURE TECHNIQUE FOR VEHICLE
TYPE RECOGNITION AT DAY AND NIGHT TIMES
Martins E. Irhebhude, Philip O. Odion and Darius T. Chinyio
Faculty of Science, Department of Computer Science, Nigerian Defence Academy,
Kaduna, Nigeria.
ABSTRACT
This work proposes a feature-based technique to recognize vehicle types during day and night times.
A support vector machine (SVM) classifier is applied to image histogram and CENsus Transformed
histogRam Oriented Gradient (CENTROG) features in order to classify vehicle types during the day and
night. Thermal images were used for the night-time experiments. Although thermal images suffer from low
image resolution, lack of colour and poor texture information, they offer the advantage of being unaffected
by high-intensity light sources such as vehicle headlights, which tend to render normal images unsuitable
for night-time image capture and subsequent analysis. Since contour is useful in shape-based
categorisation and is the most distinctive feature within thermal images, CENTROG is used to capture this
feature information within the experiments. The experimental results so obtained were
compared with those obtained by employing the CENsus TRansformed hISTogram (CENTRIST).
Experimental results revealed that CENTROG offers better recognition accuracies for both day-time and
night-time vehicle type recognition.
KEYWORDS
CENTROG, CENTRIST, Vehicle Type Recognition, Day-time Recognition, Night-time Recognition,
Classification
1. INTRODUCTION
A feature technique that can adequately recognise vehicle types at both day and night times has been a
subject of much vision-related research. Often, a feature technique that gives optimal recognition
accuracy in a day-time vehicle type recognition experiment fails to replicate the same accuracy
when applied to the same dataset, in the same locality, at night time. This is probably because the
features used in the day-time experiment are mostly appearance related, i.e. information about the
colour and texture appearance of the vehicle, whereas night-time features are mostly contour based.
As reported in [1], not much work has been done on night-time vehicle recognition in thermal images;
the authors there proposed the CENsus Transformed histogRam Oriented Gradient (CENTROG) feature
technique for vehicle recognition at night. In [2], Iwasaki et al. proposed a vehicle detection
mechanism for thermal images using the Viola-Jones detector, detecting the thermal energy reflection
area of tires as a feature. This paper therefore aims to contribute towards bridging this research gap
by proposing a technique that recognises vehicles at both day and night times.
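The census transform underlying both CENTRIST and CENTROG can be sketched compactly: each interior pixel is replaced by an 8-bit code with one bit per 3x3 neighbour, set when that neighbour's intensity is at least the centre's. The bit ordering and the >= comparison direction below are implementation choices of this sketch, not necessarily the paper's:

```python
import numpy as np

def census_transform(image):
    """3x3 census transform: encode each interior pixel as an 8-bit code,
    one bit per neighbour, set when neighbour >= centre."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        # Neighbour plane shifted by (dy, dx) relative to the centre plane
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out
```

Because the codes depend only on intensity orderings, not absolute values, they are robust to the illumination and contrast variations that make thermal imagery difficult, which is what motivates census-based features here. CENTRIST histograms these codes, while CENTROG combines the transform with oriented-gradient information.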
Also reported by [3] and documented by several other authors, vehicle at day time were detected
and recognised by using different approaches. A multiple feature techniques was proposed in [3]
for recognising vehicle types during the day time. The proposed feature was experimented on two
different view angles. A state-of-the-art video processing techniques for vehicle detection,
recognition and tracking was surveyed in [4]. A combination of salient geographical and shape
features of taillights and license plates extracted from the rear view of a vehicle was used in [5]
for the recognition of vehicle make and model. Extracting additional data from video stream,
besides the vehicle image itself helped improve vehicle recognition in a 3D image sets [6]. A
technique for road intersection classification was proposed in [7]. A comparative analysis
between Support Vector Machine (SVM) and deep neural network with sift features showed that
automatic feature extraction recorded higher accuracy compared with manual technique [8]. Li et
al in [9] showed that even in a congested road traffic condition an AND-OR graph (AOG) using
bottom-up inference can be used to represent and detect vehicle objects based on both front and
rear views. Similarly, [10] proposed the use of strong shadows as a feature to detect the presence
of vehicles in a congested environment. In a separate experiment, authors in [11] showed how
vehicles were partitioned into three parts; road, head and body, using a tripwire technique.
Subsequently Haar wavelet features extracted from each part with PCA performed on features
calculated to form 3 category PCA-subspaces. Further, Multiple Discriminate Analysis (MDA) is
performed on each PCA-subspace to extract features, which are subsequently trained to identify
vehicles using the Hidden Markov Model Expectation Maximisation (HMMEM) algorithm. In an
experiment by [12], a camera calibration tool was used on detected and track vehicle objects so as
to extract object parameters, which were then used for the classification of the vehicle into classes
of cars and non-cars. Vehicle objects were detected and counted using a frame differencing
technique with morphological operators: dilation and erosion [13]. In [14], the author added
simulated images to the initial dataset and applied PCA on each sub-region to reduce the feature sets
and computation time and hence speed up the processing cycle. In another classification task in [15],
the foreground image size feature was extracted with two-level dilation and fill morphological
operations, and classified into small, medium and large categories. In [16], the author proposed
Scale-Invariant Feature Transform (SIFT), the Canny edge detector, k-means clustering with
Euclidean matching distance metric for inter- and intra-class vehicle classification as an alternative
to the expensive Electronic Toll Collection (ETC) full-scale multi-lane free-flow traffic system. In
[17], a technique for traffic estimation and vehicle classification using region features with a
neural network (NN) classifier was proposed. A technique for rear-view-based vehicle
classification was proposed in [18], with an investigation of the Hybrid Dynamic Bayesian Network
(HDBN) for vehicle classification. HDBN was compared with three other classifiers for the
classification of known vs unknown classes and of four known classes of vehicles, using the tail lights
and vehicle dimensions relative to the dimensions of the license plate as feature sets.
Similarly, the width, the distance from the license plate and the angle between the tail light and the
license plate formed part of the eleven feature sets.
As we can observe from the literature, different feature-based approaches have been used for
recognising vehicle types at day and at night. Therefore, in this research work, we examine
features that are contour- and appearance-based, so as to design a technique that can recognise
vehicles both during the day and at night. This helps eliminate the bottleneck associated with
generating new feature sets for each dataset at a given time of day or view angle. A daytime video
dataset containing vehicles of varied colours and shapes will be used for the daytime experiment,
while a thermal video dataset will be explored for the night-time experiment.
International Journal of Artificial Intelligence and Applications (IJAIA), Vol. 7, No. 6, November 2016
The rest of the paper is organized as follows. Section 2 explains the Gaussian Mixture Model
(GMM) foreground/background segmentation algorithm. The CENsus TRansformed hISTogram
(CENTRIST) and CENTROG descriptors, along with their usage, are discussed in Section 3.
Section 4 provides an overview of the Support Vector Machine (SVM) classification algorithm.
An insight into the research approach is provided in Section 5, followed by the experiments and
performance evaluation in Section 6. The paper is concluded in Section 7.
2. GAUSSIAN MIXTURE MODEL (GMM)
According to [19, 20], a GMM is a parametric probability density function that is represented as a
weighted sum of Gaussian distributions. The GMM technique models each background pixel by a
mixture of K Gaussian distributions [21]. The weight of each mixture component represents the
proportion of time for which the pixel values stay unchanged in a scene. Probable background
colours stay longer and are more static than the foreground colours.
In [22], the recent history of each pixel, $X_1, \ldots, X_t$, is modelled by a mixture of K Gaussian
distributions. The probability of observing the current pixel value is defined as:

$$P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \cdot \eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (1)$$
where $K$ is the number of distributions, $\omega_{i,t}$ is an estimate of the weight (what portion of the data
is accounted for by this Gaussian) of the $i$-th Gaussian in the mixture at time $t$, $\mu_{i,t}$ is the mean
value of the $i$-th Gaussian in the mixture at time $t$, $\Sigma_{i,t}$ is the covariance matrix of the $i$-th
Gaussian in the mixture at time $t$, and $\eta$ is a Gaussian probability density function of the form:
$$\eta(X_t, \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} \, |\Sigma|^{1/2}} \, e^{-\frac{1}{2}(X_t - \mu_t)^T \Sigma^{-1} (X_t - \mu_t)} \qquad (2)$$
The covariance matrix is of the form:
$$\Sigma_{k,t} = \sigma_k^2 I \qquad (3)$$
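As an illustration of equations (1)-(3), the mixture density can be evaluated for a single grayscale pixel value. This is a minimal sketch, not the paper's implementation: the function name is our own, and we use the scalar (1-D) case with variances $\sigma_i^2$ in place of full covariance matrices, as equation (3) permits.

```python
import numpy as np

def gmm_pixel_probability(x, weights, means, variances):
    """Evaluate P(X_t) = sum_i w_i * N(x; mu_i, sigma_i^2) for a single
    1-D pixel value x, following equations (1)-(3) with scalar variances."""
    x = float(x)
    p = 0.0
    for w, mu, var in zip(weights, means, variances):
        norm = 1.0 / np.sqrt(2.0 * np.pi * var)      # (2*pi)^(n/2)|Sigma|^(1/2) for n=1
        p += w * norm * np.exp(-0.5 * (x - mu) ** 2 / var)
    return p
```

In the background-subtraction setting, a pixel would be declared foreground when this probability falls below a threshold for all background components.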
3. CENTRIST AND CENTROG DESCRIPTORS
As reported in [1], the Census Transformed Histogram for Encoding Sign Information (CENTRIST)
is a visual description technique, proposed by Wu et al. [23], that is used to detect
topological sections or scene categories. It extracts the structural properties from within an image
while filtering out the textural details. It employs the Census Transform (CT) technique, in which
an 8-bit value is computed in order to encode the signs of comparisons between neighbouring
pixels. According to [24], CT is a non-parametric local transform, described as follows:
Let P be a pixel, I(P) its intensity (usually an 8-bit integer), and N(P) the set of pixels in some
square neighbourhood of diameter d surrounding P. All non-parametric transforms depend upon
the comparative intensities of P versus the pixels in the neighbourhood N(P).
Define $\xi(P, P')$ to be 1 if $I(P') < I(P)$ and 0 otherwise.
$R_\tau(P)$ maps the local neighbourhood surrounding a pixel $P$ to a bit string representing the set of
neighbouring pixels whose intensity is less than that of $P$. The census transform thus compares
the intensity value of a pixel with those of its eight surrounding neighbours; in other words, CT is a
summary of the local spatial structure, given by equation (4) [24]:
Let $N(P) = P \oplus D$, where $\oplus$ is the Minkowski sum and $D$ is a set of displacements, and let $\otimes$
denote concatenation. Then

$$R_\tau(P) = \bigotimes_{[i,j] \in D} \xi(P, P + [i,j]) \qquad (4)$$
Example:

$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & \cdot & 1 \\ 1 & 1 & 0 \end{pmatrix} \;\Rightarrow\; CT \;\Rightarrow\; (10011110)_2 = 158 \qquad (5)$$
From the CT example above, it can be seen that if the pixel under consideration is larger than (or
equal to) one of its eight neighbours, a bit 1 is set in the corresponding location; else a bit 0 is set.
The eight bits generated from the intensity comparisons are put together in the order of
appearance (from top to bottom, left to right) and converted to a base-10 value (i.e., binary-to-
decimal conversion). This is the computed CT value for the pixel under consideration. The so-
called CENTRIST descriptor therefore is the histogram of the CT image generated from an
image.
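The CT computation and the CENTRIST histogram described above can be sketched as follows. This is an illustrative implementation, not the authors' code: we follow the text's convention that a bit is set to 1 when the centre pixel is larger than or equal to a neighbour, reading neighbours top-left to bottom-right, and we ignore border pixels for simplicity.

```python
import numpy as np

def census_transform(img):
    """Census-transform a grayscale image: each interior pixel becomes an
    8-bit code whose bits are 1 where the centre intensity is >= the
    neighbour's intensity, read top-left to bottom-right."""
    img = np.asarray(img)
    h, w = img.shape
    ct = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            code = 0
            for dr, dc in offsets:
                code = (code << 1) | int(img[r, c] >= img[r + dr, c + dc])
            ct[r - 1, c - 1] = code
    return ct

def centrist(img):
    """CENTRIST descriptor: 256-bin histogram of the census-transformed image."""
    ct = census_transform(img)
    hist, _ = np.histogram(ct, bins=256, range=(0, 256))
    return hist
```

For a 3x3 patch whose neighbours reproduce the comparison pattern of equation (5), the single interior pixel yields the CT value 158 from the example above.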
In [1], in order to compute the CENTROG features, after the image structure has been captured,
the CT is computed on an edge image; thereafter Histogram of Oriented Gradients (HOG) [25]
features are extracted from the transformed edge image. HOG works by counting the
occurrences of gradient orientations in localized portions of an image. HOG captures local
object appearances and shapes, which can often be characterized rather well by the distribution of
local intensity gradients, or edge directions, as reported in [26]. The gradient is computed by
applying [-1, 0, 1] and [-1, 0, 1]^T kernels in the horizontal and vertical directions within an image. The
gradient information is collected from local cells and put into histograms using tri-linear interpolation.
Normalisation is performed on the overlapping blocks composed of neighbouring cells. The
CENTROG descriptor therefore is the HOG computed on the CT-generated image. Some vehicle image
samples are shown below (figures 1 and 2). CENTROG is a very useful technique which helps to
capture the local and global structure of a particular image effectively when its colour and
texture information are missing.
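The HOG-on-CT idea can be illustrated with a stripped-down sketch. This is a simplification we introduce for clarity, not the full descriptor: it takes an already census-transformed image as input (e.g. from the CT step described earlier), applies the [-1, 0, 1] gradient kernels, and accumulates a single global orientation histogram, omitting the cell/block structure and tri-linear interpolation of the full HOG.

```python
import numpy as np

def hog_on_ct(ct, n_bins=9):
    """Single global orientation histogram of gradients over a
    census-transformed image 'ct' (a bare-bones HOG sketch)."""
    ct = np.asarray(ct, dtype=np.float64)
    gx = np.zeros_like(ct)
    gy = np.zeros_like(ct)
    gx[:, 1:-1] = ct[:, 2:] - ct[:, :-2]   # [-1, 0, 1] horizontally
    gy[1:-1, :] = ct[2:, :] - ct[:-2, :]   # [-1, 0, 1]^T vertically
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = mag[bins == b].sum()
    norm = np.linalg.norm(hist)                     # L2-normalise the descriptor
    return hist / norm if norm > 0 else hist
```

A purely vertical intensity step, for instance, places all of its gradient energy in the first (horizontal-gradient) orientation bin.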
Figure 1: Samples Showing Night Time Vehicles
Figure 2: Samples Showing Day Time Vehicles
4. SUPPORT VECTOR MACHINE (SVM)
According to [27], SVM is a technique for training classifiers, regressors and probability
densities that is well-founded in statistical learning theory. SVM can be used for binary and
multi-class classification tasks.
4.1. BINARY CLASSIFICATION
SVM performs pattern recognition for two-class problems by determining the separating
hyperplane with maximum distance to the closest points of the training set. In this approach,
optimal classification of a separable two-class problem is achieved by maximising the width of
the margin between the two classes [28]. The margin is the distance between the discriminating
hyper-surface in the n-dimensional feature space and the closest training patterns, called support
vectors. If the data are not linearly separable in the input space, a non-linear transformation Φ(·)
can be applied, which maps the data points x ∈ R^n into a high-dimensional space H, called
a feature space. The data are then separated as described above. The original support vector
machine classifier was designed for linear separation of two classes; however, to solve the
problem of separating more than two classes, the multi-class support vector machine was
developed.
4.2 MULTI-CLASS CLASSIFICATION
SVM was designed to solve binary classification problems. In real-world classification problems,
however, we can have more than two classes. Solving a q-class problem with SVMs involves
either training q SVMs, each of which separates a single class from all remaining classes, or
training q(q-1)/2 machines, each of which separates a pair of classes. Multi-class
classification handles non-linearly separable classes by combining multiple two-class classifiers.
N-class classification is accomplished by combining N two-class classifiers, each discriminating
between a specific class and the rest of the training set [28]. During the classification stage, a
pattern is assigned to the class with the largest positive distance between the classified pattern and
the individual separating hyperplane among the N binary classifiers. One of the two classes in such
multi-class sets of binary classification problems will contain a substantially smaller number of
patterns than the other class [28]. The SVM classifier was chosen because of its popularity and speed
of processing. For more details on the SVM algorithm, please refer to [29].
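The one-vs-rest assignment rule described above can be sketched as follows. This is an illustration only: we assume each per-class linear decision function has already been trained by some SVM solver and is given as a hypothetical (w, b) pair; the signed distance of a pattern x from a hyperplane is (w·x + b)/||w||, and the pattern goes to the class with the largest such distance.

```python
import numpy as np

def ovr_predict(x, classifiers):
    """One-vs-rest assignment: 'classifiers' is a list of (w, b) pairs,
    one linear decision function f(x) = w.x + b per class. The pattern x
    is assigned to the class whose hyperplane gives the largest signed
    distance, as described in Section 4.2."""
    x = np.asarray(x, dtype=float)
    scores = [(np.dot(w, x) + b) / np.linalg.norm(w) for w, b in classifiers]
    return int(np.argmax(scores))
```

In practice the (w, b) pairs would come from training N binary SVMs, each on one class versus the remaining training set.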
5. RESEARCH APPROACH
In order to recognise vehicles at both day and night times, an initial classification between day-
time and night-time vehicles is performed. After this initial categorisation, further classification is
done for the day-time and the night-time datasets. Figure 3 shows the proposed methodology for
vehicle type recognition at day and night times. As shown in figure 3, the input video was
segmented using the Gaussian Mixture Model (GMM) foreground/background segmentation
algorithm. This was done to enable the effective extraction of the region of interest containing
the vehicle objects, which were the only moving objects in the video input.
Figure 3: Proposed System Outline for Vehicle Type Recognition at Day and Night Times
In the experiments, based on the number of vehicular samples available in the publicly available
night-time dataset [30], we focus on only two key types for night times: cars and trucks. The night
images were captured with a FLIR SR-19 thermal camera (White Box, Black Box), with a total of
63 min of video footage captured at ROBB DRIVE and the 1-80 OVERPASS. For the day
vehicular images, data samples were gathered from a low-to-medium-resolution camera installed
on the roadside of the Sohar Highway in Oman. The camera had a pixel resolution of 640 x 360
and a frame rate of 25 FPS. The data used in the experimental analysis consisted of 10 hours of
video footage during daytime, captured at approximately a 45° angle from the direction of the
movement of vehicles.
6. EXPERIMENTS AND PERFORMANCE EVALUATION
A number of experiments were conducted to evaluate the performance of the proposed algorithm
for vehicle type recognition at both day and night times. The day-time vehicular experiments were
conducted on images retrieved from a camera installed on the roadside of the Sohar Highway,
Oman. Similarly, the video dataset given in [30] was used for the night-time experiments. The
results obtained from these experiments are discussed in the following subsections.
To evaluate the system approach, the Receiver Operating Characteristic (ROC) curve will be used.
As reported in [3], the ROC curve shows classification performance in detail. A ROC curve is the
plot of the True Positive Rate against the False Positive Rate for different cut-off points or
thresholds of a parameter [31]. It is given as:
$$\text{True Positive Rate (Recall)} = \frac{tp}{tp + fn} \qquad (6)$$

$$\text{False Positive Rate} = \frac{fp}{fp + tn} \qquad (7)$$
where tp denotes the number of true positives (an instance that is positive and classified as
positive); tn denotes the number of true negatives (an instance that is negative and classified as
negative); fp denotes the number of false positives (an instance that is negative and classified as
positive); and fn denotes the number of false negatives (an instance that is positive and classified
as negative).
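Equations (6) and (7), together with the counts defined above, can be computed directly from paired label lists. This is a small illustrative helper of our own, not part of the paper's system.

```python
def roc_point(y_true, y_pred):
    """Compute (TPR, FPR) from binary labels (1 = positive, 0 = negative),
    per equations (6) and (7)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr
```

Sweeping a decision threshold and plotting the resulting (FPR, TPR) points traces out the ROC curve.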
The ROC curve visualises the following:
1. It shows the trade-off between sensitivity and specificity (any increase in sensitivity will
be accompanied by a decrease in specificity).
2. The closer the curve follows the left-hand border and then the top border of the ROC
space, the more accurate is the test.
3. The slope of the tangent line at a cutpoint gives the likelihood ratio (LR) for that value of
the test.
Similarly, the accuracy of an experiment is measured by the Area Under the ROC Curve (AUC).
An area of 1 represents a perfect test; an area of 0.5 or less represents a worthless test. Accuracy
of performance is defined as:
$$\text{Accuracy} = \frac{tp + tn}{tp + tn + fp + fn} \qquad (8)$$
The following rough guide for classifying the accuracy of a test is the traditional academic
point system, as reported by [32]:
• 0.90-1 = excellent (A)
• 0.80-0.90 = good (B)
• 0.70-0.80 = fair (C)
• 0.60-0.70 = poor (D)
• 0.50-0.60 = fail (F)
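Equation (8) and the grading scale above can be expressed as two small helpers. These are illustrative functions of our own; at the overlapping interval boundaries (e.g. 0.90) we take the higher grade, a convention the source list leaves unspecified.

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy per equation (8)."""
    return (tp + tn) / (tp + tn + fp + fn)

def grade(auc):
    """Traditional academic point system for an AUC value, per the
    guide above; boundary values take the higher grade."""
    for lo, letter in [(0.90, 'A'), (0.80, 'B'), (0.70, 'C'), (0.60, 'D')]:
        if auc >= lo:
            return letter
    return 'F'
```

For example, a test with 45 true positives, 45 true negatives, 5 false positives and 5 false negatives has accuracy 0.9, which the scale rates as excellent.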
Finally, the ROC curve shows the ability of the classifier to rank the positive instances relative to
the negative instances.
6.1. DAY/NIGHT VEHICULAR CATEGORISATION EXPERIMENTS
The set of input-output samples is

$$(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) \qquad (9)$$
where the input $x_i$ denotes the feature vector extracted from image $i$ and the output $y_i$ is a class
label. Since we are categorising into day-time and night-time vehicles, the class label $y_i$ encodes
day-time vehicles (encoded as 1) and night-time vehicles (encoded as 2) respectively, while the
extracted feature $x_i$ encodes the image histogram. The image histogram feature was used for
categorisation because it effectively captures the appearance information of the data, and there is a
clear distinction between day and night data.
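Extracting the histogram feature $x_i$ can be sketched as follows. This is an assumption-laden illustration: the paper does not state the bin count or normalisation used, so the 32-bin normalised histogram here is our own choice.

```python
import numpy as np

def histogram_feature(img, n_bins=32):
    """Normalised intensity histogram used as the day/night feature x_i.
    The bin count n_bins is our choice; the paper does not specify it."""
    img = np.asarray(img)
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

A dark night-time frame concentrates its mass in the low-intensity bins, while a bright day-time frame spreads toward the high-intensity bins, which is why this simple feature separates the two classes so cleanly.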
The dataset consisted of approximately 1500 day- and night-time vehicles and was split 75:25 for
the purposes of training and testing. The vehicles captured and thus used in the experimentation
consisted only of cars and trucks (night-time vehicles) and cars, jeeps and trucks (day-time
vehicles); the classification was of a binary nature, i.e. into the day and night classes.
The experimental results show 100% recognition accuracy; this is due to the clear appearance
distinction between the day-time and night-time vehicles. The confusion matrix and ROC
curve of the experimental results are shown in figures 4 and 5 below.
Figure 4: Confusion Matrix for Day and Night Time Categorisation
Figure 5: ROC Curve for Day and Night Time Categorisation
6.2. DAY VEHICULAR CATEGORISATION EXPERIMENTS
The set of input-output samples is

$$(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) \qquad (10)$$
where the input $x_i$ denotes the feature vector extracted from image $i$ and the output $y_i$ is a class
label. Since we are categorising into vehicle types, the class label $y_i$ encodes cars (encoded as 1),
jeeps (encoded as 2) and trucks (encoded as 3) respectively, while the extracted feature $x_i$
encodes the image CENTROG.
The dataset consisted of approximately 720 day-time vehicles and was split 75:25 for the purposes
of training and testing. The vehicles captured and thus used in the experimentation consisted only of
cars, jeeps and trucks; hence the classification was a multi-class categorisation, i.e. into three classes.
Experiments were conducted using CENTROG and compared with the CENTRIST feature
descriptor. The experimental results obtained showed that CENTROG (the proposed technique)
outperformed CENTRIST, recording a detection accuracy of 97.2% versus 95.6%. The
confusion matrices and ROC curves of the experimental results are shown below in figures 6, 7, 8
and 9 for the purpose of comparison.
Figure 6: CENTROG Confusion Matrix for Day Time Vehicle Type Recognition
Figure 7: CENTRIST Confusion Matrix for Day Time Vehicle Type Recognition
Figure 8: CENTROG ROC Curve for Day Time Vehicle Type Recognition
Figure 9: CENTRIST ROC Curve for Day Time Vehicle Type Recognition
6.3. NIGHT TIME VEHICULAR CATEGORISATION EXPERIMENTS
In the experiments conducted in [1], it was extensively reported that the CENTROG feature
technique gave the highest accuracy results for vehicle type recognition at night time. Results from
these experiments showed an accuracy of 100% for the CENTROG technique, in contrast to
92.7% for the CENTRIST technique.
7. CONCLUSION
In conclusion, this paper proposed a feature-based technique for vehicle type recognition at both
day and night times. An initial categorisation was carried out to classify vehicles into day-time and
night-time types using the image histogram as the feature; the experiments conducted gave 100%
recognition accuracy. In order to recognise the vehicle types, the proposed features were extracted
by applying the Histogram of Oriented Gradients to Census Transformed images, and are hence
termed CENTROG. An SVM classifier was trained on the features obtained from the two datasets
(day-time and night-time vehicles). The proposed technique was implemented and compared with
the CENTRIST feature technique. Experimental results showed that CENTROG outperformed
CENTRIST, recording 97.2% vs 95.6% (day time) and 100% vs 92.7% (night time) recognition
accuracies respectively, thereby exhibiting a higher classification accuracy.
Future work would involve looking into identifying more categories, such as vans, tricycles and
motorcycles.
REFERENCES
[1] Martins E Irhebhude, Mohammad Athar Ali, and Eran A Edirisinghe. Pedestrian detection and
vehicle type recognition using centrog features for nighttime thermal images. In Intelligent Computer
Communication and Processing (ICCP), 2015 IEEE International Conference on, pages 407–412.
IEEE, 2015.
[2] Yoichiro Iwasaki, Masato Misumi, and Toshiyuki Nakamiya. Robust vehicle detection under various
environmental conditions using an infrared thermal camera and its application to road traffic flow
monitoring. Sensors, 13(6):7756–7773, 2013.
[3] Martins E Irhebhude, Nawahda Amin, and Eran A Edirisinghe. View invariant vehicle type
recognition and counting system using multiple features. International Journal of Computer Vision
and Signal Processing, 6(1): 20-32, 2016.
[4] Khairi Abdulrahim and Rosalina Abdul Salam. Traffic surveillance: A review of vision based vehicle
detection, recognition and tracking. International Journal of Applied Engineering Research,
11(1):713–726, 2016.
[5] Noppakun Boonsim and Simant Prakoonwit. Car make and model recognition under limited lighting
conditions at night. Pattern Analysis and Applications, pages 1–13, 2016.
[6] Jakub Sochor, Adam Herout, and Jiri Havel. Boxcars: 3d boxes as cnn input for improved fine-
grained vehicle recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 3006–3015, 2016.
[7] Jhonghyun An, Baehoon Choi, Kwee-Bo Sim, and Euntai Kim. Novel intersection type recognition
for autonomous vehicles using a multi-layer laser scanner. Sensors, 16(7):1123, 2016.
[8] Heikki Huttunen, Fatemeh Shokrollahi Yancheshmeh, and Ke Chen. Car type recognition with deep
neural networks. arXiv preprint arXiv:1602.07125, 2016.
[9] Ye Li, Bo Li, Bin Tian, and Qingming Yao. Vehicle detection based on the and-or graph for
congested traffic conditions. Intelligent Transportation Systems, IEEE Transactions on, 14(2):984–
993, 2013.
[10] Ehsan Adeli Mosabbeb, Maryam Sadeghi, and Mahmoud Fathy. A new approach for vehicle
detection in congested traffic scenes based on strong shadow segmentation. In Advances in Visual
Computing, pages 427–436. Springer, 2007.
[11] Ming Yin, Hao Zhang, Huadong Meng, and Xiqin Wang. An hmm-based algorithm for vehicle
detection in congested traffic situations. In Intelligent Transportation Systems Conference, 2007.
ITSC 2007. IEEE, pages 736–741. IEEE, 2007.
[12] S. Gupte, O. Masoud, R.F.K. Martin, and N.P. Papanikolopoulos. Detection and classification of
vehicles. Intelligent Transportation Systems, IEEE Transactions on, 3(1):37–47, 2002.
[13] PM Daigavane and PR Bajaj. Real time vehicle detection and counting method for unsupervised
traffic video on highways. International Journal of Computer Science and Network Security, 10(8),
2010.
[14] Chi-Chen Raxle Wang and J.-J.J. Lien. Automatic vehicle detection using local features;a statistical
approach. Intelligent Transportation Systems, IEEE Transactions on, 9(1):83–96, 2008.
[15] H.T.P. Ranga, M. Ravi Kiran, S. Raja Shekar, and S.K. Naveen Kumar. Vehicle detection and
classification based on morphological technique. In Signal and Image Processing (ICSIP), 2010
International Conference on, pages 45–48, 2010.
[16] Jun Yee Ng and Yong Haur Tay. Image-based vehicle classification system. arXiv preprint
arXiv:1204.2114, 2012.
[17] Celil Ozkurt and Fatih Camci. Automatic traffic density estimation and vehicle classification for
traffic surveillance systems using neural networks. Mathematical and Computational Applications,
14(3):187, 2009.
[18] Mehran Kafai and Bir Bhanu. Dynamic bayesian networks for vehicle classification in video.
Industrial Informatics, IEEE Transactions on, 8(1):100–109, 2012.
[19] Douglas Reynolds. Gaussian mixture models. Encyclopedia of Biometrics, pages 659–663, 2009.
[20] Zoran Zivkovic. Improved adaptive gaussian mixture model for background subtraction. In Pattern
Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, volume 2, pages
28–31. IEEE, 2004.
[21] Documentation OpenCV. Background subtraction, Accessed 23rd January 2014.
[22] Chris Stauffer and W Eric L Grimson. Adaptive background mixture models for real-time tracking. In
Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on., volume 2.
IEEE, 1999.
[23] Jianxin Wu, Christopher Geyer, and James M Rehg. Real-time human detection using contour cues.
In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 860–867. IEEE,
2011.
[24] Ramin Zabih and John Woodfill. Non-parametric local transforms for computing visual
correspondence. In Computer Vision - ECCV '94, pages 151–158. Springer, 1994.
[25] Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In Computer
Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume
1, pages 886–893. IEEE, 2005.
[26] Irfan Riaz, Jingchun Piao, and Hyunchul Shin. Human detection by using centrist features for thermal
images. In International Conference Computer Graphics, Visualization, Computer Vision and Image
Processing. Citeseer, 2013.
[27] Chikahito Nakajima, Massimiliano Pontil, Bernd Heisele, and Tomaso Poggio. Full-body person
recognition system. Pattern recognition, 36(9):1997–2006, 2003.
[28] Sonka Milan, Hlavac Vaclav, and Boyle Roger. Image Processing Analysis, and Machine Vision.
Cengage Learning, Delhi, third edition, 2008.
[29] Christopher JC Burges. A tutorial on support vector machines for pattern recognition. Data mining
and knowledge discovery, 2(2):121–167, 1998.
[30] Marvin Smith, Joshua Gleason, Steve Wood, and Issa Beekun. Vehicle location by thermal images
features, Accessed 5th November 2014.
[31] Tom Fawcett. An introduction to roc analysis. Pattern recognition letters, 27(8):861–874, 2006.
[32] MD Thomas G. Tape. Interpreting diagnostic tests, retrieved 15th June, 2014.
AUTHORS
Martins Ekata Irhebhude obtained his tertiary and master's degree education in Edo State,
Nigeria, in 2003 and 2008 respectively. He concluded his PhD research degree in 2015
with the Computer Science Department at Loughborough University, UK, under the
supervision of Eran A. Edirisinghe PhD, a Professor of Digital Image Processing. Martins
has been a staff member of the Nigerian Defence Academy, Kaduna State, Nigeria, since 2004 and
currently engages in teaching and research activities in and around the Defence Academy.
His research interests include: object detection, people tracking, people re-identification, object
recognition and vision-related research.
Philip Oshiokhaimhele Odion obtained his first degree (BSc, Computer Science) in 1996 from
the University of Benin, Benin City, Edo State, Nigeria. He obtained his MSc in Computer Science
at Abubakar Tafawa Balewa University, Bauchi, Nigeria, in 2006 and his PhD in Computer Science
at the Nigerian Defence Academy, Kaduna, in 2014. Dr P. O. Odion is currently a Senior Lecturer
and Head of Department (HOD) of Computer Science at the Nigerian Defence Academy,
Kaduna. He has various local and international publications to his credit. His research interests are in
Software Engineering, Computer Networks and Artificial Intelligence.
Darius Tienhua Chinyio received the B. Ed. degree in Mathematics Education, and a Post
Graduate Diploma in Computer Science from Ahmadu Bello University, Zaria, Nigeria, in
1983 and 1986, respectively; and an M. Sc. in Computer Science from the University of
Lagos, Nigeria in 1991. He is currently working toward the Ph.D. degree in Computer
Science, Department of Computer Science, Nigerian Defence Academy, Kaduna, Nigeria;
under the supervision of Professor E. A. Onibere. His research interests include Computational Science,
Image Processing, and Networking.