This document proposes the design of an autonomous garbage collecting robot that uses the YOLOv3 deep learning model for object detection. It discusses creating a custom dataset of garbage items and using it to train YOLOv3 and CNN models for detecting garbage in images captured by the robot. It also describes the hardware components of the robot including Raspberry Pi, Arduino, ultrasonic sensor, DC motors and servomotors. The document compares the performance of YOLOv3 and CNN models and concludes that YOLOv3 has better accuracy, precision and F1 score for garbage detection.
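The YOLOv3-versus-CNN comparison above rests on standard detection metrics. As a minimal sketch (the counts below are hypothetical, not taken from the paper), precision, recall, and F1 can be computed from true-positive, false-positive, and false-negative counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 score from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for two detectors evaluated on the same test set
yolo_p, yolo_r, yolo_f1 = detection_metrics(tp=90, fp=10, fn=10)
cnn_p, cnn_r, cnn_f1 = detection_metrics(tp=75, fp=25, fn=25)
```

A higher F1 indicates a better balance of precision and recall, which is the basis on which such papers typically rank detectors.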
This document provides an overview of recent developments in object detection using AI robots. It explores several deep learning-based object detection techniques and their advantages over traditional computer vision methods. The paper discusses how object detection is used in robotics applications like grasping, manipulating, and navigation. It also presents the results of experiments conducted to evaluate an object detection system using a robot equipped with cameras and sensors. The system uses a Fast R-CNN algorithm combined with a Kalman filter for real-time object detection and tracking.
IOT BASED TOOL GARBAGE MONITORING SYSTEM (IRJET Journal)

This document proposes an IOT-based tool garbage monitoring system. It uses ultrasonic sensors, an Arduino microcontroller, WiFi module and GPS to monitor waste bins and send alerts when bins are full. The system measures garbage levels in bins and sends the data via WiFi to a web page. When a bin reaches capacity, it activates a buzzer and sends a notification to authorities for collection. This automatic monitoring system aims to improve waste management efficiency and keep cities cleaner.
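Bin-level monitoring of this kind usually maps an ultrasonic echo time to a fill percentage. The sketch below assumes an HC-SR04-style sensor mounted in the bin lid; the constant and function names are illustrative, not from the paper:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air, cm per microsecond

def fill_percentage(echo_us, bin_depth_cm):
    """Convert a round-trip echo time into a bin fill percentage.

    The sensor sits in the lid and measures the distance down to the
    garbage surface; the echo travels there and back, hence the halving.
    """
    distance_cm = echo_us * SPEED_OF_SOUND_CM_PER_US / 2
    level_cm = bin_depth_cm - distance_cm
    return max(0.0, min(100.0, 100.0 * level_cm / bin_depth_cm))
```

When the computed percentage crosses a threshold (say 80%), the microcontroller would raise the buzzer and push a notification, as the summary describes.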
Algorithm of detection, classification and gripping of occluded objects by C... (IJECEIAES)
This paper presents the development of an algorithm for detecting, classifying, and grabbing occluded objects, using artificial intelligence techniques, machine vision for recognition of the environment, and an anthropomorphic manipulator for handling the objects. Five types of tools were used for detection and classification: the user selects one, and the program searches for it in the work environment and delivers it to a specific area, overcoming occlusions of up to 70%. The tools were classified using two convolutional neural networks: a Fast R-CNN (fast region-based CNN) for detecting and classifying occlusions, and a DAG-CNN (directed acyclic graph CNN) for classifying the tools. A Haar classifier was also trained to compare its ability to recognize occlusions against the Fast R-CNN. The Fast R-CNN and DAG-CNN achieved 70.9% and 96.2% accuracy respectively, the Haar classifier about 50%, and the application achieved 90% accuracy in gripping and delivering occluded objects.
An Internet of Things Based Smart Waste.pptx (GOWTHAMR721887)
This document describes a smart waste management system using Internet of Things technologies including sensors, LoRa communication, and a TensorFlow deep learning model. Waste bins are equipped with ultrasonic sensors to detect waste level and GPS to track location. A Raspberry Pi uses a MobileNetV2 model trained on waste images to classify waste and route it into the correct compartment. Test results showed the model achieving high accuracy in waste classification and localization over time during training. The system aims to improve existing waste management through automation and real-time monitoring.
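Downstream of a classifier like the MobileNetV2 model described above, the routing step reduces to picking a compartment from the class probabilities. A minimal sketch of that decision logic, with hypothetical class names and a made-up confidence threshold:

```python
# Hypothetical mapping from waste class to physical compartment index
COMPARTMENTS = {"plastic": 0, "paper": 1, "metal": 2, "organic": 3}

def route_waste(class_probs, threshold=0.6):
    """Pick a compartment from classifier softmax output.

    Returns None when the top class is below the confidence threshold,
    so uncertain items can be diverted to manual sorting instead.
    """
    label = max(class_probs, key=class_probs.get)
    if class_probs[label] < threshold:
        return None
    return COMPARTMENTS[label]
```

The threshold trades throughput against mis-sorting: lowering it routes more items automatically but risks putting waste in the wrong compartment.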
The document summarizes a student project proposal for a "Smart Waste Management System". Key points:
- The project aims to develop an IoT-based system using sensors to monitor waste bin levels in real-time and optimize garbage collection routes.
- Sensors will detect waste levels and segregation to increase efficiency. Notifications will inform drivers which bins are full.
- This addresses issues like overflowing bins, uninformed routes wasting time/resources.
- The proposal outlines the existing system, problem statement, proposed solution, scope, architectural design, implementation tools, project plan and references.
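The collection-route optimization sketched in the proposal starts from a simple selection step: which bins are full enough to visit. A minimal illustration (bin IDs and the threshold are hypothetical):

```python
def bins_to_collect(readings, threshold=80):
    """Return IDs of bins whose fill level (percent) meets the threshold.

    readings: dict mapping bin ID -> latest fill-level percentage.
    The resulting list would seed route planning and driver notifications.
    """
    return [bin_id for bin_id, level in readings.items() if level >= threshold]
```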
The document describes a deep learning-based robot designed to automatically pick up garbage on grass. The robot uses one deep neural network for garbage recognition and another for ground segmentation to guide navigation. It can efficiently and autonomously clean garbage on the ground in places like parks or schools, with an accuracy of 95% for garbage recognition. A novel navigation strategy based on ground segmentation allows the robot to select navigation goals and avoid non-garbage obstacles in real time without path planning. Experimental results show the robot achieves almost the same cleaning efficiency as traditional methods.
The document describes a social distancing detection system that uses the YOLO object detection algorithm and COCO dataset to detect people in video frames and estimate distances between them. It draws bounding boxes around detected people, with violations of the default distance threshold shown in red and non-violations in green. The number of violations and alert messages are displayed on screen to help users maintain safe distances. The system was tested on prerecorded video and images and was able to accurately detect violations and maintain social distancing.
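The violation check in such a system typically compares distances between the centroids of detected bounding boxes against a pixel threshold. A minimal sketch under that assumption (the box format and default threshold are illustrative, not from the paper):

```python
import math

def centroid(box):
    """Center point of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def violations(boxes, min_dist=150):
    """Return index pairs of detections closer than the pixel threshold.

    In the described system these pairs would be drawn in red and
    counted on screen; the rest in green.
    """
    pairs = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if math.dist(centroid(boxes[i]), centroid(boxes[j])) < min_dist:
                pairs.append((i, j))
    return pairs
```

A real deployment would first calibrate pixels to meters (e.g. via a perspective transform), since a fixed pixel threshold only holds for a fixed camera geometry.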
Many projects are being developed by governments under smart-city initiatives, and the garbage systems within these smart cities need to become smarter as well. People need easy access to garbage disposal methods as well as an efficient collection process, both in time and in fuel cost. In the proposed system, the fill status of each dustbin is checked using different types of sensors and the readings are sent to the cloud. This paper also presents a method to segregate dry and wet garbage more efficiently and reliably.
A Robotic Prototype System for Child Monitoring (Waqas Tariq)
This document describes a robotic prototype system for child monitoring. The system consists of a Khepera robot, host computer, and circuits to trigger lights and alarms. The robot uses image processing to find and follow a baby prop (Lego blocks) in a testing area with obstacles. It can detect when the baby enters danger zones and activate the circuits. Experimental testing showed the robot could successfully find and track the baby prop while avoiding obstacles in scenarios of increasing complexity. The system provides a starting point for developing mobile robotic child monitoring in the home.
IRJET - Swarm Robotic System for Mapping and Exploration of Extreme Envir... (IRJET Journal)
This document describes a proposed swarm robotic system for mapping and exploring extreme environments. A swarm robotic system uses multiple simple robots working together in a decentralized and coordinated manner to complete complex tasks. The proposed system would use small (9-10 cm) robots equipped with sensors to map their environment in 2D. The robots would communicate over an IoT network to share sensor data and build a collective map. This swarm system could explore hazardous areas like disaster sites, chemical leaks, or radiation zones that are dangerous for humans. It would allow rescuers to safely assess environments and plan operations. The document outlines the methodology, including developing the robot platform using open-source technologies, programming communication between robots over WiFi, and using computer vision to
Road sign detection using the Viola-Jones algorithm with the help of OpenCV (MohdSalim34)
This document provides an introduction and overview of a project to develop an automatic road sign detection system using the Viola-Jones object detection framework. It discusses the motivation for the project to address safety concerns from drivers missing road signs. The document outlines the contributions of the project, which are to train a classifier using OpenCV to detect German road signs in images by implementing the Viola-Jones algorithm. It also provides details on the Viola-Jones algorithm, which combines Haar features, integral images, AdaBoost testing, and cascading classifiers to rapidly detect objects in real-time.
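The integral image (summed-area table) is the core trick that lets Viola-Jones evaluate Haar features rapidly: any rectangle sum costs four lookups regardless of size. A self-contained sketch of that building block (pure Python, operating on a nested-list image):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle [x0..x1] x [y0..y1], 4 lookups."""
    total = ii[y1][x1]
    if x0:
        total -= ii[y1][x0 - 1]
    if y0:
        total -= ii[y0 - 1][x1]
    if x0 and y0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

A Haar feature is then just a difference of such rectangle sums, which is what makes cascaded evaluation over thousands of windows feasible in real time.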
An assistive model of obstacle detection based on deep learning: YOLOv3 for v... (IJECEIAES)
The World Health Organization (WHO) reported in 2019 that at least 2.2 billion people have a vision impairment or blindness. Visually impaired people face difficulties moving in both indoor and outdoor situations, which makes daily life unsafe. This paper proposes an assistive application model based on deep learning: YOLOv3 with a Darknet-53 backbone, running on a smartphone for visually impaired users. Pascal VOC2007 and Pascal VOC2012 were used as the training set, with the Pascal VOC2007 test set used for validation. The assistive model was installed on a smartphone with an eSpeak synthesizer that generates audio output for the user. Experimental results showed high speed and high detection accuracy. The proposed application is intended to be an effective way to help visually impaired people interact with their surroundings in daily life.
This document summarizes an academic paper that proposes a method for incrementally training object detection models to classify unseen object classes in real-time. It begins by providing background on object detection techniques like YOLO and SSD that can perform detection in a single pass. The paper aims to improve these single-shot detectors through incremental learning to classify new object classes without retraining the entire model from scratch. It conducted experiments on YOLO and VGG16 to investigate how well they can classify objects from unseen classes and whether their performance is affected by factors like background, bounding box size, or network architecture. The goal is to develop a more robust object detection method that can easily adapt to new classes of objects in real-time applications.
Abstract: This Project describes a visual sensor system used in the field of robotics for identification and tracking of the colored object. The program is designed to capture an Object through a Camera. It describes image capturing and processing techniques, followed by an introduction to actual robotic application to trace the Object using the serial COM port of the PC. The whole system of making a robot to follow an object can be divided into four blocks: image acquisition, processing of image, decision-making and motion control.
This document summarizes research comparing the HSI and YCbCr color models for detecting and classifying cotton contaminants using digital image processing and machine vision. Experiments show that the HSI color model detected and classified contaminants faster than the YCbCr model, taking 76.5 seconds versus 88.7 seconds. Feature extraction was performed on binarized and thresholded images to distinguish contaminants based on attributes like area, perimeter, solidity and extent. A naïve Bayes classifier was then used to classify contaminants into classes like nylon, hair, bark and leaf, with the HSI model achieving a lower mean square error.
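The feature-extraction step described above reduces each binarized contaminant region to a handful of shape attributes before classification. A sketch of how the named attributes are typically computed from region measurements (the inputs here are assumed to come from a prior segmentation step; the function name is illustrative):

```python
def shape_features(area, bbox_area, hull_area):
    """Shape attributes used to distinguish contaminant classes.

    area:      pixel count of the region
    bbox_area: area of its axis-aligned bounding box
    hull_area: area of its convex hull
    """
    return {
        "extent": area / bbox_area,    # how much of the bounding box is filled
        "solidity": area / hull_area,  # how convex the region is
    }
```

A thin elongated contaminant like hair yields low extent, while a compact one like bark scores high on both, which is what lets a naïve Bayes classifier separate the classes.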
This document discusses using hand gestures to control swarm robots. It describes detecting hands using skin color tracking and extracting features from hands using Gabor filters. Gestures are recognized by comparing features to a database. The swarm robots communicate using Bluetooth to reach consensus on recognized gestures and navigate accordingly. The system was implemented using foot-bot robots with a webcam for input and motors/sensors for movement and communication. The approach aims to allow effective control of swarm robots through robust gesture recognition that handles noise and environments.
The document describes a proposed system for detecting landmines using mobile robots. The system involves designing an autonomous, low-cost mobile robot that uses effective motion planning and multi-sensor fusion to detect landmines. Experimental results show that the robot is able to autonomously detect fake mines in a simulated environment using multiple low-cost sensors, with decreased false alarms compared to using a single sensor. The system aims to provide a low-cost solution to landmine detection that ensures safety for human operators.
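The claimed reduction in false alarms from multi-sensor fusion can be illustrated with a simple voting rule: require agreement from several sensors before declaring a mine. A minimal sketch (the voting scheme is a generic illustration, not the paper's specific fusion method):

```python
def fused_alarm(sensor_flags, min_votes=2):
    """Declare a detection only when at least min_votes sensors agree.

    sensor_flags: iterable of booleans, one per sensor reading.
    With independent sensors, requiring 2-of-3 agreement drives the
    combined false-alarm rate well below any single sensor's rate.
    """
    return sum(sensor_flags) >= min_votes
```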
1. The document proposes a system using drones to detect garbage and alert cleaners. A detector drone will fly over an area, take photos, and send them to a server. The server uses computer vision to identify garbage and alerts the nearest cleaner.
2. A checker drone then verifies that the garbage has been cleaned. It flies to locations where garbage was detected and uses computer vision to confirm if cleaning was completed. This system aims to help keep areas clean as part of India's Swachh Bharat mission.
3. The detector drone takes photos with a camera and GPS and sends them to a server. The server uses a convolutional neural network model trained on garbage images to identify garbage in the photos. If
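The "alert the nearest cleaner" step above implies matching a detected garbage location against cleaner GPS positions. A sketch using the haversine great-circle distance (IDs and coordinates below are made up for illustration):

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def nearest_cleaner(garbage_pos, cleaners):
    """cleaners: dict of cleaner ID -> (lat, lon); returns the closest ID."""
    return min(cleaners, key=lambda c: haversine_km(garbage_pos, cleaners[c]))
```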
The goal of the project is to run an object detection algorithm on every frame of a video, thus allowing the algorithm to detect all the objects in it, including but not limited to people, vehicles, animals etc. Object recognition and detection play a crucial role in computer vision and automated driving systems. We aim to design a system that does not compromise on performance or accuracy and provides real time solutions. With the importance of computer vision growing with each passing day, models that deliver high performance results are all the more dominant. Exponential growth in computing power as well as growing popularity in deep learning led to a stark increase in high performance algorithms that solve real world problems. Our model can be taken a step further, allowing the user the flexibility to detect only the objects that are needed at the moment despite being trained on a larger dataset. P. Rajeshwari | P. Abhishek | P. Srikanth | T. Vinod "Object Detection: An Overview" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23422.pdf
Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/23422/object-detection-an-overview/p-rajeshwari
IRJET- Real-Time Object Detection using Deep Learning: A Survey (IRJET Journal)
This document summarizes recent advances in real-time object detection using deep learning. It first provides an overview of object detection and deep learning. It then reviews popular object detection models including CNNs, R-CNNs, Fast R-CNN, Faster R-CNN, YOLO, and SSD. The document proposes modifications to existing models to improve small object detection accuracy. Specifically, it proposes using Darknet-53 with feature map upsampling and concatenation at multiple scales to detect objects of different sizes. It also describes using k-means clustering to select anchor boxes tailored to each detection scale.
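The k-means anchor selection mentioned above clusters ground-truth box sizes using IoU rather than Euclidean distance, so large and small boxes are treated fairly. A simplified self-contained sketch of that procedure (the per-scale assignment the survey describes is omitted here for brevity):

```python
import random

def iou_wh(box, anchor):
    """IoU of two boxes aligned at the origin, i.e. width/height only."""
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs, assigning each box to its highest-IoU anchor."""
    random.seed(seed)
    anchors = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[best].append(b)
        # Recompute each anchor as the mean size of its cluster
        anchors = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else anchors[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(anchors)
```

In a YOLO-style detector the resulting anchors would then be split across detection scales, smallest anchors to the highest-resolution feature map.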
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology, bringing together scientists, academicians, field engineers, scholars and students of related fields.
IRJET- Determining the Components of Waste Assisted with Analysis of Meth... (IRJET Journal)
This document discusses a project that aims to analyze waste components through image analysis to enable automatic segregation and suggestions for reuse and recycling. Images of waste uploaded by users would be analyzed using content-based image retrieval to identify components as biodegradable or non-biodegradable. A web crawler would then suggest appropriate recycling techniques for biodegradable components identified. The system would use Eclipse IDE, MySQL database, Navicat database tool, and Apache Tomcat web server to build a web application to conduct this image analysis and information retrieval automatically.
Virtual environment for assistant mobile robot (IJECEIAES)
This paper shows the development of a virtual environment for a mobile robotic system able to recognize basic voice commands, oriented to recognizing a valid command to bring or take an object from a specific destination in residential spaces. Recognition of the voice command and of the objects with which the robot assists the user is performed by a machine vision system based on capturing the scene where the robot is located. For each captured image, a region-based convolutional network with transfer learning is used to identify the objects of interest. For human-robot interaction through voice, a convolutional neural network (CNN) with six convolution layers is used to recognize the commands to carry and bring specific objects inside the residential virtual environment. The convolutional networks allowed adequate recognition of words and objects, which, through the associated robot kinematics, led to the execution of carry/bring commands and a navigation algorithm that operates successfully: manipulation of the objects exceeded 90%, and the robot could move through the virtual environment even with objects obstructing the navigation path.
IRJET- Real-Time Object Detection using Deep Learning: A SurveyIRJET Journal
This document summarizes recent advances in real-time object detection using deep learning. It first provides an overview of object detection and deep learning. It then reviews popular object detection models including CNNs, R-CNNs, Fast R-CNN, Faster R-CNN, YOLO, and SSD. The document proposes modifications to existing models to improve small object detection accuracy. Specifically, it proposes using Darknet-53 with feature map upsampling and concatenation at multiple scales to detect objects of different sizes. It also describes using k-means clustering to select anchor boxes tailored to each detection scale.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
IRJET- Determining the Components of Waste Assisted with Analysis of Meth...IRJET Journal
This document discusses a project that aims to analyze waste components through image analysis to enable automatic segregation and suggestions for reuse and recycling. Images of waste uploaded by users would be analyzed using content-based image retrieval to identify components as biodegradable or non-biodegradable. A web crawler would then suggest appropriate recycling techniques for biodegradable components identified. The system would use Eclipse IDE, MySQL database, Navicat database tool, and Apache Tomcat web server to build a web application to conduct this image analysis and information retrieval automatically.
Virtual environment for assistant mobile robotIJECEIAES
This paper shows the development of a virtual environment for a mobile robotic system with the ability to recognize basic voice commands, which are oriented to the recognition of a valid command of bring or take an object from a specific destination in residential spaces. The recognition of the voice command and the objects with which the robot will assist the user, is performed by a machine vision system based on the capture of the scene, where the robot is located. In relation to each captured image, a convolutional network based on regions is used with transfer learning, to identify the objects of interest. For human-robot interaction through voice, a convolutional neural network (CNN) of 6 convolution layers is used, oriented to recognize the commands to carry and bring specific objects inside the residential virtual environment. The use of convolutional networks allowed the adequate recognition of words and objects, which by means of the associated robot kinematics give rise to the execution of carry/bring commands, obtaining a navigation algorithm that operates successfully, where the manipulation of the objects exceeded 90%. Allowing the robot to move in the virtual environment even with the obstruction of objects in the navigation path.
Similar to Garbage_Collecting_Robot_Using_YOLOv3_Deep_Learning_Model (1).pdf (20)
Architectural and constructions management experience since 2003 including 18 years located in UAE.
Coordinate and oversee all technical activities relating to architectural and construction projects,
including directing the design team, reviewing drafts and computer models, and approving design
changes.
Organize and typically develop, and review building plans, ensuring that a project meets all safety and
environmental standards.
Prepare feasibility studies, construction contracts, and tender documents with specifications and
tender analyses.
Consulting with clients, work on formulating equipment and labor cost estimates, ensuring a project
meets environmental, safety, structural, zoning, and aesthetic standards.
Monitoring the progress of a project to assess whether or not it is in compliance with building plans
and project deadlines.
Attention to detail, exceptional time management, and strong problem-solving and communication
skills are required for this role.
Practical eLearning Makeovers for EveryoneBianca Woods
Welcome to Practical eLearning Makeovers for Everyone. In this presentation, we’ll take a look at a bunch of easy-to-use visual design tips and tricks. And we’ll do this by using them to spruce up some eLearning screens that are in dire need of a new look.
freedom). The working starts at the arm envelope and workspace, increasing its efficiency. By using different algorithms and simulation, the system is enabled to add grip, hold and place capability. The traditional methods of designing a robotic arm will be outperformed by the proposed approach.
In another paper [1], a vision-based water surface garbage capture robot has been developed using a modified You Only Look Once v3 (YOLOv3) method. The detection scales of YOLOv3 are simplified from 3 to 2 to improve real-time detection performance. The anchor boxes of the training data set are re-clustered to replace some of the original YOLOv3 prior anchor boxes that are not appropriate for the data set. The detection speed and accuracy of the modified YOLOv3 were found to be better than those of other object detection algorithms.
II. DATASET
A dataset customized for the garbage collection robot was created with more than 500 handpicked images. Five classes of garbage were chosen, namely bottle, can, food packet, paper ball, and plastic bag. These images were then labelled manually using LabelImg, a graphical annotation tool used for labelling object bounding boxes in images. After labelling, a classes.txt file is generated containing all the annotated classes. For every labelled image there is a corresponding .txt file with the metadata: object id, center x, center y, width, and height.
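As a sketch of this metadata format (assuming the standard YOLO convention that LabelImg exports, where the center coordinates, width and height are normalized to the image size), a pixel bounding box maps to a label line as follows:

```python
def to_yolo_line(class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel bounding box into a YOLO label line:
    object id, center x, center y, width, height (all normalized to [0, 1])."""
    cx = (xmin + xmax) / 2 / img_w
    cy = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# e.g. a 'can' (class 1) occupying pixels (100, 200)-(300, 400) in a 640x480 image
print(to_yolo_line(1, 100, 200, 300, 400, 640, 480))
```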
III. METHODOLOGY
The whole project can be described in three phases:
• Object Detection
• Garbage Identification
• Garbage Collection
The block diagram and flow chart of the proposed model is
shown in figure 1 and figure 2 respectively.
Fig. 1. Block diagram
A. Object Detection
The garbage collecting robot moves until the ultrasonic sensor detects an object. The ultrasonic sensor is capable of detecting objects at a distance of about 20 cm; when an object is detected, the DC motors are stopped so that the robot does not move further. The webcam is then activated and the captured image is sent to the Raspberry Pi, where garbage detection using the deep learning mechanism is started.
Fig. 2. Flow Chart
B. Garbage Detection
Garbage detection is done using the YOLOv3 algorithm. The captured image is given to the Raspberry Pi, which verifies whether the detected object is garbage. A comparative study was done between YOLOv3 and a Convolutional Neural Network (CNN).
1) You Only Look Once (YOLOv3): A modified YOLOv3 network detects the garbage lying on the ground. YOLOv3 training was done using Google Colab. The object detection code, written in Python, was deployed and run on the Raspberry Pi 3B+, and the garbage was detected. The transfer learning method is used for YOLOv3 object detection.
The overall prediction probability is calculated using the formula:
Prediction probability = Ntp / (Ntp + Nfp)
where Ntp is the total number of correctly detected objects and Nfp is the total number of incorrectly detected objects.
• Total number of correctly detected objects = 63
• Total number of incorrectly detected objects = 4
• Total number of object detected = 67
Using the above formula, the prediction probability of YOLOv3 was calculated and an efficiency of 94.02% was obtained.
Authorized licensed use limited to: Sharda University. Downloaded on May 24,2022 at 06:03:49 UTC from IEEE Xplore. Restrictions apply.
70 percent of the images from the dataset were used for training and 30 percent for testing.
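The reported figure can be reproduced directly from the counts above (the paper's 94.02% is this ratio truncated rather than rounded):

```python
n_tp = 63  # correctly detected objects
n_fp = 4   # incorrectly detected objects

prediction_probability = n_tp / (n_tp + n_fp)
print(f"{prediction_probability:.4f}")  # ~0.9403, i.e. about 94.0 %
```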
2) Convolutional Neural Network (CNN): A Convolutional Neural Network consists of various convolutional and pooling layers. The input image is given to the first convolutional layer and the output is obtained as an activation map. Filters applied in the convolutional layers extract relevant features from the input image to pass further. To minimize the number of parameters, pooling layers are added. Several convolutional and pooling layers are stacked before the prediction is made. The output layer in a CNN is a fully connected layer. The output is generated via the output layer and compared with the target to compute the error, which is then back-propagated to update the filter weights and bias values. For training and testing the CNN, the same dataset was used and the parameters were modified for this project. The training was performed on Google Colab.
• Layer size is 64 and 3 convolutional layers were used.
• The filter size is 3x3 and the max pooling size is 2x2.
• In the dense layer, the sigmoid activation function is used.
• Batch size = 32 and epochs = 100.
• The validation split assigns 70 percent of the data for training and 30 percent for testing.
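The convolution-then-pooling step described above can be sketched with plain NumPy (a toy single-channel example, not the trained network itself): a 3x3 filter turns an 8x8 input into a 6x6 activation map, and 2x2 max pooling halves its spatial size.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling with stride 2."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
kernel = np.ones((3, 3)) / 9.0                     # 3x3 averaging filter

fmap = conv2d(image, kernel)       # activation map: 6x6
pooled = max_pool2x2(fmap)         # after pooling: 3x3
print(fmap.shape, pooled.shape)
```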
C. Garbage Collection
If the object detected is identified as garbage by the YOLOv3 algorithm, the Raspberry Pi transmits a signal to the Arduino, which commences operating the servo motors of the garbage collecting robot. At first, the arm's servo motor lowers the arms of the robot, after which the grabber servos rotate to collect the garbage. The arm's servos then raise the arm and dispose of the garbage into a bin placed on top of the robot. If the object is not identified as garbage, the robot continues to move forward.
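A minimal sketch of this hand-off, assuming the five garbage classes from Section II; send_to_arduino() is a hypothetical stand-in for the actual Raspberry Pi to Arduino serial link:

```python
GARBAGE_CLASSES = {"bottle", "can", "food packet", "paper ball", "plastic bag"}

def collection_command(detected_labels):
    """Return 'COLLECT' if any detected label is a known garbage class,
    otherwise 'FORWARD' so the robot keeps moving."""
    if any(label in GARBAGE_CLASSES for label in detected_labels):
        return "COLLECT"
    return "FORWARD"

def send_to_arduino(command):
    # Hypothetical stand-in for the serial write to the Arduino.
    print(f"sending {command}")

send_to_arduino(collection_command(["can"]))      # sending COLLECT
send_to_arduino(collection_command(["person"]))   # sending FORWARD
```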
IV. HARDWARE
A. Raspberry Pi 3B+
The Raspberry Pi is a mini computer board that comprises a CPU, GPU, USB ports, I/O pins, WiFi, Bluetooth, USB and network boot, and power over Ethernet facility. It has a 64-bit processor with 1 GB RAM. It is built around the Broadcom BCM2837B0 chipset with a 1.4 GHz quad-core ARM Cortex-A53 and a 40-pin header (26 GPIOs). It has 4 USB 2.0 ports, Gigabit Ethernet and a 2-pin set header. PoE (power over Ethernet) is a vital feature included in the device.
B. Arduino Uno R3
The Arduino Uno R3 is a microcontroller board based on the ATmega328 that includes 14 digital I/O pins and 6 analog input pins.
• Operating voltage: 5 V
• Input voltage (recommended): 7 V to 12 V
• Input voltage (limit): 6 V to 20 V
• DC current per I/O pin: 20 mA
• DC current for the 3.3 V pin: 50 mA
• Flash memory: 32 KB
• Clock speed: 16 MHz
0.5 KB of the flash memory is used by the boot loader; SRAM is 2 KB and EEPROM 1 KB. It has an inbuilt LED.
C. Ultrasonic sensor
An ultrasonic sensor is a module that measures distance using ultrasonic waves. The sensor head emits an ultrasonic wave and receives the wave reflected back from the target, measuring the distance by timing the interval between emission and reception:
Distance = (speed of sound x time taken) / 2
• Supply voltage : 5V
• Global current consumption : 15 mA
• Ultrasonic Frequency : 40k Hz
• Maximal Range : 400 cm
• Minimal Range : 3 cm
• Resolution : 1 cm
• Trigger Pulse Width : 10 µs
• Outline Dimension : 43x20x15 mm
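The formula above maps directly to code; a sketch assuming the speed of sound is roughly 343 m/s (34,300 cm/s) at room temperature:

```python
SPEED_OF_SOUND_CM_S = 34_300  # ~343 m/s at room temperature

def echo_distance_cm(round_trip_s):
    """Distance to the target from the echo round-trip time in seconds."""
    return SPEED_OF_SOUND_CM_S * round_trip_s / 2

# A 1 ms round trip corresponds to ~17 cm
print(f"{echo_distance_cm(0.001):.2f} cm")  # 17.15 cm
```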
D. DC Motors
A DC motor is a class of electrical machine that converts direct current electrical power into mechanical power. The most common types rely on the forces produced by magnetic fields. In this project, two DC motors are used on each side, giving a total of four.
E. Webcam
A webcam is a video camera that feeds or streams an image or video in real time to or through a computer network, for instance the Internet. Webcam software lets users record a video or stream it on the Internet. As video streaming over the Internet demands more bandwidth, compressed formats are generally used.
F. Servomotors
A servomotor is a rotary or linear actuator that permits precise control of angular or linear position, velocity and acceleration. It consists of a suitable motor coupled to a sensor for position feedback. This project uses 5 servo motors.
V. CIRCUIT SIMULATION OF ROBOT
The circuit simulation of the robot is done using the 3D modelling online platform 'Autodesk Tinkercad'. The circuit comprises 5 servomotors and 4 DC motors. The first servo motor serves as the base of the structure, followed by the second motor, which is placed in the upper arm. The third and fourth motors are placed in the lower arm, and the final one serves as the gripper.
An ultrasonic distance sensor is used to calculate the distance of the object to be detected. The threshold value of the distance is taken as 150 cm here. If an object is located at a distance below 150 cm, the four DC motors start rotating
simultaneously, symbolizing the 4 wheels of the robot. When an object is positioned at a distance beyond 150 cm, the rotation of the DC motors stops and the servomotors begin rotating one after the other, showing the movement of the robotic arm.
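Under the simulation's convention (drive the DC motors while the object is within the 150 cm threshold, otherwise run the servo sequence for the arm), the mode selection can be sketched as:

```python
THRESHOLD_CM = 150  # threshold used in the Tinkercad simulation

def simulation_mode(distance_cm):
    """Mirror the simulated behaviour: 'DRIVE' spins the 4 DC motors,
    'ARM' steps the servomotors one after the other."""
    return "DRIVE" if distance_cm < THRESHOLD_CM else "ARM"

print(simulation_mode(120))  # DRIVE
print(simulation_mode(200))  # ARM
```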
VI. AUTODESK FUSION 360
The three-dimensional model of the garbage collecting robot is made using Autodesk Fusion 360. The model consists of a base with four wheels and a robotic arm with a grabber. It also includes five servo motors: two placed in the lower arm and one each in the gripper, upper arm and base. The servo motor used for this model is the NEMA 23, with a holding torque of 22.4 kg-cm, rated for 200 W AC servo applications at up to 2000 RPM.
VII. RESULT AND ANALYSIS
The comparison of the different algorithms is listed in Table I.
TABLE I
COMPARISON OF ALGORITHMS
Parameters   YOLOv3   CNN
Accuracy     0.9341   0.849
Precision    0.9722   0.85
Recall       0.84     0.8491
F1 Score     0.9012   0.8512
The equation for Precision is given as:
Precision = TruePositives / (TruePositives + FalsePositives)
The equation for Recall is given as:
Recall = TruePositives / (TruePositives + FalseNegatives)
The F1-score can thus be determined by:
F1 = 2 x (Precision x Recall) / (Precision + Recall)
Based on these evaluation parameters, YOLOv3 is found to
have better detection than CNN. YOLOv3 has higher values
of precision, accuracy and F1 score but CNN has a greater
recall value.
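The Table I entries can be checked against these formulas; for YOLOv3, the F1 score follows from the reported precision and recall (small differences arise from rounding the inputs):

```python
precision = 0.9722  # YOLOv3, from Table I
recall = 0.84       # YOLOv3, from Table I

f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.3f}")  # ~0.901, consistent with the 0.9012 reported in Table I
```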
TABLE II
CONFUSION MATRIX FOR YOLOV3 ALGORITHM
Classes Accuracy Precision Recall F1 Score
Plastic Bottle 0.9376 0.9131 0.8325 0.8709
Paper Ball 0.9149 0.9372 0.8662 0.9032
Can 0.9365 0.8922 0.8560 0.8737
Plastic Bag 0.9642 0.9561 0.8753 0.9139
Food Packet 0.8567 0.8715 0.8393 0.8550
Fig. 3. Robotic circuit simulation
Fig. 4. 3D Model of the robot
Fig. 5. YOLOv3 Output
Fig. 6. CNN Output
VIII. CONCLUSION
The proposed work was successfully implemented. The object detection algorithms YOLOv3 and CNN were used for garbage detection, both on still images and on a live stream. The transfer learning method was used for the YOLOv3 algorithm: a dataset of five classes of garbage was created and labelled using the LabelImg tool, and training was done using Google Colab. Finally, testing was done and an accuracy of 0.9341 (93.41%) was obtained. For the CNN algorithm the same dataset was used, and an accuracy of about 0.849 (84.9%) was achieved. These algorithms were evaluated based on parameters such as accuracy, precision, recall and F1 score. A virtual 3D model of the robot was created on the Autodesk Fusion 360 platform. The circuit simulation was also done for the robot, in which the servo motors and the DC motors move based on object detection and object identification.
The current work only includes a virtual representation of the robot; the future goal therefore involves building it and bringing the virtual model into reality. Garbage detection can be evaluated using other image detection algorithms, such as R-CNN, for better accuracy or speed. The number of images in the dataset could also be increased to improve accuracy, and the number of classes expanded to widen the range of garbage that can be detected and collected. Garbage segregation is another possible extension of this project.
REFERENCES
[1] Xiali Li, Manjun Tian, Shihan Kong, Licheng Wu and Junzhi Yu, "A modified YOLOv3 detection method for vision-based water surface garbage capture robot", International Journal of Advanced Robotic Systems, May-June 2020: 1-11.
[2] J. Redmon and A. Farhadi, "YOLOv3: An Incremental Improvement", arXiv preprint arXiv:1804.02767, April 2018.
[3] Daniel Octavian Melinte, Ana-Maria Travediu and Dan N. Dumitriu, "Deep Convolutional Neural Networks Object Detector for Real-Time Waste Identification", Appl. Sci. 2020, 10, 7301; doi:10.3390/app10207301.
[4] Omkar Masurekar, Omkar Jadhav, Prateek Kulkarni and Shubham Patil, "Real Time Object Detection Using YOLOv3", International Research Journal of Engineering and Technology (IRJET), Volume 07, Issue 03, March 2020.
[5] Richardson Santiago Teles de Menezes, Rafael Marrocos Magalhaes and Helton Maia, "Object Recognition Using Convolutional Neural Networks", doi:10.5772/intechopen.89726.
[6] Virendra, Apoorva Mishra and Ritu Tiwari, Robotics & Intelligent System Design Lab, ABV-Indian Institute of Information Technology & Management, Gwalior, India, "Robotic Gripper Arm System with Effective Working Envelope", International Conference on Intelligent Computing and Control Systems (ICICCS 2018), ISBN: 978-1-5386-2842-3.
[7] Pranav Adarsh, Pratibha and Manoj Kumar, "YOLO v3-Tiny: Object Detection and Recognition using one stage improved model", 6th International Conference on Advanced Computing & Communication Systems (ICACCS), 2020.
[8] Rakshith Ranganath, Bhawna Sharma, Pooja A. R., Rohan C. Jadhav and Asha A., "Autonomous Garbage Collecting Robot Wall-E", IJSRD - International Journal for Scientific Research & Development, ISSN (online): 2321-0613.
[9] G. Sivasankar, B. Durgalakshmi and K. Seyatha, "Autonomous Trash Collecting Robot", International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181.
[10] Saurav Kumar, Drishti Yadav, Himanshu Gupta, Om Prakash Verma, Irshad Ahmad Ansari and Chang Wook Ahn, "Novel YOLOv3 Algorithm-Based Deep Learning Approach for Waste Segregation: Towards Smart Waste Management", Electronics 2021, 10, 14; doi:10.3390/electronics10010014.