DEPARTMENT OF MECHANICAL ENGINEERING
MANIT Bhopal
Ph.D. Seminar-I
AN OVERVIEW OF REAL-TIME OBJECT DETECTION
TECHNIQUES FOR ADAS
GUIDED BY
Dr. Rajesh Purohit
Professor
Department of Mechanical
Engineering
PRESENTED BY
Vivek Mishra
Ph.D. Scholar
Scholar No. 2331601004
CONTENTS
• Introduction
• Overview of Various RTOD Techniques
• Literature Review
• Challenges
• Future Trends
• References and Bibliography
INTRODUCTION
• Real-time object detection (RTOD) refers to the process of
identifying and locating objects within an image or a video stream in
real-time.
• This technique involves algorithms and models that can swiftly
analyze visual data, recognize various objects present in the data,
and precisely outline or label them while maintaining a rapid
processing speed, typically enabling detection within milliseconds or
at video frame rates.
• RTOD plays a crucial role in applications such as autonomous
vehicles, surveillance systems, robotics, and augmented reality,
where timely and accurate identification of objects is essential for
decision-making and interaction.
Fig. 1. Sample images of real-time object detection (RTOD): 1(a), 1(b) [6] [7]
OVERVIEW OF VARIOUS TECHNIQUES OF RTOD
Real-time object detection (RTOD) employs various methods
and algorithms to swiftly and accurately detect objects within
images or video streams. Some of the commonly used methods
in RTOD include:
1. Deep Learning-Based Approaches: Utilizing
Convolutional Neural Networks (CNNs) and architectures
like:
• YOLO (You Only Look Once)
• SSD (Single Shot Multi-box Detector)
• Faster R-CNN etc.
Cont’d…
• YOLO (You Only Look Once)
YOLO (You Only Look Once) is a popular real-time object detection
algorithm known for its speed and accuracy. Its architecture can be
explained as follows:
Input Processing: YOLO takes an entire image as input and divides
it into a grid. Each grid cell predicts multiple bounding boxes along
with class probabilities.
Feature Extraction: The input image passes through a
convolutional neural network (CNN), extracting features and
creating a high-dimensional representation of the image.
Fig. 2. Example of a CNN
Grid Division: The image is divided into an S x S grid. Each grid cell
predicts bounding boxes and their confidence scores. YOLO predicts
these bounding boxes using regression for the coordinates (x, y) of the
bounding box, width, height, and the confidence score for each box.
Fig. 3. Sample images of grid division: 3(a), 3(b) [10] [11]
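The grid-cell assignment described above can be sketched in a few lines. `owning_cell` is a hypothetical helper name for illustration; S = 7 with a 448 x 448 input follows the original YOLO configuration.

```python
# Map an object's center point to the grid cell responsible for predicting it.
# YOLO assigns each ground-truth object to the cell containing its center.

def owning_cell(cx, cy, img_w, img_h, S=7):
    """Return (row, col) of the S x S grid cell containing point (cx, cy)."""
    col = min(int(cx / img_w * S), S - 1)  # clamp so cx == img_w stays in grid
    row = min(int(cy / img_h * S), S - 1)
    return row, col

# A center at (320, 240) in a 448 x 448 image falls in row 3, column 5:
print(owning_cell(320, 240, 448, 448))  # (3, 5)
```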
Bounding Box Coordinates:
(x, y): coordinates of the center of the bounding box, relative to the grid cell.
bw and bh: the width and height of the bounding box, usually normalized by the image width and height.
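A minimal sketch of decoding these values into pixel corner coordinates, assuming x and y are offsets within the cell and bw, bh are fractions of the image, per the convention above. `yolo_to_pixels` is an illustrative name, not part of any YOLO library.

```python
def yolo_to_pixels(x, y, bw, bh, row, col, S, img_w, img_h):
    """Convert a cell-relative prediction to (x1, y1, x2, y2) pixel corners.

    x, y   -- box center offset within its grid cell, each in [0, 1]
    bw, bh -- box width/height as fractions of the whole image
    """
    cx = (col + x) / S * img_w   # absolute center, in pixels
    cy = (row + y) / S * img_h
    w, h = bw * img_w, bh * img_h
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A box centered in cell (3, 3) of a 7 x 7 grid over a 448 x 448 image,
# spanning half the image in each dimension:
print(yolo_to_pixels(0.5, 0.5, 0.5, 0.5, 3, 3, 7, 448, 448))
# (112.0, 112.0, 336.0, 336.0)
```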
Confidence Score: Each bounding box prediction includes a
confidence score, indicating the model's confidence that the box
contains an object.
Class Prediction: Each grid cell also predicts class probabilities for the
objects present within the bounding boxes. These class probabilities
represent the likelihood of different object categories, such as car,
person, or dog.
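At test time, YOLO scores a box for class c as P(object) × P(class c | object). A toy sketch of that combination (the category names and probabilities here are illustrative):

```python
def class_scores(objectness, class_probs):
    """Class-specific confidence: P(object) * P(class | object) per class."""
    return {c: objectness * p for c, p in class_probs.items()}

# A box with objectness 0.9 in a cell predicting three categories:
scores = class_scores(0.9, {"car": 0.7, "person": 0.2, "dog": 0.1})
best = max(scores, key=scores.get)
print(best)  # car
```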
Non-Max Suppression: After prediction, YOLO uses non-maximum
suppression to refine the bounding boxes, eliminating duplicate
detections and keeping only the most confident box for each object.
Fig. 4. Sample image of non-max suppression [8]
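The suppression step can be sketched as greedy filtering on intersection-over-union (IoU). This is a plain-Python illustration of the standard algorithm, not any particular YOLO implementation; the boxes and scores below are made up.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-max suppression: keep the highest-scoring box, drop any
    remaining box overlapping it above iou_thresh, then repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- box 1 overlaps box 0 and is suppressed
```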
Output: The final output is a list of bounding boxes along with their
class labels and confidence scores, providing the detected objects
and their positions in the image.
Fig. 5. Sample image of output [9]
Cont’d…
2. Feature-Based Methods: These methods involve extracting
specific features from images and using machine learning
classifiers.
• Histogram of Oriented Gradients (HOG): Detects objects
by analyzing gradient orientation in image regions.
Fig. 6. Sample image of HOG
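The core HOG idea, accumulating gradient magnitudes into orientation bins, can be sketched for a single cell as below. Real HOG adds bin interpolation and block normalization, which this simplified, purely illustrative version omits.

```python
import math

def cell_histogram(patch, bins=9):
    """Simplified HOG-style sketch: histogram of unsigned gradient
    orientations (0-180 degrees) for one cell, weighted by magnitude."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * bins) % bins] += mag
    return hist

# A vertical step edge produces horizontal gradients, so all the weight
# lands in the ~0-degree bin:
patch = [[0, 0, 9, 9]] * 4
hist = cell_histogram(patch)
print(hist.index(max(hist)))  # 0
```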
Haar Cascades: Use cascades of simple rectangular (Haar-like) feature patterns to identify objects within an image.
Cont’d…
Fig. 7. Sample image
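Haar-like features are rectangle-sum differences, made cheap by an integral image (summed-area table) so that any rectangle sum costs four lookups. A minimal sketch, with a made-up two-rectangle feature over a toy image:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle at (x, y), in four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

# Two-rectangle Haar feature: left half minus right half of a 4 x 4 window.
img = [[1, 1, 5, 5]] * 4   # bright right half
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 2, 4) - rect_sum(ii, 2, 0, 2, 4)
print(feature)  # 8 - 40 = -32
```

A cascade classifier evaluates many such features in stages, rejecting non-object windows early.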
Cont’d…
3. Real-Time Implementation Optimization: Techniques focused
on improving inference speed.
• Quantization: Reducing precision in neural network weights to
speed up computations.
• Model Pruning: Removing redundant or less critical network
connections to optimize inference speed.
• Hardware Acceleration: Utilizing specialized hardware like
GPUs (Graphics Processing Units), TPUs (Tensor Processing
Units), or dedicated ASICs (Application-Specific Integrated
Circuits) for faster processing.
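The quantization idea above can be illustrated with a symmetric per-tensor int8 scheme: pick one scale from the largest weight magnitude and round each weight to an integer in [-127, 127]. This is a simplified sketch; real frameworks quantize per layer or per channel and calibrate activations too.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization sketch: one scale per tensor,
    weights mapped to integers in the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized integers."""
    return [v * scale for v in q]

w = [0.05, -0.32, 0.127, -1.27]
q, s = quantize_int8(w)
print(q)  # [5, -32, 13, -127] -- integer weights, 4x smaller than float32
```

Integer arithmetic on the quantized weights is what yields the speed-up on hardware with fast int8 paths.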
Various Applications Based on RTOD
Autonomous Vehicles: RTOD enables vehicles to perceive and react
to their surroundings in real-time, detecting pedestrians, vehicles,
traffic signs, and obstacles for navigation and safety.
Surveillance and Security: In security systems, RTOD helps in
detecting unauthorized individuals, intruders, or suspicious activities in
real-time, enhancing surveillance.
Robotics: It plays a crucial role in enabling robots to navigate and
interact with their environments by identifying objects and obstacles in
real-time.
Healthcare: It assists in medical imaging applications for detecting
anomalies, tumors, or specific organs within medical scans, aiding
in diagnostics and treatment planning.
Retail and Inventory Management: RTOD can track and manage
inventory, assist in cashier-less checkout systems, and analyze
customer behavior for marketing purposes.
Augmented Reality (AR) and Virtual Reality (VR): Integrating
RTOD into AR/VR technologies enables real-time interaction and
object recognition within these immersive environments.
Industrial Automation: RTOD helps in monitoring and inspecting
manufacturing processes, detecting defects, ensuring quality
control, and enhancing overall efficiency.
Sports Analytics: In sports, RTOD is used to track players'
movements, analyze game scenarios, and provide real-time
statistics for coaching and broadcasting purposes.
Environmental Monitoring: RTOD assists in monitoring wildlife,
tracking endangered species, and analyzing environmental changes
in real-time.
Smart Cities: It contributes to various aspects of urban
management, such as traffic monitoring, waste management, and
public safety.
LITERATURE REVIEW
Name of the Author(s) Paper Title Findings
Juanxia He et al.
(Dec 2022) [2]
Application of leakage pre-
warning system for
hazardous chemical storage
tank based on YOLOv3-
prePReLU algorithm
• In this paper, 6,000 pictures were collected
under different experimental conditions while a
storage tank discharged continuously, generating
an image dataset of tank leakage. The dataset was
then trained with the YOLOv3, YOLOv3-PostPReLU,
and YOLOv3-prePReLU algorithms; the experimental
results showed that the mean average precision
(mAP) of the YOLOv3-prePReLU algorithm was 0.89,
more accurate than the other algorithms.
LITERATURE REVIEW
Name of the
Author(s)
Paper Title Findings
Yurika Permanasari
et al.
(December 2022)
[1]
Innovative Region
Convolutional Neural
Network Algorithm For
Object Identification
• The purpose of this study was to
analyze the development of object
identification in the search for the best
algorithm in terms of the speed and
efficiency of identification.
• The use of the CNN algorithm in the
identification of image objects, starting
with the region CNN (R-CNN) technique, was
improved with Fast R-CNN, Faster R-CNN,
and Mask R-CNN.
• The researchers developed algorithms for
facial recognition and the identification
of moving images.
LITERATURE REVIEW
Name of the
Author(s)
Paper Title Findings
Punam
Sunil Raskar et al.
October 2021 [3]
Real time object-based
video forgery detection
using YOLO (V2)
This paper proposed a new approach for the
detection of copy-move attacks in passive
blind videos. The object-based forgery
detection approach is implemented using the
fast, real-time object detector You Only
Look Once, Version 2 (YOLO v2).
LITERATURE REVIEW
Name of the
Author(s)
Paper Title Findings
Di Feng et al.,
(2020) [4]
Deep Multi-modal Object
Detection and Semantic
Segmentation for
Autonomous Driving
1. This review paper systematically
summarizes methodologies and discusses
challenges for deep multi-modal object
detection and semantic segmentation in
autonomous driving.
2. The paper surveys approaches that
combine multi-modal information (camera
and LiDAR) for real-time object detection
and semantic segmentation in autonomous
driving.
LITERATURE REVIEW
Name of the
Author(s)
Paper Title Findings
Baoping Xiao
et al., (2021)
[5]
Real-Time Object Detection
Algorithm of Autonomous
Vehicles Based on Improved
YOLOv5s
1. This paper proposes a real-time
object detection algorithm based
on improved YOLOv5s. By
adding shallow high-resolution
features and changing the size of
output feature map, the detection
ability of the algorithm for small
objects is significantly improved.
2. Experimental results show that
the improved YOLOv5s
algorithm enhances the detection
ability of small objects and proves
its feasibility in various complex
road scenes.
CHALLENGES
1. Real-Time Processing Constraints: Balancing accuracy and speed
is challenging due to the need for immediate processing in real-time
applications.
2. Hardware Limitations: Resource constraints on devices,
especially for edge computing and embedded systems, pose
challenges for running complex models efficiently.
3. Variability in Object Scales and Contexts: Handling objects of
various scales and complex contexts in real-world scenarios demands
robustness.
4. Detection of Small Objects: Ensuring accurate detection of small
or occluded objects remains a challenge.
5. Model Adaptability: Models need to adapt to different
environmental conditions, lighting changes, and diverse scenarios.
FUTURE TRENDS
1. Efficiency Improvements: Further optimization of models for
faster processing without compromising accuracy.
2. Hardware Acceleration: Advancements in specialized hardware
(GPUs, TPUs) and architectures (neuromorphic chips) for better
performance.
3. Continued Research in Algorithms: Development of novel
architectures and algorithms for improved accuracy and efficiency.
4. Contextual Understanding: Integrating contextual information
to enhance object detection in complex scenes.
5. Semi-Supervised and Unsupervised Learning: Exploring
methods that reduce the dependency on labeled data for training
models.
REFERENCES
1. Yurika Permanasari, Budi Nurani Ruchjana, Setiawan Hadi and Juli Rejito,
Innovative Region Convolutional Neural Network Algorithm for Object
Identification, Journal of Open Innovation: Technology, Market, and Complexity
8 (4) (2022) 182
2. Juanxia He, Yao Xiao, Liwen Huang, Angang Li, Yan Chen, Ye Ma, Wen Li,
Dezhi Liu, Yongzhong Zhan, Application of leakage pre-warning system for
hazardous chemical storage tank based on YOLOv3-prePReLU algorithm,
Journal of Loss Prevention in the Process Industries 80 (2022) 104905
3. Punam Sunil Raskar, Sanjeevani Kiran Shah, Real time object-based video
forgery detection using YOLO (V2), Forensic Science International 327 (2021)
110979
4. Di Feng, Christian Haase-Schütz, Lars Rosenbaum, Heinz Hertlein, Claudius
Glaeser, Fabian Timm, Werner Wiesbeck, Klaus Dietmayer, Deep Multi-modal
Object Detection and Semantic Segmentation for Autonomous Driving: Datasets,
Methods, and Challenges, arXiv preprint (v4, revised 8 Feb 2020)
5. Baoping Xiao, Jinghua Guo, Zhifei He, Real-Time Object Detection Algorithm
of Autonomous Vehicles Based on Improved YOLOv5s, Proceedings of the 2021
5th CAA International Conference on Vehicular Control and Intelligence (CVCI),
Tianjin, China, October 29-31, 2021
Cont’d
6. Imerit, https://imerit.net/blog/real-time-object-detection-using-yolo/
7. Upwork, https://www.upwork.com/en-gb/services/product/development-it-image-
processing-data-annotation-computer-vision-project-1667438170865958912
8. Hackster, https://www.hackster.io/Elephant-Robotics-Official/revealing-the-
potential-of-mycobot-ai-kit-vision-algorithms-f4eeb1
9. https://blog.roboauto.cz/how-it-works-object-detection-2ae4448efa3f
10. MDPI, https://www.mdpi.com/2073-8994/11/10/1205
11. Reddit,
https://www.reddit.com/r/mildlyinfuriating/comments/diavp5/when_two_cars_on_a_
major_highway_go_the_exact/?rdt=42485
12. M. Aloraini, M. Sharifzadeh, D. Schonfeld, Sequential and patch analyses for object
removal video forgery detection and localization, IEEE Trans. Circuits Syst. Video
Technol. (2020) 1–14, https://doi.org/10.1109/TCSVT.2020.2993004
13. N. Antony, B.R. Devassy, Implementation of image/video copy-move forgery detection
using brute-force matching, in: Proceedings of the 2018 2nd International Conference on
Trends in Electronics and Informatics (ICOEI), IEEE, 2018, pp. 1085–1090.
14. C.C. Chen, H. Wang, C.S. Lin, An efficiency enhanced cluster expanding block
algorithm for copy-move forgery detection, Multimedia Tools Appl. 76 (24) (2017)
26503–26522, https://doi.org/10.1007/s11042-016-4179-3
15. C. Liang, Y. Li, J. Luo, Automatic detection of object-based forgery in advanced video,
IEEE Trans. Circuits Syst. Video Technol. 26 (11) (2016) 2138–2151,
https://doi.org/10.1109/TCSVT.2015.2473436
16. C. Liang, Y. Li, J. Luo, Coarse-to-fine copy-move forgery detection for video forensics,
IEEE Access 6 (2018) 25323–25335, https://doi.org/10.1109/ACCESS.2018.281962
Cont’d
17. H. Kaur, N. Jindal, Deep convolutional neural network for graphics forgery
detection in video, Wireless Personal Communications, Springer US, 2020,
https://doi.org/10.1007/s11277-020-07126-3
18. Li, F., & Huang, T. (2014). Video copy-move forgery detection and localization
based on structural similarity. In Proceedings of the 3rd International Conference
on Multimedia Technology (ICMT 2013) (pp. 63-76). Springer, Berlin, Heidelberg.
doi: 10.1007/978-3-642-41407-7_7.
19. S.Y. Liao, T.Q. Huang, Video copy-move forgery detection and localization based
on Tamura texture features, in: Proceedings of the 2013 6th International Congress
on Image and Signal Processing, CISP 2013, (2013). doi:
10.1109/CISP.2013.6745286.
20. G.S. Lin, J.F. Chang, Detection of frame duplication forgery in videos based on
spatial and temporal analysis, Int. J. Pattern Recognit. Artif. Intell. 26 (7) (2012)
1–18, https://doi.org/10.1142/S0218001412500176
Cont’d
THANK YOU.
