Abnormal moving object detection is an essential problem in video surveillance. To judge whether an object's behavior is abnormal, such as pedestrians walking back and forth or crossing the street, or scooters driving the wrong way, the main approach is to use computer vision techniques to analyze objects such as pedestrians and cars in video. Traditional abnormal moving object detection predefines detection rules for particular circumstances or requirements, which restricts its range of application. Moreover, if numerous abnormal moving objects are detected at the same time, the surveillance system becomes computationally overloaded. For these reasons, this paper aims to design a learning model that does not require predefined abnormality rules and can automatically detect a variety of abnormal moving objects in different environments.
To achieve this goal, the first step is to detect the moving objects in the video. The proposed method uses a Gaussian Mixture Model (GMM) to detect foreground objects and removes their shadows with a shadow-removal step. An adaptive mean shift algorithm combined with a Kalman filter is then proposed to track these moving objects, and the Kalman filter is also used to smooth each trajectory.
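The abstract does not include the authors' implementation; as an illustrative sketch of the trajectory-smoothing step, a minimal constant-velocity Kalman filter over 2-D centroids might look like this (function name and noise values `q`, `r` are assumptions, not taken from the paper):

```python
import numpy as np

def kalman_smooth(points, q=1e-3, r=0.25):
    """Smooth a 2-D trajectory with a constant-velocity Kalman filter.

    points : (N, 2) array of noisy (x, y) centroids, one per frame.
    q, r   : process / measurement noise variances (illustrative values).
    """
    F = np.array([[1, 0, 1, 0],   # state transition: x += vx, y += vy
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],   # only position is observed
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)             # process noise covariance
    R = r * np.eye(2)             # measurement noise covariance
    x = np.array([points[0][0], points[0][1], 0.0, 0.0])
    P = np.eye(4)
    smoothed = []
    for z in points:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the measured centroid
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.array(smoothed)
```

Because the state includes velocity, a target moving at constant speed is tracked with vanishing lag once the filter converges.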
After the trajectories of the moving objects have been collected, the abnormal moving object detection process proceeds. First, a Self-Organizing Incremental Neural Network (SOINN) learns from this trajectory information and builds a normal trajectory model, which serves as the basis for determining whether subsequent moving objects are abnormal. The average learning time is 7 to 55 seconds.
The experiments monitor and analyze different settings: a school campus, roads, and a one-way street. The system based on the proposed method detects abnormal moving objects with 100% accuracy on the school campus, 98.3% on roads, and 98.8% on the one-way street. The overall execution time is short, about 0.033 to 0.067 seconds, so the system can run in real time.
Instantaneous Object Detection by Blob Assessment (IJRES Journal)
In recent years, visual surveillance has gained importance in security, law enforcement, and military applications. This paper presents a novel framework that detects flat and non-flat abandoned objects in a public place and determines which of them remain stationary. In this prototype, abandoned objects are detected by matching a reference and a target video sequence. The reference video is taken by a camera when there is no suspicious object in the scene; the target video is taken by a camera following the same route and may contain extra objects. The two videos are aligned to find corresponding frame pairs, and finally the abandoned objects are identified. Four simple but effective ideas achieve this objective: an inter-sequence geometric alignment finds all possible suspicious areas, an intra-sequence geometric alignment removes false alarms caused by tall objects, a local appearance comparison between two aligned intra-sequence frames removes false alarms in flat areas, and a temporal filtering step confirms or rejects the remaining candidates.
This project uses Python and OpenCV to detect and track objects in videos and from a webcam. It has two modules: one to track objects in uploaded system videos and another to track objects with the webcam. OpenCV algorithms like dense optical flow, sparse optical flow and Kalman filtering are used to track objects by locating them in successive frames. Tracking provides benefits over repeated detection like being faster and able to track objects when detection fails due to occlusion. The project screenshots demonstrate uploading a video and tracking objects within it as well as tracking objects from the webcam stream.
Video Surveillance: Moving Object Detection & Tracking, Chapter 1 (Ahmed Mokhtar)
Our project was a CNN-based (machine learning) security project: it detects and tracks humans, starts camera recording once a human is detected, and then raises an alarm for the owner.
TRACKING OF PARTIALLY OCCLUDED OBJECTS IN VIDEO SEQUENCES (Praveen Pallav)
The document describes a proposed algorithm for tracking partially occluded objects in video sequences. The algorithm uses background subtraction and morphological operations to create a binary mask and detect regions of interest. It then applies Lucas Kanade optical flow tracking and maintains a dictionary to track multiple objects across frames. The algorithm was tested on standard and custom databases and was able to track objects when partially occluded by combining color, motion, and feature cues. Potential applications of the algorithm include human-computer interaction, anomaly detection, traffic surveillance, and robot navigation.
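The summary above mentions Lucas-Kanade optical flow. As a hedged illustration of the core least-squares step (not the paper's implementation, which tracks multiple feature points), a single-window translation estimate can be written in plain numpy:

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one (dx, dy) translation between two grayscale patches
    using the basic Lucas-Kanade least-squares step.
    Assumes small motion; prev and curr are 2-D arrays of equal shape."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    # spatial gradients (np.gradient returns axis-0 / rows first, i.e. Iy)
    Iy, Ix = np.gradient(prev)
    It = curr - prev                                  # temporal difference
    # brightness constancy: Ix*dx + Iy*dy + It = 0 at every pixel
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)    # N x 2
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares [dx, dy]
    return flow
```

On a smooth blob shifted by one pixel, the estimate recovers the shift approximately; real trackers iterate this step over image pyramids.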
Computer vision has received great attention over the last two decades.
This research field is important not only in security-related software but also in the advanced interface between people and computers, advanced control methods, and many other areas.
Multiple object tracking (MOT) involves localizing and identifying multiple moving objects over time using video input. MOT has various applications including human-computer interaction, surveillance, and medical imaging. It allows detected objects to be matched across frames and keeps tracking objects even if detection fails in some frames. However, challenges include implementing real-time tracking due to batch-based algorithms and solving identity switches and fragmentation when detections are missed. Common MOT methods include Faster R-CNN for detection, Kalman filters for prediction, CNNs for appearance features, and the Hungarian algorithm for data association and tracking.
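The data-association step this summary attributes to the Hungarian algorithm can be illustrated with a brute-force stand-in for small problems (the function name and cost values are invented for illustration; in practice one would use `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def best_assignment(cost):
    """Optimal detection-to-track assignment for a square cost matrix.

    Brute-force stand-in for the Hungarian algorithm: O(n!) but exact,
    so it is only suitable for a handful of objects.
    Returns (assignment, total_cost), where assignment[i] is the column
    (detection) matched to row (track) i.
    """
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    return list(best), best_cost
```

Costs are typically distances between predicted track positions (e.g. from a Kalman filter) and new detections.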
Computer vision has received great attention over the last two decades.
This research field is important not only in security-related software, but also in advanced interface between people and computers, advanced control methods and many other areas.
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ... (toukaigi)
1. The document describes a method for a Moving Object Alarm System (MOAS) that uses a laser sensor to detect moving objects, monitor their trajectories and approaching speed, and provide alerts if approaching in a dangerous manner.
2. The method defines a boundary around the monitored area and uses a fan-shaped grid to efficiently detect continuous moving objects. Object association across time is determined by updating a deviation matrix measuring changes in range, angle, and size of detected objects.
3. Outdoor experiments tested passing, approaching, and crossing objects, finding the method effectively detected motion and monitored approaching behavior in real-time.
This document summarizes a research paper on detecting and tracking human motion based on background subtraction. The proposed method initializes the background using the median of multiple frames. It then extracts moving objects by subtracting the current frame from the background and applying a dynamic threshold. Noise is removed using filters and morphology operations. Shadows are accounted for using projection analysis to accurately detect human bodies. Tracking involves computing the centroid of detected objects in each frame to analyze position and velocity over time. Experimental results showed the method runs quickly and accurately for real-time detection of human motion.
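The background-initialization and subtraction steps described above can be sketched as follows (a simplification: the paper uses a dynamic threshold, while this sketch uses a fixed one, and the function name is assumed):

```python
import numpy as np

def detect_moving(frames, current, thresh=25):
    """Median-background subtraction in the spirit of the method above.

    frames  : sequence of grayscale frames used to build the background
              (the per-pixel median suppresses transient objects).
    current : grayscale frame to test.
    Returns a boolean foreground mask.
    """
    background = np.median(np.stack(frames), axis=0)
    diff = np.abs(current.astype(float) - background)
    return diff > thresh
```

The paper additionally removes noise with filtering and morphology and handles shadows via projection analysis before tracking centroids.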
This document presents a method for tracking moving objects in video sequences using affine flow parameters combined with illumination insensitive template matching. The method extracts affine flow parameters from frames to model local object motion using affine transformations. It then applies template matching with illumination compensation to track objects across frames while being robust to illumination changes. The method is evaluated on various indoor and outdoor database videos and is shown to effectively track objects without false detections, handling issues like illumination variations, camera motion and dynamic backgrounds better than other methods.
This document discusses various techniques for visual object tracking, including tracking whole objects, medium/fine level features, and facial feature points. It covers representation of tracked objects using templates, contours, and other models. Evaluation methods like normalized cross-correlation and boosted detectors are introduced. Simple tracking strategies like global search and contour tracking are described alongside their limitations. More advanced techniques like Lucas-Kanade tracking using gradient descent, mean-shift tracking, and regression-based tracking using linear and non-linear predictors are summarized. The use of motion models like the Kalman filter to incorporate temporal consistency is also mentioned.
IRJET - Moving Object Detection using Foreground Detection for Video Surveil... (IRJET Journal)
This document summarizes a research paper that proposes a new method for detecting moving objects in videos using foreground detection and background subtraction. The key steps of the proposed method include initializing a background model using the median of initial frames, dynamically updating the background model to adapt to lighting changes, subtracting the background model from current frames and applying a threshold to detect moving objects, and using morphological operations and projection analysis to extract human bodies and remove noise. The experimental results showed that the proposed method can accurately and reliably detect moving human bodies in real-time video surveillance.
This document reviews various methods for object tracking in video sequences. It discusses object detection, classification, and tracking techniques reported in previous research. The key methods covered include background subtraction, optical flow, Kalman filtering, and particle filtering. The document also provides a table summarizing several papers on object tracking, listing the techniques proposed and results achieved in each. It concludes that existing probability-based tracking works well for single objects but proposes improving the technique to track multiple objects.
Analysis of Human Behavior Based on Centroid and Treading Track (IJMER)
This document discusses a video surveillance system that uses background subtraction and centroid tracking to analyze human behavior in videos. It begins with an introduction and overview of previous work on motion detection methods. It then describes the proposed system, which uses an adaptive background subtraction method to detect moving objects and extract centroid features for tracking. Experimental results show the system can detect abnormal behaviors by analyzing changes in an object's centroid movement and treading track over time. The system is able to distinguish between normal and irregular behaviors with high accuracy.
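The centroid features this system tracks are straightforward to compute from a foreground mask; a minimal sketch (helper names are mine, not the paper's) is:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a boolean foreground mask; None if empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def speed(c_prev, c_curr):
    """Euclidean displacement between consecutive centroids (px/frame);
    large or erratic values over time can flag irregular behavior."""
    return float(np.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]))
```

A behavior analyzer would accumulate these per-frame centroids into a treading track and compare its statistics against normal motion.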
The document discusses object tracking in computer vision. It begins with an introduction and overview of applications of object tracking. It then discusses object representation, detection, tracking algorithms and methodologies. It compares different tracking methods and provides an example of object tracking in MATLAB. Key steps in object tracking include object detection, tracking the detected objects across frames using algorithms like point tracking, kernel tracking and silhouette tracking. Common challenges with object tracking are also summarized.
The document proposes a robust abandoned object detection system based on measuring an object's life-cycle state. It uses a double-background framework to extract unmoving object candidates and filters them using appearance features. A finite state machine then models each object's life-cycle state to determine if it has been abandoned, accounting for occlusion or illumination changes. The system was tested on 10 videos and achieved low false alarm and missing rates, showing it can feasibly detect abandoned objects.
This document summarizes a research paper that presents a framework for detecting and tracking objects in real-time video based on color. The proposed methodology uses a webcam to capture video, performs color-based filtering and image processing to isolate the target object, and analyzes the object's motion over time to track its path. Key steps include Euclidean filtering to isolate the object's color, converting to grayscale for faster processing, contour extraction to delineate the object's shape, and analyzing metrics like the Hurst exponent and Lyapunov exponent to detect chaos in the object's motion over time. The goal is to develop an efficient and real-time system for color-based object detection and tracking.
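The Euclidean color-filtering step mentioned above can be sketched in a few lines (the function name and distance threshold are illustrative assumptions):

```python
import numpy as np

def color_mask(image, target_rgb, max_dist=60.0):
    """Keep pixels whose RGB color lies within max_dist (Euclidean
    distance in RGB space) of the target color.

    image : (H, W, 3) array; returns a boolean mask of matching pixels.
    """
    diff = image.astype(float) - np.asarray(target_rgb, dtype=float)
    return np.linalg.norm(diff, axis=2) <= max_dist
```

The resulting mask would then feed the grayscale conversion and contour-extraction stages described in the summary.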
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
The document describes a proposed system for real-time object tracking and learning using template matching. The system uses a live video stream and enables the tracking, learning, and detection of real-time objects. It selects an object of interest via cropping and then tracks it with a bounding box. Template matching is used to match the selected object with regions of interest in subsequent frames to mark its location. If a match is found, principal component analysis is used. The system also introduces a PN discrimination algorithm using background subtraction to increase frame processing speed and improve template matching accuracy. This allows the system to overcome limitations of existing methods and enable long-term, real-time object tracking.
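Template matching, the core operation in the system above, can be illustrated with a minimal sum-of-squared-differences search (an exhaustive sketch, not the system's optimized implementation; real code would use `cv2.matchTemplate`):

```python
import numpy as np

def match_template(image, template):
    """Locate a template in a grayscale image by minimizing the sum of
    squared differences (SSD) over all placements.
    Returns the (row, col) of the best top-left position."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = float(((image[r:r+h, c:c+w] - template) ** 2).sum())
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The described system restricts this search to regions of interest (via background subtraction) to raise frame rates, rather than scanning every position.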
An Object Detection, Tracking and Parametric Classification – A Review (IRJET Journal)
This document summarizes object detection techniques for video processing. It discusses how object detection is the first and important step for any video analysis. It then reviews several approaches for object detection, including background subtraction, frame differencing, optical flow, and temporal differencing. The document also summarizes trends in object detection techniques presented in various research papers from 2009 to 2014, highlighting advantages and limitations of the different approaches.
Computer memory is expensive, and recording the data captured by a webcam needs memory. In order to minimize the memory used when recording human motion from the webcam, this algorithm uses motion detection, applied as a process that measures the change in speed or vector of an object in the field of view. The application only runs when motion is detected, and it automatically saves the captured image to its designated folder.
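The motion trigger described above amounts to frame differencing; a minimal sketch (thresholds and the function name are assumptions) is:

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, area_thresh=10):
    """Frame-differencing trigger: report motion when at least
    area_thresh pixels changed by more than pixel_thresh between
    consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return int((diff > pixel_thresh).sum()) >= area_thresh
```

Only when this returns True would the application write the captured frame to its designated folder, which is what saves the memory.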
This paper presents a survey of various video surveillance methods that improve security. Its aim is to review various moving object detection techniques, focusing on the detection of moving objects in video surveillance systems. Moving body detection is the first important task for any video surveillance system, and detecting moving objects is challenging. Tracking is required in higher-level applications that need the location and shape of an object in every frame. The survey describes the optical flow method, background subtraction, and frame differencing for detecting moving objects, and also describes a tracking method based on morphology techniques.
Keywords: frame separation, pre-processing, object detection using frame difference, optical flow, temporal differencing, background subtraction, object tracking.
Survey on Video Object Detection & Tracking (ijctet)
This document summarizes previous work on video object detection and tracking techniques. It discusses research papers that used techniques like active contour modeling, gradient-based attraction fields, neural fuzzy networks, and region-based contour extraction for object tracking. Background subtraction, frame differencing, optical flow, spatio-temporal features, Kalman filtering, and contour tracking are described as common video object detection techniques. The challenges of multi-object data association and state estimation for tracking multiple objects are also mentioned.
This document discusses object tracking techniques in computer vision. It begins by defining object tracking as segmenting an object from video frames and observing its motion and position over time. There are several challenges to object tracking, including illumination changes, object occlusion, and camera motion. The document then describes two main approaches to object tracking: feature-based methods which extract image features to track objects, and kernel-based methods which represent objects using shapes and track their motion. It provides examples of kernel tracking methods like mean shift and discusses challenges like overlapping objects. In conclusion, the document implemented and compared mean shift, CAMShift and contour tracking algorithms for object tracking.
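The mean shift tracker compared in that document can be illustrated by its window-update rule on a 2-D probability map such as a color back-projection (an illustrative sketch with assumed parameter names, not the document's code; OpenCV provides `cv2.meanShift`):

```python
import numpy as np

def mean_shift(prob, center, size=8, iters=20):
    """Mean-shift window update on a 2-D probability map: repeatedly
    move a size x size window to the probability-weighted centroid of
    the pixels it covers, until the center stops moving."""
    cy, cx = center
    h = size // 2
    for _ in range(iters):
        y0, x0 = max(0, cy - h), max(0, cx - h)
        win = prob[y0:y0 + size, x0:x0 + size]   # clipped at image edges
        total = win.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y0 + win.shape[0], x0:x0 + win.shape[1]]
        ny = int(round((ys * win).sum() / total))
        nx = int(round((xs * win).sum() / total))
        if (ny, nx) == (cy, cx):
            break                                # converged
        cy, cx = ny, nx
    return cy, cx
```

CAMShift extends this by also adapting the window size and orientation between frames.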
MULTIPLE OBJECTS TRACKING IN SURVEILLANCE VIDEO USING COLOR AND HU MOMENTS (sipij)
Multiple objects tracking finds its applications in many high-level vision analyses, such as object behaviour interpretation and gait recognition. In this paper, a feature-based method to track the multiple moving objects in a surveillance video sequence is proposed. Object tracking is done by extracting the color and Hu moments features from the motion-segmented object blob and establishing the association of objects in successive frames of the video sequence based on the Chi-square dissimilarity measure and a nearest neighbor classifier. The benchmark IEEE PETS and IEEE Change Detection datasets have been used to show the robustness of the proposed method, which is assessed quantitatively using the precision and recall accuracy metrics. Further, a comparative evaluation with related works has been carried out to exhibit the efficacy of the proposed method.
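The Chi-square dissimilarity and nearest-neighbor association described in this abstract can be sketched as follows (feature histograms here stand in for the paper's color and Hu-moment features; function names are mine):

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two feature histograms;
    0.0 for identical histograms, larger for more dissimilar ones."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return float(((h1 - h2) ** 2 / (h1 + h2 + eps)).sum())

def nearest_track(detection_hist, track_hists):
    """Associate a detection with the track whose feature histogram has
    the smallest Chi-square dissimilarity (nearest-neighbor rule)."""
    dists = [chi_square(detection_hist, t) for t in track_hists]
    return int(np.argmin(dists))
```

Repeating this association frame by frame links each blob to its track across the sequence.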
Motion and Feature Based Person Tracking (sajit1975)
This paper proposes a method for tracking people in indoor surveillance videos with challenges like illumination changes and occlusions. It uses background subtraction to detect moving objects and extracts color features to distinguish between occluded objects. The method tracks people by matching color clusters between frames and handles occlusions by using color information to accurately assign unique tags to each tracked person. Experiments on PETS dataset demonstrate the effectiveness of using color features for occlusion handling and person tracking in challenging indoor scenes.
MULTIPLE OBJECTS TRACKING IN SURVEILLANCE VIDEO USING COLOR AND HU MOMENTSsipij
Multiple objects tracking finds its applications in many high level vision analysis like object behaviour
interpretation and gait recognition. In this paper, a feature based method to track the multiple moving
objects in surveillance video sequence is proposed. Object tracking is done by extracting the color and Hu
moments features from the motion segmented object blob and establishing the association of objects in the
successive frames of the video sequence based on Chi-Square dissimilarity measure and nearest neighbor
classifier. The benchmark IEEE PETS and IEEE Change Detection datasets has been used to show the
robustness of the proposed method. The proposed method is assessed quantitatively using the precision and
recall accuracy metrics. Further, comparative evaluation with related works has been carried out to exhibit
the efficacy of the proposed method.
26.motion and feature based person trackingsajit1975
This paper proposes a method for tracking people in indoor surveillance videos with challenges like illumination changes and occlusions. It uses background subtraction to detect moving objects and extracts color features to distinguish between occluded objects. The method tracks people by matching color clusters between frames and handles occlusions by using color information to accurately assign unique tags to each tracked person. Experiments on PETS dataset demonstrate the effectiveness of using color features for occlusion handling and person tracking in challenging indoor scenes.
Similar to Abnormal Object Detection under Various Environments Using Self-Organizing Incremental Neural Networks (20)
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
3. Intelligent surveillance
Labor-intensive jobs are replaced by machines: security, traffic monitoring, crime prevention…
Monitoring any abnormal events or suspicious activities.
Self-learning: to detect abnormal objects in different environments automatically.
The main functions include moving object tracking, learning of activity patterns, and abnormal object detection.
INTRODUCTION
4. Explicit event recognition
All events are pre-defined in the knowledge base.
Modeling of heterogeneous events by labeling them with high-level
semantic descriptors.
The disadvantages of explicit event recognition
It is unable to learn an unknown event automatically.
It is difficult to pre-define all object activities.
The nature of event varies depending on the environment.
REVIEW OF RELATED WORKS
5. Abnormal object detection
Learning of activity patterns
Activity models are constructed from the environment.
An abnormal object corresponds to low-frequency activities in the scene.
Trajectory clustering algorithm
Hidden Markov models (HMMs)
Fuzzy self-organized map (FSOM)
Support Vector Machine (SVM)
REVIEW OF RELATED WORKS (CONT’D)
6. INTRODUCTION: SYSTEM ARCHITECTURE
Moving Object Tracking Phase: camera → GMM background modeling → object detection model → multi-object tracking model → trajectory post-processing → collect trajectory information → object profiles.
Learning Phase: object profiles → SOINN learning → normal trajectory module.
Detection Phase: object profiles → abnormal object detection (against the normal trajectory module) → abnormality results.
7. Gaussian mixture model
Every pixel in the image is modeled as the mixture
of k Gaussian distributions.
The pixel values with high occurrence and low
variation are deemed as the background.
MOVING OBJECT DETECTION/TRACKING :
BACKGROUND MODEL
Object detection
Current image
Anomaly detection
Learning trajectory
Occlusion Handling
Object tracking
Gaussian Mixture Model
Each component contributes w_i,t · η(x_t; μ_i,t, Σ_i,t), i = 1, …, k, where
w_i,t is the respective weight value,
η(x_t; μ_i,t, Σ_i,t) is the i-th Gaussian distribution,
μ_i,t and Σ_i,t are its mean and covariance, respectively, and
k is the number of Gaussian distributions.
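As a rough illustration of the per-pixel mixture above, the following Python sketch evaluates the mixture density for a single gray-level pixel; the function names and the example parameter values are illustrative, not taken from the paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density eta(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, means, sigmas):
    """p(x_t) = sum over i of w_i * eta(x_t; mu_i, sigma_i) for one pixel."""
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

# Example: k = 3 components for one gray-level pixel (made-up values)
p = mixture_pdf(120.0, weights=[0.6, 0.3, 0.1],
                means=[100.0, 150.0, 220.0], sigmas=[10.0, 15.0, 20.0])
```

In practice this density is maintained independently for every pixel; components with high weight and low variance are labeled background.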
8. Gaussian mixture model
The background model of a pixel (x,y) over the
learning period
x_t is the pixel value at time t:
p(x_t) = Σ_{i=1}^{k} w_i,t · η(x_t; μ_i,t, Σ_i,t)
MOVING OBJECT DETECTION/TRACKING :
BACKGROUND MODEL
9. Parameters update of GMM
w_i,t = (1 − α) · w_i,t−1 + α
μ_i,t = (1 − ρ) · μ_i,t−1 + ρ · x_t
σ²_i,t = (1 − ρ) · σ²_i,t−1 + ρ · (x_t − μ_i,t)ᵀ(x_t − μ_i,t)
MOVING OBJECT DETECTION/TRACKING :
BACKGROUND MODEL
Flowchart: each incoming pixel x_t is tested against the k Gaussian distributions with the match condition
|x_t − μ_i,t−1| ≤ c · σ_i,t−1.
If a match is found, the matched distribution is updated; otherwise a distribution is replaced.
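The match test and update rules above can be sketched for a scalar pixel as follows; the default values of `alpha`, `rho`, and `c` are illustrative assumptions, not the paper's settings.

```python
def is_match(x_t, mu, sigma, c=2.5):
    """Match test: |x_t - mu_{i,t-1}| <= c * sigma_{i,t-1}."""
    return abs(x_t - mu) <= c * sigma

def update_matched_gaussian(x_t, w, mu, sigma2, alpha=0.05, rho=0.1):
    """Online update of the matched Gaussian (scalar-pixel sketch)."""
    w_new = (1 - alpha) * w + alpha                  # weight grows toward 1
    mu_new = (1 - rho) * mu + rho * x_t              # mean drifts toward x_t
    sigma2_new = (1 - rho) * sigma2 + rho * (x_t - mu_new) ** 2
    return w_new, mu_new, sigma2_new
```

Repeated matches make a component's weight approach 1 and its mean follow the observed pixel value, which is how stable background colors come to dominate the mixture.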
10. Target extraction
Background subtraction method is used to obtain
the foreground image.
MOVING OBJECT DETECTION/TRACKING :
FOREGROUND DETECTION
F_t(x, y) = 1, if |x_t − μ_B,t−1| > D · Σ_B,t−1
F_t(x, y) = 0, otherwise
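A minimal sketch of the thresholding above on nested lists (a real implementation would operate on NumPy/OpenCV arrays); the default `D` is an illustrative assumption.

```python
def foreground_mask(frame, mu_b, sigma_b, D=2.5):
    """F_t(x, y) = 1 if |x_t - mu_B| > D * sigma_B, else 0."""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - mu_b[y][x]) > D * sigma_b[y][x] else 0
             for x in range(w)] for y in range(h)]
```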
11. MOVING OBJECT DETECTION/TRACKING :
SHADOW REMOVAL
Shadow characteristic
A shadow lowers the brightness of the pixels it covers, while the hue value barely changes.
Two information criteria
brightness distortion
chromatic distortion
Morphological operation
Eliminate some small
fragments.
Shadow(x, y) = 1 if α ≤ I^V(x, y) / B^V(x, y) ≤ β, |I^S(x, y) − B^S(x, y)| ≤ τ_S, and |I^H(x, y) − B^H(x, y)| ≤ τ_H; 0 otherwise,
where I and B are the current image and the background in HSV, α and β bound the brightness distortion, and τ_S, τ_H are the chromatic-distortion thresholds.
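The two criteria can be sketched as a per-pixel HSV test; the threshold values (`alpha`, `beta`, `tau_s`, `tau_h`) below are assumptions for illustration, not the paper's settings.

```python
def is_shadow(i_h, i_s, i_v, b_h, b_s, b_v,
              alpha=0.4, beta=0.9, tau_s=0.1, tau_h=10.0):
    """Shadow: darker than the background (brightness distortion within
    [alpha, beta]) with almost unchanged saturation and hue."""
    brightness_ok = alpha <= i_v / b_v <= beta   # brightness distortion
    saturation_ok = abs(i_s - b_s) <= tau_s      # chromatic distortion (S)
    hue_ok = abs(i_h - b_h) <= tau_h             # hue nearly unchanged (H)
    return brightness_ok and saturation_ok and hue_ok
```

Pixels flagged as shadow are removed from the foreground mask before the morphological cleanup.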
12. MOVING OBJECT DETECTION/TRACKING :
BLOBS TRACKING
Blobs (Binary large object)
Connected component labeling
Moving object filter
Removes noise, non-moving objects, and waving trees.
A candidate blob must keep a stable size and speed in successive frames.
Figure: a blob that stays stable over successive frames (t−Δ to t+Δ) is promoted from candidate to moving object.
13. MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
Scenarios for tracking in multiple moving
objects
Non-occlusion phase
Occlusion phase
Figure: Object-1 and Object-2 merge into one blob during the occlusion phase and separate again afterwards.
14. MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
Mean Shift Algorithm
Mean shift algorithm climbs the gradient of a
probability distribution to find the nearest domain
mode (peak)
15. MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
Mean Shift Algorithm
1. Choose the initial location of the search window.
2. Calculate the PDI (back-projection image) from the histogram of the object.
3. Use the mean shift algorithm to find the search window center, and then update the location of the object.
4. Repeat step 3 until convergence.
16. MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
Mean Shift Algorithm
Find the centroid of the object in the search window.
Compute the zeroth moment within W:
M₀₀ = Σ_{(x,y)∈W} I(x, y)
Find the first moments for x and y:
M₁₀ = Σ_{(x,y)∈W} x · I(x, y),  M₀₁ = Σ_{(x,y)∈W} y · I(x, y)
Compute the centroid within W:
(x_c, y_c) = (M₁₀ / M₀₀, M₀₁ / M₀₀)
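The moment computation above, sketched over a plain 2-D intensity list; the window is given as (x0, y0, x1, y1) with exclusive right/bottom bounds, and the names are illustrative.

```python
def window_centroid(image, window):
    """Centroid (x_c, y_c) of intensity inside search window W via image moments."""
    x0, y0, x1, y1 = window
    m00 = m10 = m01 = 0.0
    for y in range(y0, y1):
        for x in range(x0, x1):
            v = image[y][x]
            m00 += v        # zeroth moment M00
            m10 += x * v    # first moment M10
            m01 += y * v    # first moment M01
    return m10 / m00, m01 / m00
```

Mean shift re-centers the window on this centroid at every iteration until it stops moving.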
17. MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
Disadvantages of mean shift
The search window of a target object must be initialized manually; therefore, plain mean shift is not applicable to an automated intelligent surveillance system.
The histogram of a target object cannot be updated automatically as time or illumination changes.
When the histograms of the target and the background are similar, tracking easily fails.
When two moving objects with similar histograms occlude each other, both tracking windows may end up following one object, leaving the other untracked.
18. Modified Mean Shift
More information, such as the foreground mask and the moving direction of objects, is added into the back-projection image (PDI).
MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
19. Steps of Modified Mean Shift
1. Initial location of the search window by making
use of Blobs tracking.
2. Use Kalman filter to predict the location of an
object and set the location as the initial search
window location of mean shift tracking method.
3. Use foreground mask to decrease the influence of
background in the back-projection image of the
object. Use Mean shift algorithm to find the search
window center, and then update the location of the
object.
4. Go to Step 3. Repeat the above steps until
convergence (the search window location moves
less than a preset threshold).
5. Use Kalman filter to correct the search window
location. It can provide a better estimation of
object position.
MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
20. Modified Mean Shift
Use Kalman filter to correct the search window
location. It can provide a better estimation of object
position.
It can achieve a stable and accurate mean shift
tracking result.
It gives a more accurate location and size of the search window for mean shift, and helps solve the occlusion problem.
MOVING OBJECT DETECTION/TRACKING :
OCCLUSION HANDLING
21. Tracking flowchart
MOVING OBJECT DETECTION/TRACKING :
HANDLING OF A MISSED TRACKING OBJECT
Flowchart: blobs extracted from the current image are matched against the blob list; if occlusion occurs, mean shift with the Kalman filter and the foreground mask takes over tracking; otherwise the blob list of the multi-object tracking model is updated directly.
22. MOVING OBJECT DETECTION/TRACKING :
HANDLING OF A MISSED TRACKING OBJECT
Reasons for a missed tracking object
The speed of the moving object is too fast.
Network transmission delay.
Kalman correction
Predict the position of a blob after τ frames using the Kalman-estimated velocity:
p_kalman = p_orig + V_kalman · τ
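The velocity-based prediction is a one-liner; a sketch for 2-D positions, with the tuple convention assumed for illustration:

```python
def predict_position(p_orig, v_kalman, tau):
    """p_kalman = p_orig + V_kalman * tau, applied per coordinate (x, y)."""
    return (p_orig[0] + v_kalman[0] * tau,
            p_orig[1] + v_kalman[1] * tau)
```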
24. MOVING OBJECT DETECTION/TRACKING :
HANDLING OF A MISSED TRACKING OBJECT
Kalman smoothing of the trajectory
Due to light and shadow in practical environments, the raw trajectory is often jagged, so the Kalman filter is used to smooth it.
26. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Characteristics of SOINN
Characteristics
Unsupervised learning method
Neurons are self-organized with no predefined
network structure and size
Approximate the topological structure of input data
Robust to noise
SOINN: Self-Organizing Incremental Neural Network
27. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Structure of SOINN
Based on SOM (Self-Organizing Map)
Two-layer competitive network
Input layer → first layer → second layer.
First layer: competes for the input data.
Second layer: competes for the output of the first layer.
The topology structure and weight vectors of the second layer form the output.
28. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Initialize: the node set contains only two nodes, A = {c₁, c₂}.
32. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Insert the input data as a new node:
if ‖ξ − W_s1‖ > T_s1 or ‖ξ − W_s2‖ > T_s2,
then A = A ∪ {r} and W_r = ξ.
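A sketch of this insertion rule: find the winner s1 and second winner s2 of input ξ and insert ξ as a new node when it exceeds either similarity threshold. The list-based node representation and the names are illustrative; computing the new node's own threshold is left out.

```python
import math

def maybe_insert(xi, nodes, thresholds):
    """Insert xi into node set A when it is far from both winners.

    nodes: list of weight vectors W_i; thresholds[i] is the similarity
    threshold T_i of node i. Returns True when a node was inserted.
    """
    order = sorted(range(len(nodes)), key=lambda i: math.dist(xi, nodes[i]))
    s1, s2 = order[0], order[1]
    if (math.dist(xi, nodes[s1]) > thresholds[s1]
            or math.dist(xi, nodes[s2]) > thresholds[s2]):
        nodes.append(list(xi))           # A = A ∪ {r}, W_r = ξ
        thresholds.append(float("inf"))  # placeholder until T_r is computed
        return True
    return False
```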
33. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Input new pattern
Calculate the
similarity threshold.
34. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Connect the winner and the second winner:
C = C ∪ {(s₁, s₂)}
35. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
It has this structure
36. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Find the winner and
second winner of input data.
37. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Calculate the
similarity threshold.
38. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Update the weight vector of the winner and its neighbors:
ΔW_s1 = ε₁ · (ξ − W_s1)
ΔW_i = ε₂ · (ξ − W_i), ∀i ∈ N_s1
where ε₁ = 1/M_s1 and ε₂ = 1/(100 · M_i).
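The update step as a sketch, where M counts how many times a node has been the winner; vectors are plain lists of floats and the names are illustrative.

```python
def update_winner_and_neighbors(xi, w_s1, m_s1, neighbors, m_neighbors):
    """Move the winner and its topological neighbors toward input xi.

    eps1 = 1/M_s1 for the winner; eps2 = 1/(100 * M_i) for neighbor i.
    """
    eps1 = 1.0 / m_s1
    w_s1 = [w + eps1 * (x - w) for w, x in zip(w_s1, xi)]
    moved = []
    for w_i, m_i in zip(neighbors, m_neighbors):
        eps2 = 1.0 / (100.0 * m_i)
        moved.append([w + eps2 * (x - w) for w, x in zip(w_i, xi)])
    return w_s1, moved
```

Because the learning rates shrink as the win counts grow, frequently visited nodes settle down while rarely visited ones stay plastic.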
39. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
It has this structure.
40. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Find the nodes with at most one neighbor:
if L_i = 0 or L_i = 1 for a node i ∈ A, then A = A \ {i}.
41. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
Delete such nodes.
42. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Normal trajectory module
Trajectory information of objects is collected.
When trajectories are input, SOINN is used to construct a normal trajectory module.
The module is then used to analyze moving objects in the real-time camera frames and find abnormal objects.
Normal Trajectory Module
Position Velocity
43. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Abnormal object detection
Decide whether an observed object is abnormal or not.
For the i-th object trajectory
T_i = {(x₁, y₁, dx₁, dy₁), ⋯, (x_n, y_n, dx_n, dy_n)},
T_i is matched against the normal trajectory module.
47. MOVING OBJECT DETECTION/TRACKING :
TRAJECTORY FEATURE EXTRACTION
Algorithm of SOINN
In the real world, abnormality is a fuzzy concept. In addition, the occurrence of an abnormal object is continuous rather than discrete.
R_d = (D_sum − T_sum) / (D_sum + T_sum)
C = C + 1 if R_d > R_T; otherwise C is unchanged.
AO = true if C ≥ C_T; false otherwise.
R_T is set from 0.6 to 0.8.
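The decision rule above can be sketched as follows; the counter threshold `c_t` and the default `r_t` are illustrative assumptions (the slides only state that R_T ranges from 0.6 to 0.8).

```python
def dissimilarity_ratio(d_sum, t_sum):
    """R_d = (D_sum - T_sum) / (D_sum + T_sum)."""
    return (d_sum - t_sum) / (d_sum + t_sum)

def is_abnormal(r_d_sequence, r_t=0.7, c_t=5):
    """AO = true when the count C of frames with R_d > R_T reaches C_T."""
    c = sum(1 for r_d in r_d_sequence if r_d > r_t)
    return c >= c_t
```

Counting exceedances over several frames, instead of flagging a single frame, reflects the continuous (non-discrete) nature of abnormal behavior.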
49. Foreground detection
Including the GMM,
shadow removal method,
morphological operation
Avg. execution time: 0.025–0.030 sec per 320×240 image.
EXPERIMENTAL RESULTS :
MOVING OBJECT TRACKING
50. Multi-object tracking
Including blob tracking and the modified mean shift
In the occlusion case, each occluded object takes an
extra 0.002–0.003 sec to handle occlusion.
Avg. execution time: 0.004–0.018 sec per image (320×240)
51. Comparison of scenario characteristics

Scenario         Trajectory complexity   Speed of objects   Occlusion frequency
School campus    High                    Slow               Medium
Roads            Medium                  Fast               High
One-Way Street   Low                     Very fast          Medium
52. Tracking success rate:
tracks are classified as successful or failed at different resolutions
Successful: tracked successfully for more than 90% of the ground truth
Failed: tracked successfully for less than 90% of the ground truth
Avg. success rate: 96.9%

Scenario         Number of objects   Success rate (%)
                                     320×240   240×180   160×120
School campus    124                 99.1%     100%      99.1%
Roads            158                 96.8%     96.2%     96.2%
One-Way Street   88                  94.3%     95.5%     95.5%
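The success criterion above (a track counts as successful when it covers more than 90% of its ground-truth frames) can be sketched as a small function; the frame counts in the example are made up for illustration.

```python
# Classify tracks as successful (> 90% of ground-truth frames covered)
# and compute the fraction of successful tracks.

def success_rate(tracked_frames, truth_frames, threshold=0.9):
    ok = sum(1 for t, g in zip(tracked_frames, truth_frames)
             if t / g > threshold)
    return ok / len(truth_frames)

# Three of four hypothetical objects exceed the 90% coverage threshold.
print(success_rate([95, 100, 80, 99], [100, 100, 100, 100]))  # 0.75
```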
53. Reasons for tracking error
The object and background colors are similar.
The object is too large (it occupies too much of the camera frame).
The object moves too fast.
57. Performance of abnormal object detection
ACC is the ratio of correctly classified objects (true positives
plus true negatives) to all objects.
RC is the probability that abnormal objects are detected.
EXPERIMENTAL RESULTS :
ABNORMAL OBJECT DETECTION
Scenario         TP   TN    FP   FN   Accuracy (ACC)   Recall (RC)
School campus    13   137   0    0    100.0%           100.0%
Roads            10   231   3    1    98.3%            90.9%
One-Way Street   18   63    1    0    98.8%            100%
𝐴𝐶𝐶 = (𝑇𝑃 + 𝑇𝑁) / (𝑇𝑃 + 𝐹𝑁 + 𝐹𝑃 + 𝑇𝑁),  𝑅𝐶 = 𝑇𝑃 / (𝑇𝑃 + 𝐹𝑁)
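The ACC and RC formulas can be checked against the One-Way Street row of the table (TP = 18, TN = 63, FP = 1, FN = 0); the function name is illustrative.

```python
# Compute accuracy and recall from a confusion-matrix count.

def acc_rc(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + fn + fp + tn)  # fraction classified correctly
    rc = tp / (tp + fn)                    # fraction of abnormals detected
    return acc, rc

acc, rc = acc_rc(18, 63, 1, 0)  # One-Way Street row
print(f"ACC={acc:.1%}, RC={rc:.1%}")  # ACC=98.8%, RC=100.0%
```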
58. Reasons for misdetection
Most misdetection cases are caused by tracking errors: the wrong
trajectory information is then mismatched against the normal
trajectory model.
If there are too many trajectory instances for SOINN to learn, the
complexity is high and may cause an over-fitting problem.
59. The proposed method solves the problem of object occlusion,
effectively extracts object trajectories, and reduces their noise.
The proposed method is a self-organizing method that learns
trajectories for abnormal object detection.
Abnormal object detection was evaluated under 3 different scenarios:
campus squares, roads, and a one-way street.
Avg. accuracy is 99% and recall is 96.7%.
Avg. execution time is 33 to 67 milliseconds (real-time).
CONCLUSIONS
60. Foreground detection
A better background model and shadow removal method,
to support detection under different weather conditions.
Occlusion handling
The occlusion-handling approach of V. Papadourakis and A. Argyros,
and the GMM used in object modeling, can be refined to improve
tracking accuracy.
Learning method
A faster and more efficient learning method:
the enhanced SOINN of S. Furao, T. Ogura, and O. Hasegawa could
reinforce the original SOINN.
Abnormal threshold
A more accurate abnormal threshold value.
FUTURE WORKS