The document describes an algorithm for detecting abandoned objects in crowded environments using video surveillance. The algorithm comprises four stages:
1) Detection of unattended bags in video frames using blob-analysis techniques such as background subtraction and morphological operations.
2) Tracking the detected objects across frames using properties such as area, centroid, and shape.
3) Identifying abandoned objects by checking whether objects remain stationary for a threshold number of frames.
4) Tracing back through previous frames to find the likely owner of the abandoned object.
The computational modules and the visualization of results are also discussed.
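Stage 3 above can be sketched as a simple per-track check. This is an illustrative assumption about the rule's shape, not the paper's exact implementation; the function name and the pixel tolerance are hypothetical, while the 45-frame default mirrors the alarmCount parameter used in the MATLAB code later in this document.

```python
# Hedged sketch of stage 3: flag an object as abandoned once its centroid
# has stayed (nearly) fixed for a threshold number of frames.
# `is_abandoned` and `tolerance` are illustrative names, not from the paper.

def is_abandoned(centroids, threshold_frames=45, tolerance=2.0):
    """Return True if the last `threshold_frames` centroids all lie
    within `tolerance` pixels of the first of them."""
    if len(centroids) < threshold_frames:
        return False
    recent = centroids[-threshold_frames:]
    x0, y0 = recent[0]
    return all(abs(x - x0) <= tolerance and abs(y - y0) <= tolerance
               for x, y in recent)

# A track that sat still for 45 frames triggers the alarm...
still = [(100.0, 80.0)] * 45
# ...while a steadily moving track does not.
moving = [(100.0 + i, 80.0) for i in range(45)]
```

The tolerance absorbs small centroid jitter from segmentation noise, so a bag is not "un-abandoned" by a one-pixel wobble.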
2. INTRODUCTION
• Visual surveillance systems today consist of a large number of cameras, usually monitored by a relatively small team of human operators.
• Recent studies have shown that the average human can focus on tracking the movements of up to four dynamic targets simultaneously, and can efficiently detect changes to the attended targets but not to the neighboring distractors.
• When targets and distractors are too close, it becomes difficult to individuate the targets and maintain tracking efficiently.
• Further, according to the classical spotlight theory of visual attention, people can attend to only one region of space (i.e., one area in view) at a time, or at most two.
• Simply stated, the human visual processing capability and attentiveness required for the effective monitoring of crowded scenes or multiple screens within a surveillance system are limited.
4. COMPUTATIONAL MODULE
I. Detection of Unattended Baggage
• The goal of the first module of the algorithm is the detection of any stationary baggage. Until such an event occurs, it is unnecessary to track and monitor all ongoing activities in the scene. This not only cuts computational costs but also avoids ambiguities arising from tracking inaccuracies in the presence of heavy movement and occlusion.
• The representation of bags is established using typical shape and size characteristics. The classifier is trained off-line using the following features:
• Compactness – the ratio of area to squared perimeter (multiplied by 4π for normalization)
• Solidity ratio – the extent to which the blob area covers the convex-hull area
• Eccentricity – the ratio of the major axis to the minor axis of an ellipse that envelops the blob
• To ensure that the bag remains stationary while left alone, and to reinforce the decision of the classifier, each suspect blob is tracked over a number of consecutive frames (usually around 10) to check for consistency of detection and position, before declaring it unattended and moving on to look for its potential owner(s).
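The three classifier features above reduce to simple ratios of basic blob measurements. A minimal sketch, with illustrative function names (the paper does not specify an implementation):

```python
import math

# Sketch of the three off-line classifier features described above,
# computed from basic blob measurements (area, perimeter, convex-hull
# area, ellipse axes). Function names are illustrative assumptions.

def compactness(area, perimeter):
    # 4*pi*area / perimeter^2 -> exactly 1.0 for a perfect circle
    return 4 * math.pi * area / perimeter ** 2

def solidity(blob_area, convex_hull_area):
    # Fraction of the convex-hull area that the blob actually fills
    return blob_area / convex_hull_area

def eccentricity(major_axis, minor_axis):
    # Ratio of major to minor axis of the enveloping ellipse
    return major_axis / minor_axis

# Sanity check: a circle of radius r has area pi*r^2 and perimeter 2*pi*r,
# so its compactness normalizes to 1.
r = 5.0
c = compactness(math.pi * r ** 2, 2 * math.pi * r)
```

The 4π factor makes compactness scale-free and equal to 1 for a circle, so elongated or ragged blobs (people, shadows) score lower than compact bag-like blobs.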
6. CURRENT APPROACH: BLOB ANALYSIS SYSTEM
• Extract a region of interest (ROI), thus eliminating video areas that are unlikely to contain abandoned objects.
• Perform video segmentation using background subtraction.
• Track objects based on their area and centroid statistics.
• Visualize the results.
7. EXTRACT A REGION OF INTEREST (ROI)
• The ROI is defined as roi = [x y width height], where x and y locate the portion of the image on which processing is to be performed.
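The ROI step is just a crop. A toy sketch (the frame here is a nested list standing in for an image array, an assumption for illustration; the MATLAB code later uses matrix slicing for the same purpose):

```python
# Minimal sketch of ROI selection: roi = [x, y, width, height] crops the
# frame so that later stages only see that region. `crop_roi` is an
# illustrative helper, not part of the original system.

def crop_roi(frame, roi):
    x, y, w, h = roi
    return [row[x:x + w] for row in frame[y:y + h]]

# 6-row x 8-column toy "image" whose pixels record their own coordinates.
frame = [[(r, c) for c in range(8)] for r in range(6)]
patch = crop_roi(frame, [2, 1, 3, 4])  # 3 wide, 4 tall, top-left at (x=2, y=1)
```

Cropping first shrinks every downstream computation (segmentation, blob analysis, tracking) to the area where abandoned objects can plausibly appear.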
8. PERFORM VIDEO SEGMENTATION USING BACKGROUND SUBTRACTION
• Create a ColorSpaceConverter System object to convert the RGB image to Y'CbCr format.
• Create an Autothresholder System object with a threshold scale factor.
• Create a MorphologicalClose System object to fill in small gaps in the detected objects.
• Track objects based on their area and centroid statistics.
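The segmentation step above stores the first frame as the background and thresholds each later frame on its absolute difference from it. A toy sketch under simplifying assumptions: frames here are 2-D lists of intensities rather than the Y'CbCr planes the real pipeline uses, and the 0.05 threshold mirrors the chrominance threshold in the MATLAB code below.

```python
# Hedged sketch of background subtraction: pixels whose absolute
# difference from the stored background exceeds a threshold are marked
# foreground. `segment` is an illustrative name.

def segment(frame, background, threshold=0.05):
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[0.1, 0.1],
              [0.1, 0.1]]
frame      = [[0.1, 0.9],   # one pixel changed: a new object appeared
              [0.1, 0.1]]
mask = segment(frame, background)
```

The real system then applies morphological closing to this mask so that an object fragmented by noise becomes one connected blob.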
13. CODE
roi = [100 80 360 240];   % region of interest: roi = [x y width height]
maxNumObj = 200;          % maximum number of objects to track
alarmCount = 45;          % frames an object must remain stationary before an alarm is raised
maxConsecutiveMiss = 4;   % maximum frames an abandoned object can be hidden before it is no longer tracked
areaChangeFraction = 20;      % maximum allowable change in object area, in percent
centroidChangeFraction = 30;  % maximum allowable change in object centroid, in percent
minPersistenceRatio = 0.3;    % minimum ratio of frames in which an object is detected to total frames, for that object to be tracked

% Offsets for drawing bounding boxes in the original input video.
% int32 converts to 32-bit signed integers; repmat repeats copies of a matrix.
PtsOffset = int32(repmat([roi(1), roi(2), 0, 0], [maxNumObj 1]));

% Create a VideoFileReader System object to read video from a file.
hVideoSrc = vision.VideoFileReader;
hVideoSrc.Filename = 'Abandoned_Bag1.mp4';
hVideoSrc.VideoOutputDataType = 'single';
14.
% Create a ColorSpaceConverter System object to convert the RGB image to Y'CbCr format.
% (Y'CbCr is defined by a mathematical coordinate transformation from an associated RGB color space.)
hColorConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');

% Create an Autothresholder System object to binarize the difference images.
hAutothreshold = vision.Autothresholder('ThresholdScaleFactor', 1.3);

% Create a MorphologicalClose System object to fill in small gaps in the detected objects.
hClosing = vision.MorphologicalClose('Neighborhood', strel('square',5));

% Create a BlobAnalysis System object to find the area, centroid, and
% bounding box of the objects in the video.
hBlob = vision.BlobAnalysis('MaximumCount', maxNumObj, 'ExcludeBorderBlobs', true);
hBlob.MinimumBlobArea = 100;
hBlob.MaximumBlobArea = 2500;

% Create System objects to display the results.
pos = [10 300 roi(3)+25 roi(4)+25];
hAbandonedObjects = vision.VideoPlayer('Name', 'Abandoned Objects', 'Position', pos);
pos(1) = 46 + roi(3);   % move the next viewer to the right
hAllObjects = vision.VideoPlayer('Name', 'All Objects', 'Position', pos);
pos = [80+2*roi(3) 300 roi(3)-roi(1)+25 roi(4)-roi(2)+25];
hThresholdDisplay = vision.VideoPlayer('Name', 'Threshold', 'Position', pos);
15. VIDEO PROCESSING LOOP
% Perform abandoned object detection on the input video.
% This loop uses the System objects instantiated above.
firsttime = true;
while ~isDone(hVideoSrc)
    Im = step(hVideoSrc);

    % Select the region of interest from the original video.
    OutIm = Im(roi(2):end, roi(1):end, :);
    YCbCr = step(hColorConv, OutIm);
    CbCr = complex(YCbCr(:,:,2), YCbCr(:,:,3));

    % Store the first video frame as the background.
    if firsttime
        firsttime = false;
        BkgY = YCbCr(:,:,1);
        BkgCbCr = CbCr;
    end
    SegY = step(hAutothreshold, abs(YCbCr(:,:,1) - BkgY));
    SegCbCr = abs(CbCr - BkgCbCr) > 0.05;

    % Fill in small gaps in the detected objects.
    Segmented = step(hClosing, SegY | SegCbCr);

    % Perform blob analysis.
    [Area, Centroid, BBox] = step(hBlob, Segmented);

    % Call the helper function that tracks the identified objects and
    % returns the bounding boxes and the number of abandoned objects.
    [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
        areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
        minPersistenceRatio, alarmCount);

    % Display the abandoned object detection results, and insert the
    % number of abandoned objects in the frame.
    Imr = insertShape(Im, 'FilledRectangle', OutBBox + PtsOffset, ...
        'Color', 'red', 'Opacity', 0.5);
    Imr = insertText(Imr, [1 1], OutCount);
    step(hAbandonedObjects, Imr);

    % Display all the detected objects, with the object count and the
    % ROI outline drawn on the frame.
    BlobCount = size(BBox, 1);
    BBoxOffset = BBox + int32(repmat([roi(1) roi(2) 0 0], [BlobCount 1]));
    Imr = insertShape(Im, 'Rectangle', BBoxOffset, 'Color', 'green');
    Imr = insertText(Imr, [1 1], OutCount);
    Imr = insertShape(Imr, 'Rectangle', roi);
    step(hAllObjects, Imr);

    % Display the segmented video.
    SegBBox = PtsOffset;
    SegBBox(1:BlobCount, :) = BBox;
    SegIm = insertShape(double(repmat(Segmented, [1 1 3])), 'Rectangle', ...
        SegBBox, 'Color', 'green');
    step(hThresholdDisplay, SegIm);
end
release(hVideoSrc);
h = msgbox('The object has been detected!');
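The bookkeeping that videoobjtracker is described as performing, driven by the maxConsecutiveMiss and minPersistenceRatio parameters above, can be sketched per track. This stand-in class is an illustration of those two pruning rules only, not the original helper function; its structure is an assumption.

```python
# Hedged sketch of the track-pruning rules configured earlier: a track
# survives at most maxConsecutiveMiss frames without a detection, and is
# kept only while its detection ratio stays above minPersistenceRatio.
# `Track` is an illustrative stand-in for the videoobjtracker helper.

class Track:
    def __init__(self):
        self.detected = 0   # frames in which the object was seen
        self.total = 0      # frames since the track was created
        self.misses = 0     # consecutive frames without a detection

    def update(self, seen):
        self.total += 1
        if seen:
            self.detected += 1
            self.misses = 0
        else:
            self.misses += 1

    def alive(self, min_persistence_ratio=0.3, max_consecutive_miss=4):
        return (self.misses <= max_consecutive_miss
                and self.detected / self.total >= min_persistence_ratio)

# Seen for three frames, then briefly occluded: the track survives.
t = Track()
for seen in [True, True, True, False, False]:
    t.update(seen)
```

The two rules are complementary: the miss limit drops objects that truly left the scene, while the persistence ratio drops flickering false detections.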
20. REFERENCES
• Research paper by Medha Bhargava, Chia-Chih Chen, M. S. Ryoo, and J. K. Aggarwal, University of Texas at Austin.
• Research paper "Multiple Object Tracking" by C. Sears and Z. Pylyshyn.
• Research paper by J. Martinez-del-Rincon, J. Elías Herrero, Jorge Gómez, and Carlos Orrite Uruñuela, "Automatic Left Luggage Detection and Tracking Using Multi-Camera."
• Anoop Mathew.
21. THANK YOU
MADE BY: ARUSHI CHAUDHRY AND SAUMYA TIWARI