Human behaviour analysis is of great interest in the field of artificial intelligence. Specifically, human action recognition deals with the lowest level of semantic interpretation of meaningful human behaviours, such as walking, sitting or falling. In the field of ambient-assisted living, the recognition of such actions at home can support several safety and health care services for the independent living of elderly or impaired people. In this sense, this thesis aims to provide valuable advances in vision-based human action recognition for ambient-assisted living scenarios. Initially, a taxonomy is proposed in order to classify the different levels of human behaviour analysis and to unify existing definitions. Then, a human action recognition method is presented that is based on the fusion of multiple camera views and key pose sequence recognition, and that performs in real time. By relying on the fusion of multiple views, sufficient correlated data can be obtained despite possible occlusions, noise and unfavourable viewing angles. A visual feature is proposed that relies only on the boundary points of the human silhouette and does not need the actual RGB colour image. Furthermore, several optimisations and extensions of this method are proposed. In this regard, evolutionary algorithms are employed for the selection of scenario-specific configurations, significantly improving the robustness and accuracy of the classification. In order to support online learning of such parameters, an adaptive and incremental learning technique is introduced. Last but not least, the presented method is also extended to support the recognition of human actions in continuous video streams. Outstanding results have been obtained on several publicly available datasets, achieving the robustness required by real-world applications.
Therefore, this thesis may pave the way for more advanced human behaviour analysis techniques, such as the recognition of complex activities, personal routines and abnormal behaviour detection.
Vision-based Recognition of Human Behaviour for Intelligent Environments
1. Vision-based Recognition of Human Behaviour for Intelligent Environments
Alexandros Andre Chaaraoui
Departamento de Tecnología Informática y Computación
Universidad de Alicante
alexandros@ua.es
Supervisor: Dr. Francisco Flórez-Revuelta
January 20, 2014
Alexandros Andre Chaaraoui (UA)
PhD Thesis
January 20, 2014
1 / 45
5. Introduction
Objectives
Introduction (II)
In this thesis, our goal is to support the development of AAL services for smart homes with advances in human behaviour analysis.
Main objectives
1. Establish the research framework
2. Propose a method for the recognition of human behaviour
3. Satisfy specific demands of AAL services
4. Reach robustness for different scenarios
7. Research framework
Related work
Research framework
Proposed taxonomy
Figure 1: Human Behaviour Analysis levels — Classification proposed in Chaaraoui et al. (2012).
8. Research framework
Proposal
Conclusions of the analysis
Motion
Motion, pose and gaze estimation is the most resolved level.
Action
Action recognition is currently receiving the greatest interest both from research and industry.
Activity-Behaviour
Activity and long-term behaviour recognition is performed directly based on low-level sensor data, instead of using action recognition.
9. Research framework
Proposal
Figure 2: Architecture of the intelligent monitoring system to promote independent living at home and support AAL services. Multiple cameras feed motion detection and per-view human behaviour analysis, which are combined in a multi-view human behaviour analysis stage together with environmental sensor information; a reasoning system with privacy protection and long-term analysis produces events, alarms, and notifications to actuators and caregivers.
13. Contributions
Pose representation
Radial Summary (I)
Figure 4: Overview of the feature extraction process: 1) all the contour points are assigned to the corresponding radial bin; 2) for each bin, a summary representation is obtained (example with 18 bins).
14. Contributions
Pose representation
Radial Summary (II)
Figure 5: Graphical explanation of the statistical range f_range of a sample radial bin.
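The radial-bin feature above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: it assumes the silhouette contour is given as (x, y) points, uses the statistical range (max minus min) of point-to-centroid distances as the per-bin summary, and normalises the result for scale invariance; the exact summary statistic and normalisation in the thesis may differ.

```python
import numpy as np

def radial_summary(contour, num_bins=18):
    """Radial-bin silhouette feature (sketch, illustrative details).

    Each contour point is assigned to an angular bin around the
    centroid; each bin is summarised by the statistical range
    (max - min) of its point-to-centroid distances.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    diffs = contour - centroid
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])           # in [-pi, pi]
    dists = np.linalg.norm(diffs, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    feature = np.zeros(num_bins)
    for b in range(num_bins):
        d = dists[bins == b]
        if d.size:
            feature[b] = d.max() - d.min()                  # statistical range
    norm = np.linalg.norm(feature)
    return feature / norm if norm > 0 else feature          # scale invariance
```

Because only contour points are used, the feature never needs the RGB image, which matches the privacy argument made later in the presentation.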
15. Contributions
Fusion of multiple views
Fusion of multiple views
Multiple views of the same field of view provide:
Additional characteristic data
Advantages with respect to occlusions
Advantages with respect to unfavourable viewing angles (ambiguous actions)
However, difficulties have been observed:
The recognition does not necessarily improve
Performance issues (temporal and spatial)
Burdensome, highly-restricted systems (3D pose estimation, calibrated and synchronised camera networks, ...)
16. Contributions
Fusion of multiple views
Weighted feature fusion scheme
Weights are learnt for each view and action:
Figure 6: Overview of the feature fusion process of the multi-view pose representation. This example shows five different views of a specific pose taken from the walk action class from the IXMAS dataset (Weinland et al., 2006).
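The weighted feature fusion can be sketched as follows. This is a hypothetical simplification: it assumes the learnt per-view weights (here passed in as `weights`, e.g. the weights learnt for one action class) scale each view's pose representation before concatenation; how the weights are learnt and combined in the thesis is not reproduced here.

```python
import numpy as np

def fuse_views(view_features, weights):
    """Weighted feature-level fusion of multiple camera views (sketch).

    view_features : list of 1-D arrays, one pose representation per view.
    weights       : per-view weights (e.g. learnt for a given action class).
    Returns the weighted concatenation of all views.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalise the view weights
    return np.concatenate([w * np.asarray(f, dtype=float)
                           for w, f in zip(weights, view_features)])
```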
17. Contributions
Action classification
Action classification
Figure 7: Outline of the learning stage. Using the pose representations, key poses are obtained for each action. In this way, a bag-of-key-poses model is learnt. The temporal relation between key poses is modelled using sequences of key poses.
18. Contributions
Action classification
Bag of key poses
Figure 8: Learning scheme of the bag-of-key-poses model. For each action class, K key poses are obtained separately and then joined together.
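The learning scheme of Figure 8 can be sketched as below: K key poses are clustered per action class and then joined into a single bag, each keeping its class label. This is a minimal sketch using a plain K-means (Lloyd's algorithm); the clustering procedure and any refinements used in the thesis may differ.

```python
import numpy as np

def learn_bag_of_key_poses(sequences_by_action, K=8, iters=20, seed=0):
    """Learn a bag-of-key-poses model (sketch with a plain K-means).

    sequences_by_action maps an action label to a list of pose
    representations; K key poses are clustered per class and then
    joined into a single bag, keeping each key pose's class label.
    """
    rng = np.random.default_rng(seed)
    bag, labels = [], []
    for action, poses in sequences_by_action.items():
        X = np.asarray(poses, dtype=float)
        centres = X[rng.choice(len(X), size=min(K, len(X)), replace=False)]
        for _ in range(iters):                       # Lloyd's iterations
            assign = np.argmin(
                ((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
            for k in range(len(centres)):
                if np.any(assign == k):
                    centres[k] = X[assign == k].mean(axis=0)
        bag.extend(centres)
        labels.extend([action] * len(centres))
    return np.asarray(bag), labels
```

Clustering per class (rather than over all classes at once) keeps rare but discriminative poses of one action from being absorbed by more frequent poses of another.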
19. Contributions
Action classification
Recognition
Figure 9: Outline of the recognition stage. The unknown sequence of key poses is obtained and compared to the known sequences. Through sequence matching, the action of the video sequence can be recognised.
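The sequence matching via DTW mentioned in Figure 9 can be sketched as a nearest-neighbour classifier. The sketch assumes known sequences are stored as (sequence, label) pairs and uses Euclidean distance between pose features as the local cost; the exact distance and matching constraints in the thesis may differ.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two key-pose sequences (sketch)."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(unknown, known_sequences):
    """Nearest-neighbour action recognition via DTW sequence matching."""
    return min(known_sequences, key=lambda kv: dtw_distance(unknown, kv[0]))[1]
```

DTW tolerates differences in execution speed between performances of the same action, which is why it suits key-pose sequences better than frame-by-frame comparison.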
20. Contributions
Evolutionary-based optimisation
Evolutionary-based optimisation
Genetic feature subset selection
By means of a genetic algorithm for binary feature selection, the interesting body parts can be selected, and redundant or noisy body parts can be ignored.
Figure 10: Example of a result provided by genetic feature selection (dismissed radial bins are shaded in grey).
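A binary genetic algorithm of this kind can be sketched as below. This is a toy illustration: the selection, crossover and mutation operators are generic choices, and `fitness` stands in for what the thesis would use (a cross-validated recognition rate over the selected radial bins).

```python
import numpy as np

def genetic_feature_selection(num_features, fitness, pop_size=20,
                              generations=30, p_mut=0.05, seed=0):
    """Minimal binary GA for feature-subset selection (sketch).

    fitness(mask) scores a binary mask over the radial bins; in the
    thesis this would be a cross-validated recognition rate.
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, num_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(0, len(parents), size=2)
            cut = rng.integers(1, num_features)       # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            flip = rng.random(num_features) < p_mut   # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents] + [np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]
```

The returned mask plays the role of Figure 10's result: bins with a 0 are the dismissed (shaded) radial bins.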
21. Contributions
Evolutionary-based optimisation
Evolutionary-based optimisation
Genetic feature subset selection
Coevolutionary optimisation
Simultaneous selection of training instances, features and parameter values
Coevolution enables splitting the problem into subproblems of optimisation with a common goal (Wiegand, 2004)
Cooperative coevolution allows considering intrinsic dependencies among optimisation goals (Derrac et al., 2012)
22. Contributions
Evolutionary-based optimisation
Evolutionary-based optimisation
Genetic feature subset selection
Coevolutionary optimisation
Adaptive learning
Evolving bag of key poses
Supports incremental and adaptive learning of new data
Applies selection of training instances, features and parameter values
23. Contributions
Continuous recognition
Continuous recognition
In AAL, human action recognition has to be applied to continuous video streams.
This requires:
To detect meaningful human actions online
And to recognise the appropriate action in real time
We propose:
To learn action zones, i.e. the most discriminative parts of action performances
The usage of a sliding and growing window technique for recognition
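The sliding and growing window idea can be sketched as follows. This is a hypothetical illustration, not the thesis algorithm: the `classify` callback, the confidence threshold, and the window sizes are all invented for the example; only the control flow (grow while unsure, slide past once recognised) reflects the technique named above.

```python
def continuous_recognition(stream, classify, min_len=10, max_len=60, step=5):
    """Sliding-and-growing window over a continuous stream (sketch).

    classify(window) returns (label, confidence); thresholds, window
    sizes and the confidence test are illustrative assumptions.
    """
    detections, start = [], 0
    while start + min_len <= len(stream):
        end = start + min_len
        while end <= min(start + max_len, len(stream)):
            label, confidence = classify(stream[start:end])
            if confidence >= 0.8:        # confident: emit and slide past
                detections.append((start, end, label))
                start = end
                break
            end += step                  # not confident: grow the window
        else:
            start += step                # give up: slide the window
    return detections
```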
24. Contributions
Continuous recognition
Action zones
Figure 11: Sample silhouettes of a waving sequence of the DAI RGBD dataset. The action zone that should be extracted is highlighted.
25. Contributions
Continuous recognition
Action zones
Figure 12: Evidence values H(t) of each action class and the detected action zones are shown for a scratch head sequence of the IXMAS dataset.
27. Results
Evaluation methodology
Results
Experimentation has been performed on:
Single- and multi-view datasets (up to five views)
Manually- and automatically-extracted silhouettes (including depth-based segmentation)
Using the following cross validations:
Leave one sequence out (LOSO)
Leave one actor out (LOAO)
Leave one view out (LOVO)
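The leave-one-actor-out (LOAO) scheme, for instance, can be sketched as a plain split generator. The sample representation as (actor_id, sequence) pairs is an assumption for illustration; the point is that each fold tests on an actor the classifier never saw during training.

```python
def leave_one_actor_out(samples):
    """Leave-one-actor-out (LOAO) cross-validation splits (sketch).

    samples is a list of (actor_id, sequence) pairs; each fold trains
    on all actors but one and tests on the held-out actor.
    """
    actors = sorted({actor for actor, _ in samples})
    for held_out in actors:
        train = [s for a, s in samples if a != held_out]
        test = [s for a, s in samples if a == held_out]
        yield held_out, train, test
```

LOSO and LOVO follow the same pattern with the sequence or the camera view as the grouping key instead of the actor.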
28. Results
Benchmarks
Weizmann dataset
Table 1: Comparison of recognition rates and speeds obtained on the Weizmann dataset (Gorelick et al., 2007) with other state-of-the-art approaches.

Approach | Actions | Test | Rate | fps
İkizler and Duygulu (2007) | 9 | LOSO | 100% | N/A
Tran and Sorokin (2008) | 10 | LOSO | 100% | N/A
Fathi and Mori (2008) | 10 | LOSO | 100% | N/A
Hernández et al. (2011) | 10 | LOAO | 90.3% | 98
Cheema et al. (2011) | 9 | LOSO | 91.6% | 56
Sadek et al. (2012) | 10 | LOAO | 97.8% | 18
Our approach | 10 | LOSO | 93.5% | 188
Our approach | 10 | LOAO | 97.8% | 188
Optimised approach | 10 | LOAO | 100% | 210
29. Results
Benchmarks
MuHAVi-14 dataset
Table 2: Comparison of recognition rates and speeds obtained on the MuHAVi-14 dataset (Singh et al., 2010) with other state-of-the-art approaches.

Approach | LOSO | LOAO | LOVO | fps
Singh et al. (2010) | 82.4% | 61.8% | 42.6% | N/A
Eweiwi et al. (2011) | 91.9% | 77.9% | 55.8% | N/A
Cheema et al. (2011) | 86.0% | 73.5% | 50.0% | 56
Our approach | 98.5% | 94.1% | 59.6% | 99
Optimised approach | 100% | 100% | - | -
30. Results
Benchmarks
MuHAVi-8 dataset
Table 3: Comparison of recognition rates and speeds obtained on the MuHAVi-8 dataset (Singh et al., 2010) with other state-of-the-art approaches.

Approach | LOSO | LOAO | LOVO | fps
Singh et al. (2010) | 97.8% | 76.4% | 50.0% | N/A
Martínez-Contreras et al. (2009) | 98.4% | - | - | N/A
Eweiwi et al. (2011) | 98.5% | 85.3% | 38.2% | N/A
Cheema et al. (2011) | 95.6% | 83.1% | 57.4% | 56
Our approach | 100% | 100% | 82.4% | 98
31. Results
Benchmarks
IXMAS dataset
Table 4: Comparison with other multi-view human action recognition approaches of the state of the art. The rates obtained in the LOAO cross validation performed on the IXMAS dataset are shown.

Approach | Actions | Actors | Views | Rate | fps
Yan et al. (2008) | 11 | 12 | 4 | 78% | N/A
Wu et al. (2011) | 12 | 12 | 4 | 89.4% | N/A
Cilla et al. (2012) | 11 | 12 | 5 | 91.3% | N/A
Weinland et al. (2006) | 11 | 10 | 5 | 93.3% | N/A
Cilla et al. (2013) | 11 | 10 | 5 | 94.0% | N/A
Holte et al. (2012) | 13 | 12 | 5 | 100% | N/A
Cherla et al. (2008) | 13 | N/A | 4 | 80.1% | 20
Weinland et al. (2010) | 11 | 10 | 5 | 83.5% | ∼500
Our approach | 11 | 12 | 5 | 91.4% | 207
32. Results
Benchmarks
RGBD datasets
DAI RGBD dataset
Table 5: Cross validation results obtained on our multi-view depth dataset.

Approach | LOSO | LOAO | fps
Our approach | 94.4% | 100% | 80

DHA dataset
Table 6: LOSO cross validation results obtained on the DHA dataset (Lin et al., 2012) (10 Weizmann actions).

Approach | LOSO | fps
Lin et al. (2012) | 90.8% | N/A
Our approach | 95.2% | 99
33. Results
Benchmarks
Continuous recognition
Table 7: Obtained results applying CHAR and segment analysis evaluation
(LOAO). Results are detailed using the segmented sequences or the proposed
action zones.
Dataset    Approach              F1-measure
IXMAS      Segmented sequences   0.504
IXMAS      Action zones          0.705
Weizmann   Segmented sequences   0.693
Weizmann   Action zones          0.928
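The F1-measure used here is the harmonic mean of precision and recall, which suits continuous recognition better than plain accuracy: detections must both cover the true actions and not be over-generated. A minimal illustration with made-up counts (not taken from the experiments):

```python
def f1_measure(tp, fp, fn):
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn)),
    which simplifies to 2*tp / (2*tp + fp + fn)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Made-up example: 70 correctly recognised segments,
# 20 false detections and 40 missed actions.
print(f1_measure(70, 20, 40))  # → 0.7
```

A method that fires constantly would score high recall but low precision, and vice versa; the harmonic mean punishes both failure modes.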
35. Concluding remarks
Discussion
Silhouettes
2D silhouettes can be difficult to obtain and they are view-dependent
Privacy
Privacy concerns arise with indoor monitoring
Intelligent monitoring system with privacy protection:
the method only relies on the boundary of the silhouette
Validation of the proposed method
The classification method based on the bag of key poses has also been
validated for gaming and NUI (Chaaraoui et al., 2014, 2013;
Climent-Pérez et al., 2013)
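The boundary-only dependence can be illustrated with a simplified radial descriptor (a sketch under assumptions, not the exact feature defined in the thesis): contour points are binned by angle around the silhouette centroid and summarised as normalised distances, so no RGB appearance data is ever touched.

```python
import math

def radial_feature(contour, bins=8):
    """Simplified radial silhouette descriptor: contour points are binned
    by angle around the centroid, and the mean distance per bin is
    normalised by the maximum bin value for scale invariance."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in contour:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = min(int(angle / (2 * math.pi) * bins), bins - 1)
        sums[b] += math.hypot(x - cx, y - cy)
        counts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    m = max(means) or 1.0
    return [v / m for v in means]

# Toy contour: a square outline (hypothetical data, not a real silhouette).
square = [(x, 0) for x in range(10)] + [(9, y) for y in range(10)] + \
         [(x, 9) for x in range(10)] + [(0, y) for y in range(10)]
feat = radial_feature(square)
assert len(feat) == 8 and max(feat) == 1.0
```

Since the descriptor only needs boundary coordinates, the camera's colour frames can be discarded immediately after segmentation, which is what makes the privacy argument possible.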
36. Concluding remarks
Conclusions
1. Proposal of a 2D template-based non-parametric method for human
   action recognition & optimisations and extensions
2. Specific demands of AAL services have been satisfied: relaxed
   camera setup requirements, adaptive learning, continuous
   recognition and real-time execution
3. The HAR method based on a bag-of-key-poses model handles
   single- and multi-view recognition proficiently
4. State-of-the-art recognition rates have been achieved,
   outperforming the best known rates in several benchmarks
37. Concluding remarks
Future work
Future directions of this work:
Pose representation: Other local and global features
Key poses: Generation algorithms
Bag-of-key-poses model: Applications
Distance metrics: Key poses and sequences of key poses
Evaluation and deployment
However, two main future lines stand out:
Recognition of complex activities based on action sequences
and multi-modal data
Feature fusion techniques, e.g. for recognition of subtle movements
or gestures
38. Other information and details
39. Other information and details
Research projects and grants
Intelligent system for follow-up and promotion of personal
autonomy: Computer vision system for the monitoring of
activities of daily living at home – Sistema de visión para la
monitorización de la actividad diaria en el hogar. Spanish Ministry of Science and
Innovation and Valencian Ministry of Education, Culture and Sport (TALISMAN+,
Technical University of Madrid, University of Deusto, University of Castile-La
Mancha and University of Alicante)
PhD Research Fellowship – Programa VALi+d para investigadores en
formación. Valencian Ministry of Education, Culture and Sport (ACIF/2011/160)
Research Collaboration Stay – Digital Imaging Research Centre, Faculty
of Science, Engineering and Computing, Kingston University. Kingston upon
Thames, UK. (Funded by VALi+d, BEFPI/2013/015)
40. Other information and details
Other activities
Teaching collaboration – Information Technology. First-year core subject of the
Degree in Sound and Image in Telecommunication Engineering
Reviewer of international journals – Neurocomputing (Elsevier),
Pervasive Computing (IEEE), EURASIP Journal on Image and Video Processing
(Springer), Expert Systems With Applications (Elsevier)
Conference session chair – IEEE/RSJ Intelligent Robots and Systems
(IROS 2012), Genetic and Evolutionary Computation Conference (GECCO 2013)
41. Other information and details
International review
This thesis has been approved
by the following reviewers for the
International PhD Honourable Mention:
Dr. Jean-Christophe Nebel
(Kingston University, UK)
Dr. Jesús Martínez del Rincón
(Queen's University of Belfast, UK)
42. Other information and details
Publications
Journals
I. Chaaraoui, A.A., Climent-Pérez, P., Flórez-Revuelta, F., 2012. A Review on
Vision Techniques Applied to Human Behaviour Analysis for Ambient-Assisted
Living. Expert Systems with Applications. Citations: 10
II. Chaaraoui, A.A., Climent-Pérez, P., Flórez-Revuelta, F., 2013.
Silhouette-based Human Action Recognition Using Sequences of Key Poses.
Pattern Recognition Letters. Citations: 6
III. Chaaraoui, A.A., Flórez-Revuelta, F., 2013. Optimizing Human Action
Recognition Based on a Cooperative Coevolutionary Algorithm. Engineering
Applications of Artificial Intelligence.
43. Other information and details
Publications
IV. Chaaraoui, A.A., Padilla-López, J.R., Climent-Pérez, P., Flórez-Revuelta, F.,
2014. Evolutionary Joint Selection to Improve Human Action Recognition
with RGB-D Devices. Expert Systems with Applications.
V. Chaaraoui, A.A., Flórez-Revuelta, F., 2014. A Low-Dimensional Radial
Silhouette-based Feature for Fast Human Action Recognition Fusing Multiple
Views. Information Fusion. Under review
VI. Chaaraoui, A.A., Flórez-Revuelta, F., 2014. Adaptive Human Action
Recognition With an Evolving Bag of Key Poses. IEEE Transactions on
Autonomous Mental Development. Under review
44. Other information and details
Publications
Conferences and workshops
I Intl. Symposium on Ambient Intelligence (ISAmI 2012)
II 3rd Intl. Workshop on Human Behavior Understanding (HBU 2012),
IEEE/RSJ Intl. Conference on Intelligent Robots and Systems (IROS 2012)
III 11th Mexican Intl. Conference on Artificial Intelligence (MICAI 2012)
IV Genetic and Evolutionary Computation Conference (GECCO 2013)
V 5th Intl. Work-conference on Ambient Assisted Living (IWAAL 2013)
VI 3rd Workshop on Consumer Depth Cameras for Computer Vision
(CDC4CV13), IEEE Intl. Conference on Computer Vision (ICCV 2013)
45. Other information and details
Publications
“We can only see a short distance ahead, but we can see
plenty there that needs to be done.” (Turing, 1950)
Vision-based Recognition of Human Behaviour
for Intelligent Environments
PhD Thesis
Alexandros Andre Chaaraoui
46. References
Chaaraoui, A. A., P. Climent-Pérez, and F. Flórez-Revuelta (2012). A review on vision
techniques applied to human behaviour analysis for ambient-assisted living. Expert
Systems with Applications 39 (12), 10873 – 10888.
Chaaraoui, A. A., J. R. Padilla-López, P. Climent-Pérez, and F. Flórez-Revuelta (2014).
Evolutionary joint selection to improve human action recognition with RGB-D devices.
Expert Systems with Applications 41 (3), 786 – 794. Methods and Applications of
Artificial and Computational Intelligence.
Chaaraoui, A. A., J. R. Padilla-López, and F. Flórez-Revuelta (2013). Fusion of skeletal
and silhouette-based features for human action recognition with RGB-D devices. In
IEEE 14th International Conference on Computer Vision Workshops, 2013. ICCV
Workshops 2013. To be presented in 3rd Workshop on Consumer Depth Cameras for
Computer Vision (CDC4CV13).
Cheema, S., A. Eweiwi, C. Thurau, and C. Bauckhage (2011). Action recognition by
learning discriminative key poses. In IEEE 13th International Conference on Computer
Vision Workshops, 2011. ICCV Workshops 2011, pp. 1302 –1309.
Cherla, S., K. Kulkarni, A. Kale, and V. Ramasubramanian (2008). Towards fast,
view-invariant human action recognition. In IEEE Conference on Computer Vision and
Pattern Recognition Workshops, 2008. CVPRW 2008, pp. 1 – 8.
47. References
Cilla, R., M. A. Patricio, A. Berlanga, and J. M. Molina (2012). A probabilistic,
discriminative and distributed system for the recognition of human actions from multiple
views. Neurocomputing 75 (1), 78 – 87. Brazilian Symposium on Neural Networks (SBRN
2010), International Conference on Hybrid Artificial Intelligence Systems (HAIS 2010).
Cilla, R., M. A. Patricio, A. Berlanga, and J. M. Molina (2013). Human action recognition
with sparse classification and multiple-view learning. Expert Systems. DOI
10.1111/exsy.12040.
Climent-Pérez, P., A. A. Chaaraoui, J. R. Padilla-López, and F. Flórez-Revuelta (2013).
Optimal joint selection for skeletal data from RGB-D devices using a genetic algorithm.
In I. Batyrshin and M. Mendoza (Eds.), Advances in Computational Intelligence, Volume
7630 of Lecture Notes in Computer Science, pp. 163 – 174. Springer Berlin / Heidelberg.
Derrac, J., I. Triguero, S. García, and F. Herrera (2012). A co-evolutionary framework for
nearest neighbor enhancement: Combining instance and feature weighting with instance
selection. In E. Corchado, V. Snášel, A. Abraham, M. Woźniak, M. Graña, and S.-B.
Cho (Eds.), Hybrid Artificial Intelligent Systems, Volume 7209 of Lecture Notes in
Computer Science, pp. 176 – 187. Springer Berlin / Heidelberg.
Eweiwi, A., S. Cheema, C. Thurau, and C. Bauckhage (2011). Temporal key poses for
human action recognition. In IEEE 13th International Conference on Computer Vision
Workshops, 2011. ICCV Workshops 2011, pp. 1310 –1317.
48. References
Fathi, A. and G. Mori (2008). Action recognition by learning mid-level motion features. In
IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008, pp.
1 – 8.
Gorelick, L., M. Blank, E. Shechtman, M. Irani, and R. Basri (2007). Actions as space-time
shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (12), 2247
– 2253.
Hernández, J., A. Montemayor, J. José Pantrigo, and A. Sánchez (2011). Human action
recognition based on tracking features. In J. Ferrández, J. Álvarez Sánchez, F. de la Paz,
and F. Toledo (Eds.), Foundations on Natural and Artificial Computation, Volume 6686
of Lecture Notes in Computer Science, pp. 471 – 480. Springer Berlin / Heidelberg.
Holte, M., B. Chakraborty, J. Gonzalez, and T. Moeslund (2012). A local 3-D motion
descriptor for multi-view human action recognition from 4-D spatio-temporal interest
points. IEEE Journal of Selected Topics in Signal Processing 6 (5), 553 – 565.
İkizler, N. and P. Duygulu (2007). Human action recognition using distribution of oriented
rectangular patches. In A. Elgammal, B. Rosenhahn, and R. Klette (Eds.), Human
Motion - Understanding, Modeling, Capture and Animation, Volume 4814 of Lecture
Notes in Computer Science, pp. 271 – 284. Springer Berlin / Heidelberg.
49. References
Lin, Y.-C., M.-C. Hu, W.-H. Cheng, Y.-H. Hsieh, and H.-M. Chen (2012). Human action
recognition and retrieval using sole depth information. In Proceedings of the 20th ACM
international conference on Multimedia, MM ’12, New York, NY, USA, pp. 1053 – 1056.
ACM.
Martínez-Contreras, F., C. Orrite-Uruñuela, E. Herrero-Jaraba, H. Ragheb, and S. Velastin
(2009). Recognizing human actions using silhouette-based HMM. In IEEE Int.
Conference on Advanced Video and Signal Based Surveillance, 2009. AVSS 2009, pp. 43
– 48.
Sadek, S., A. Al-Hamadi, B. Michaelis, and U. Sayed (2012). A fast statistical approach for
human activity recognition. International Journal of Intelligence Science 2 (1), 9 – 15.
Singh, S., S. Velastin, and H. Ragheb (2010). MuHAVi: A multicamera human action video
dataset for the evaluation of action recognition methods. In IEEE Int. Conference on
Advanced Video and Signal Based Surveillance, 2010. AVSS 2010, pp. 48 – 55. IEEE.
Tran, D. and A. Sorokin (2008). Human activity recognition with metric learning. In
D. Forsyth, P. Torr, and A. Zisserman (Eds.), European Conference on Computer
Vision. ECCV 2008, Volume 5302 of Lecture Notes in Computer Science, pp. 548 – 561.
Springer Berlin / Heidelberg.
Turing, A. M. (1950). Computing machinery and intelligence. Mind 59 (236), 433 – 460.
50. References
Weinland, D., M. Özuysal, and P. Fua (2010). Making action recognition robust to
occlusions and viewpoint changes. In K. Daniilidis, P. Maragos, and N. Paragios (Eds.),
European Conference on Computer Vision. ECCV 2010, Volume 6313 of Lecture Notes
in Computer Science, pp. 635 – 648. Springer Berlin / Heidelberg.
Weinland, D., R. Ronfard, and E. Boyer (2006). Free viewpoint action recognition using
motion history volumes. Computer Vision and Image Understanding 104 (2-3), 249 –
257.
Wiegand, R. P. (2004). An analysis of cooperative coevolutionary algorithms. Ph. D. thesis,
George Mason University, Fairfax, VA, USA.
Wu, X., D. Xu, L. Duan, and J. Luo (2011). Action recognition using context and
appearance distribution features. In IEEE Conference on Computer Vision and Pattern
Recognition, 2011. CVPR 2011, pp. 489 – 496.
Yan, P., S. Khan, and M. Shah (2008). Learning 4D action feature models for arbitrary
view action recognition. In IEEE Conference on Computer Vision and Pattern
Recognition, 2008. CVPR 2008, pp. 1 – 7.
51. Copyright
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0
Unported License. You are free to copy, distribute and transmit the work under the following conditions:
1) you must attribute the work in the manner specified by the author or licensor (but not in any way that
suggests that they endorse you or your use of the work); 2) you may not use this work for commercial
purposes; and 3) you may not alter, transform, or build upon this work. With the understanding that:
1) any of the above conditions can be waived if you get permission from the copyright holder; 2) where
the work or any of its elements is in the public domain under applicable law, that status is in no way
affected by the license; and 3) in no way are any of the following rights affected by the license: your fair
dealing or fair use rights, other applicable copyright exceptions and limitations and rights other persons
may have either in the work itself or in how the work is used, such as publicity or privacy rights. Please
see http://creativecommons.org/licenses/by-nc-nd/3.0/ for greater detail.
52. Copyright
Author’s contact details
Alexandros Andre Chaaraoui
alexandros@ua.es
www.alexandrosandre.com
Departamento de Tecnología Informática y Computación
Universidad de Alicante
Carretera San Vicente del Raspeig s/n
E-03690 San Vicente del Raspeig (Alicante) - Spain