This presentation summarizes two papers on human activity recognition using smartphones and wearable sensors. The first paper uses a CNN-LSTM model to recognize both specific activities and transitions using a public dataset, achieving 95.87% accuracy. The second paper proposes a lightweight RNN-LSTM model for edge devices using a different dataset, focusing on capturing temporal dependencies with LSTM to recognize activities and transitions. The presentation then discusses the proposed project which will use the WISDM dataset and libraries like TensorFlow to build models for activity recognition.
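Neither paper's code is reproduced here, but smartphone HAR pipelines like these typically begin by segmenting the raw accelerometer stream into fixed-length overlapping windows before a CNN-LSTM sees it. A minimal NumPy sketch of that segmentation step (window length, step size, and channel count are illustrative assumptions, not values from either paper):

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, C) multichannel signal into overlapping windows.

    Returns an array of shape (num_windows, win_len, C).
    """
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# Toy 3-axis accelerometer stream: 100 samples, 3 channels.
stream = np.random.randn(100, 3)
windows = sliding_windows(stream, win_len=50, step=25)
print(windows.shape)  # (3, 50, 3): windows starting at t = 0, 25, 50
```

Each window is then labeled with the activity performed during it and fed to the model as one training example.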
1. The document discusses model interpretation and techniques for interpreting machine learning models, especially deep neural networks.
2. It describes what model interpretation is, its importance and benefits, and provides examples of interpretability algorithms like dimensionality reduction, manifold learning, and visualization techniques.
3. The document aims to help make machine learning models more transparent and understandable to humans in order to build trust and improve model evaluation, debugging and feature engineering.
[DSC Adria 23] Davor Horvatic: Human-Centric Explainable AI In Time Series Anal... (DataScienceConferenc1)
To fully trust, accept, and adopt newly emerging AI solutions in our everyday lives and practices, we need human-centric explainable AI that can provide human-understandable interpretations of their algorithmic behaviour and outcomes. This, in turn, enables us to control and continuously improve their performance, robustness, fairness, accountability, transparency, and explainability throughout the entire lifecycle of AI applications. This recently emerging trend within diverse and multidisciplinary research forms the basis of the next wave of AI. In this talk, we will present research that aims to produce interpretable deep learning models for time series analysis with a broad scope of applications.
This document summarizes Rebeen Ali Hamad's PhD thesis on developing robust deep learning models for human activity recognition using sensor data. The thesis addressed key challenges in HAR including imbalanced class problems and reducing the need for large labeled datasets. Some of the contributions included a dilated causal convolution model with self-attention to improve recognition accuracy, a joint temporal model to handle imbalanced data, and a cross-domain learning approach using shared representations to reduce labeling efforts. Evaluation results demonstrated improved performance over existing methods on several HAR datasets. Future work opportunities involve hybrid algorithm-data level models, better attention mechanisms, and recognizing multi-user concurrent activities.
Graph neural networks (GNNs) are neural network architectures that operate on graph-structured data. GNNs iteratively update node representations by aggregating neighbor representations and can be used for tasks like node classification. There are many frontiers for GNN research, including graph generation/transformation, dynamic/heterogeneous graphs, and applications in domains that can be modeled with graphs like social networks and drug discovery. Automated machine learning techniques are also being applied to GNNs.
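The iterative update the summary describes can be made concrete with a minimal NumPy sketch of one message-passing layer. Mean aggregation with self-loops, a linear transform, and ReLU are common design choices here, not details taken from the document:

```python
import numpy as np

def gnn_layer(adj, h, w):
    """One message-passing step: average each node's neighbor features
    (including a self-loop), then apply a linear transform + ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # per-node degree
    h_agg = (adj_hat @ h) / deg               # mean aggregation
    return np.maximum(h_agg @ w, 0.0)         # ReLU(agg @ W)

# Toy graph: 3 nodes in a path 0-1-2, 2-d features, identity weights.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h = np.eye(3, 2)   # node features
w = np.eye(2)
out = gnn_layer(adj, h, w)
print(out)
```

Stacking several such layers lets each node's representation absorb information from progressively larger neighborhoods, which is what a node classifier is trained on.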
Dr. Fariba Fahroo presents an overview of her program, Optimization and Discrete Mathematics, at the AFOSR 2013 Spring Review. At this review, Program Officers from AFOSR Technical Divisions will present briefings that highlight basic research programs beneficial to the Air Force.
Invited talk at Tsinghua University on "Applications of Deep Neural Network". As the technical lead of the deep learning task force at NIO USA Inc., I was invited to give this colloquium talk on general applications of deep neural networks.
The document summarizes a presentation on localized learning approaches for human activity recognition using sensor data. It discusses developing a wearable system to monitor vital signs of hospital patients in real-time. The presentation covers data preparation and feature extraction, and using machine learning algorithms like LS-SVM and KNN for modeling. It evaluates the approaches on synthetic and real-world activity recognition datasets, finding localized learning handles class imbalance and outperforms global models in terms of time performance and ability to handle streaming data.
Survey on classification algorithms for data mining (comparison and evaluation) (Alexander Decker)
This document provides an overview and comparison of three classification algorithms: K-Nearest Neighbors (KNN), Decision Trees, and Bayesian Networks. It discusses each algorithm, including how KNN classifies data based on its k nearest neighbors. Decision Trees classify data based on a tree structure of decisions, and Bayesian Networks classify data based on probabilities of relationships between variables. The document conducts an analysis of these three algorithms to determine which has the best performance and lowest time complexity for classification tasks based on evaluating a mock dataset over 24 months.
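The KNN rule summarized above fits in a few lines. This is an illustrative stdlib sketch with a made-up training set; the document's mock dataset and chosen k are not reproduced here:

```python
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance).
    `train` is a list of ((features...), label) pairs."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_predict(train, (0.2, 0.1), k=3))  # prints A
```

The absence of a training phase is what makes KNN simple but comparatively slow at query time, which is the trade-off such comparisons typically evaluate.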
This document introduces graph attention networks (GATs) for node classification of graph-structured data. GATs use self-attention mechanisms over a node's neighbors to compute hidden representations. The proposed approach achieves state-of-the-art results on four benchmarks, demonstrating the potential of attention models on graphs. GATs are computationally efficient and do not require upfront knowledge of global graph structure.
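At its core, a GAT layer normalizes raw attention scores over a node's neighborhood with a softmax and then takes an attention-weighted sum of neighbor features. The NumPy sketch below shows only that normalization/aggregation step; in the actual model the raw scores come from a learned function of the transformed node features, which is omitted here:

```python
import numpy as np

def attention_coefficients(scores):
    """Softmax-normalize raw attention scores over one node's
    neighborhood (numerically stable form)."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def gat_aggregate(h_neighbors, scores):
    """Weight neighbor features by attention and sum them."""
    alpha = attention_coefficients(scores)
    return alpha @ h_neighbors

# One node with 3 neighbors carrying 2-d features.
h_neighbors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([2.0, 1.0, 0.1])  # raw pre-softmax attention logits
out = gat_aggregate(h_neighbors, scores)
print(out)
```

Because each node only attends over its own neighbors, the computation needs no global view of the graph, which is the efficiency property the summary highlights.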
Predictive Data Mining with Normalized Adaptive Training Method for Neural Ne... (IJERDJOURNAL)
Abstract: Predictive data mining is an upcoming and fast-growing field that offers organizations a competitive edge. In recent decades, researchers have developed new techniques and intelligent algorithms for predictive data mining. In this paper, we propose a novel training algorithm for optimizing neural networks for prediction and use it to develop prediction models. Models built in the MATLAB Neural Network Toolbox have been tested on insurance datasets taken from a live data warehouse. A comparative study of the proposed algorithm against other popular first- and second-order algorithms is presented to judge the predictive accuracy of the suggested technique. Various graphs are presented to analyse the convergence behaviour of the different algorithms towards the point of minimum error.
This research poster presents a study aiming to predict the likelihood of autism spectrum disorder (ASD) in infants from 3-6 months old using electrocardiogram (ECG) recordings and machine learning. The researchers collected ECG data from infants during parent/object interaction experiments. They analyzed heart rate variability measures from the ECG data using neurokit and extracted features to use in machine learning models. Their best performing models were random forest and decision tree, which classified infants as having either elevated or low likelihood of ASD with over 75% accuracy. The results suggest certain heart rate variability measures may serve as potential biomarkers for ASD and that ECG could help diagnose ASD at a younger age before behavioral assessments are effective.
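The poster's exact feature set is not listed, but two standard time-domain HRV measures that such analyses typically include, SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences), can be computed directly from the beat-to-beat intervals. A stdlib sketch with invented interval values:

```python
import math

def hrv_features(rr_ms):
    """SDNN and RMSSD from a list of RR intervals in milliseconds."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"sdnn": sdnn, "rmssd": rmssd}

print(hrv_features([800, 810, 790, 805, 795]))
```

Feature dictionaries like this one, computed per recording, are what would be fed to the random forest and decision tree classifiers the poster reports.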
This document discusses a human activity recognition system using machine learning techniques. It provides an overview of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks for human activity recognition using data from sensors like accelerometers. CNNs use spatial correlations to process data through convolutional and pooling layers, while LSTMs are useful for processing large datasets and retaining memory of previous data due to their gated structure. The document compares CNNs and LSTMs, discusses relevant literature, and machine learning algorithms that can be used like K-nearest neighbors, support vector machines, and random forests. The goal is to classify human activities in real-time using supervised learning techniques and sensor data.
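The gated structure that lets an LSTM retain memory of previous data can be illustrated with a single scalar cell. This is a pedagogical sketch with arbitrary weights, not the system's actual model; real layers use vector states and learned weight matrices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. The gates decide what to
    forget (f), what to write (i, g), and what to expose (o)."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])  # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])  # input gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])  # output gate
    c = f * c_prev + i * g      # new cell state
    h = o * math.tanh(c)        # new hidden state
    return h, c

# Tiny demo: all weights 0.5, all biases 0, a short input sequence.
w = {k: 0.5 for k in ("wf", "uf", "wi", "ui", "wg", "ug", "wo", "uo")}
w.update({k: 0.0 for k in ("bf", "bi", "bg", "bo")})
h, c = 0.0, 0.0
for x in (1.0, 0.5, -0.5):
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The additive cell-state update (c = f*c_prev + i*g) is what lets gradients flow across long sequences, in contrast to the purely multiplicative recurrence of a plain RNN.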
Time Series Forecasting Using Novel Feature Extraction Algorithm and Multilay... (Editor IJCATR)
Time series forecasting is important because it can often provide the foundation for decision making in a large variety of fields. A tree-ensemble method, referred to as time series forest (TSF), is proposed for time series classification. The approach is based on the concept of data series envelopes and essential attributes generated by a multilayer neural network... These claims are further investigated by applying statistical tests. With the results presented in this article, together with results from related investigations, we want to support practitioners and scholars in answering the following question: which measure should be looked at first if accuracy is the most important criterion, if an application is time-critical, or if a compromise is needed? The paper demonstrates that feature extraction by the novel method can improve the time series forecasting process.
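Time series forest is known to characterize each sampled interval by three simple statistics: mean, standard deviation, and slope. The per-interval computation can be sketched as follows; the random-interval sampling and the tree ensemble built on top of these features are omitted:

```python
def interval_features(series, start, end):
    """Mean, standard deviation, and least-squares slope of
    series[start:end] -- the three per-interval features TSF uses."""
    seg = series[start:end]
    n = len(seg)
    mean = sum(seg) / n
    std = (sum((v - mean) ** 2 for v in seg) / n) ** 0.5
    ts = range(start, end)                      # time indices
    t_mean = sum(ts) / n
    denom = sum((t - t_mean) ** 2 for t in ts)
    slope = sum((t - t_mean) * (v - mean) for t, v in zip(ts, seg)) / denom
    return mean, std, slope

series = [1.0, 2.0, 3.0, 4.0, 5.0]
print(interval_features(series, 0, 5))  # (3.0, ~1.414, 1.0)
```

Each tree in the ensemble then splits on such (interval, statistic) pairs, which is what makes the method both fast and interpretable.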
A Survey of Convolutional Neural Networks (Rimzim Thube)
Convolutional neural networks (CNNs) are widely used for tasks like image classification, object detection, and face recognition. CNNs extract features from data using convolutional structures and are inspired by biological visual perception. Early CNNs include LeNet for handwritten text recognition and AlexNet which introduced ReLU and dropout to improve performance. Newer CNNs like VGGNet, GoogLeNet, ResNet and MobileNets aim to improve accuracy while reducing parameters. CNNs require activation functions, loss functions, and optimizers to learn from data during training. They have various applications in domains like computer vision, natural language processing and time series forecasting.
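The convolutional structure the survey refers to reduces, in one dimension, to a sliding dot product followed by an activation function such as ReLU. A minimal stdlib sketch (deep learning libraries actually compute cross-correlation, as here, and call it convolution):

```python
def conv1d_relu(signal, kernel):
    """'Valid' 1-D cross-correlation followed by ReLU."""
    k = len(kernel)
    out = [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]
    return [max(v, 0.0) for v in out]

# A difference kernel fires exactly where a step signal rises.
print(conv1d_relu([0, 0, 1, 1, 1], [-1.0, 1.0]))  # [0.0, 1.0, 0.0, 0.0]
```

Stacking many such filters, with pooling in between, is what lets CNNs like LeNet and AlexNet build up from edges to higher-level features.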
The document discusses mining frequent items and item sets from data streams using fuzzy approaches. It describes objectives of mining frequent items from datasets in real-time using fuzzy sets and slices. This involves fetching relevant records, analyzing the data, searching for liked items using fuzzy slices, identifying frequently viewed item lists, making recommendations, and evaluating the results. Algorithms used for mining frequent items from data streams in a single or multiple pass are also reviewed.
Application of soft computing techniques in electrical engineering (Souvik Dutta)
This document discusses the application of soft computing techniques in electrical engineering. It begins with an introduction to soft computing and its key elements including fuzzy logic, neural networks, evolutionary computation, machine learning and probabilistic reasoning. It then discusses hard computing versus soft computing, defining hard computing as requiring precise analytical models and definitions, while soft computing can handle imprecision. The document outlines several soft computing techniques - neural networks, fuzzy logic, and their applications in power system economic load dispatch and generation level determination to solve complex, non-linear optimization problems in electrical engineering. In conclusion, soft computing provides alternatives to traditional techniques for electrical engineering problems involving uncertainty.
This document provides an overview of soft computing techniques including neural networks, fuzzy logic, genetic algorithms, and hybrid systems. It discusses how neural networks are inspired by the human brain and can learn from examples to perform tasks like object recognition. Fuzzy logic allows for partial membership in sets and handles imprecise data. Genetic algorithms use selection, crossover and mutation to evolve solutions to problems. Hybrid systems combine techniques, such as neurofuzzy and neurogenetic systems. Soft computing is used to solve complex problems with approximate models, unlike hard computing which uses precise models.
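The selection, crossover, and mutation loop described above can be sketched in a few lines. The operators, rates, and the toy "one-max" objective (count of 1 bits) below are illustrative choices, not details from the document:

```python
import random

def evolve(fitness, pop_size=20, genes=8, generations=40, seed=0):
    """Minimal GA over fixed-length bit strings: tournament
    selection, single-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genes)   # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:          # occasional bit-flip mutation
                j = rng.randrange(genes)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # one-max: fitness is the number of 1 bits
print(best, sum(best))
```

No gradient information is used anywhere, which is why such methods suit the imprecise, black-box problems soft computing targets.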
Discover How Scientific Data is Used for the Public Good with Natural Languag... (BaoTramDuong2)
This document discusses using natural language processing techniques like n-grams, deep learning models, and named entity recognition to analyze scientific publications and identify references to datasets. It evaluates classifiers like recurrent neural networks and convolutional neural networks to perform sequence labeling and extract dataset citations. The goal is to help government agencies and researchers quickly find datasets, measures, and experts by automating the analysis of research articles.
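Of the techniques listed, n-gram extraction is the simplest. A minimal word-bigram sketch on an invented sentence (the project's actual corpus and feature pipeline are not shown in the document):

```python
def word_ngrams(tokens, n):
    """All contiguous word n-grams from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the study uses the national survey dataset".split()
print(word_ngrams(tokens, 2))
```

Features like these, alongside learned embeddings, give the sequence-labeling classifiers local context for deciding whether a span is a dataset mention.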
Text classification based on gated recurrent unit combines with support vecto... (IJECEIAES)
As the amount of unstructured text data humanity produces grows rapidly on the Internet, intelligent techniques are required to process it and extract different types of knowledge from it. Gated recurrent units (GRU) and support vector machines (SVM) have been successfully applied to Natural Language Processing (NLP) systems with remarkable results. GRU networks perform well in sequential learning tasks and overcome the vanishing and exploding gradient issues of standard recurrent neural networks (RNNs) when capturing long-term dependencies. In this paper, we propose a text classification model that improves on this norm by using a linear support vector machine (SVM) in place of Softmax in the final output layer of a GRU model; the cross-entropy function is likewise replaced with a margin-based function. Empirical results show that the proposed GRU-SVM model achieves comparatively better results than the baseline approaches BLSTM-C and DABN.
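The margin-based objective that replaces cross-entropy can take several forms; the NumPy sketch below shows one common multiclass hinge formulation (the paper's exact loss may differ). It penalizes any class whose score comes within the margin of the true class:

```python
import numpy as np

def multiclass_hinge_loss(scores, y, margin=1.0):
    """Mean multiclass hinge loss over a batch.
    scores: (batch, classes) raw outputs; y: (batch,) true labels."""
    correct = scores[np.arange(len(y)), y][:, None]
    margins = np.maximum(0.0, scores - correct + margin)
    margins[np.arange(len(y)), y] = 0.0  # true class incurs no penalty
    return margins.sum(axis=1).mean()

scores = np.array([[2.0, 0.5, -1.0],   # true class 0, comfortably ahead
                   [0.2, 0.4,  0.1]])  # true class 1, small margins
y = np.array([0, 1])
print(multiclass_hinge_loss(scores, y))  # 0.75
```

Unlike cross-entropy, the loss is exactly zero once every competitor is beaten by the full margin, which is the SVM-style behavior the final layer inherits.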
Deep Learning: Evolution of ML from Statistical to Brain-like Computing- Data... (Impetus Technologies)
Presentation on 'Deep Learning: Evolution of ML from Statistical to Brain-like Computing'
Speaker- Dr. Vijay Srinivas Agneeswaran,Director, Big Data Labs, Impetus
The main objective of the presentation is to give an overview of our cutting-edge work on realizing distributed deep learning networks over GraphLab. The objectives can be summarized as follows:
- First-hand experience and insights into implementation of distributed deep learning networks.
- Thorough view of GraphLab (including descriptions of code) and the extensions required to implement these networks.
- Details of how the extensions were realized/implemented in GraphLab source – they have been submitted to the community for evaluation.
- Arrhythmia detection use case as an application of the large scale distributed deep learning network.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Taking AI to the Next Level in Manufacturing.pdf (ssuserfac0301)
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
AppSec PNW: Android and iOS Application Security with MobSF (Ajin Abraham)
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Northern Engraving | Nameplate Manufacturing Process - 2024Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Generating privacy-protected synthetic data using Secludy and Milvus
Presentation-Licentiate degree.pptx
1. Towards Reliable, Stable and Fast Learning for Smart Home Activity Recognition
Rebeen Ali Hamad
Supervisors: Thorsteinn Rögnvaldsson, Eric Järpe, Mohamed-Rafik Bouguelia, Jens Lundström
February 24, 2022
Licentiate Thesis
2. Layout
• Introduction
• Motivation
• Challenges of human activity recognition
• Research question
• Addressing activity recognition challenges
• Conclusion and future work
3. Activity Recognition (AR)
Human Activity Recognition (HAR) is a challenging and highly dynamic research field that aims to recognize human activities from sensor observation data.
As one of the significant applications of intelligent environments and wearable sensor technologies, HAR can be used to monitor activities of daily living (ADL) to support and assist senior, disabled, and cognitively impaired people.
4. Motivation
• The aging and dependent population has been recognized as a major social and economic challenge for the coming decades.
• One promising response to this challenge is ambient assisted living (AAL) systems, which aim to reduce healthcare costs and enable elders to live independently in their own homes.
• One of the most important roles and components of an AAL system is HAR.
• HAR can be used to recognize dangerous situations and detect deviations in behavior, improving elderly-care alert systems.
5. Challenges of human activity recognition
• Labeling sensor readings
• Real-time constraints
• Diversity and frequency of human activities
• Number and types of activities
• Sensor challenges
6. Research questions
• Considering the above challenges of human AR within smart home environments, this thesis addresses the following research questions:
i. How stable are low-dimensional maps of human activities in a smart home?
ii. How could AR be improved at the expense of real-time recognition?
iii. How can imbalanced class problems be handled in the context of AR?
These research questions were investigated, and the relevant contributions are presented in the following sections and papers:
7. Addressing activity recognition challenges
• The long-term goal of the project is to process and share information across multiple smart homes to reduce learning time and data collection as well as to increase accuracy for HAR. One solution through machine learning development is to use transfer learning to enhance the system's ability.
• In our work, it is hypothesized that manifolds learned from disparate data sets could be used for transfer learning. It is therefore crucial to investigate the stability of t-SNE maps in order to properly align manifolds for transfer learning. Accordingly, the first contribution of this thesis is an investigation of the stability of t-SNE maps.
8. Addressing activity recognition challenges
Stability analysis of the t-SNE algorithm for human activity pattern data
• Hypothesis: even if these smart homes are unique, their data share a common latent manifold that resides in a lower-dimensional subspace.
• Manifold alignment: by learning projections from each original space to the shared manifold, correspondences of observations are recovered and knowledge from one home can be transferred to another.
• The t-SNE algorithm was used to map human activity patterns from smart home environments to a low-dimensional manifold.
9. Stability analysis of the t-SNE algorithm
Despite the non-deterministic setup of the t-SNE algorithm, the visual interpretation of different runs is easily compared by humans.
How can the stability of the t-SNE algorithm's output be analyzed? The proposed approach compares several output maps both as a whole and partially, via clustered low-dimensional data points.
[Figure: t-SNE map 1 and t-SNE map 2, computed from the same dataset]
10. Stability analysis of the t-SNE algorithm
Contribution: development of methods and tools for studying the stability of t-SNE output on smart home data used for modeling human activity patterns.
Low-dimensional manifolds are aligned linearly and non-linearly in order to compute the disparity and the fraction of correct correspondences of observations within the five nearest neighbors.
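The neighbor-based comparison above can be sketched as follows. This is a minimal illustration, not the thesis code; the helper names (`knn`, `knn_overlap`) and the toy maps are invented for the example. For each point, we take its five nearest neighbors in each map and measure how much the two neighbor sets agree:

```python
import math

def knn(points, i, k=5):
    """Indices of the k nearest neighbors of point i (Euclidean distance)."""
    dists = sorted(
        (math.dist(points[i], points[j]), j)
        for j in range(len(points)) if j != i
    )
    return {j for _, j in dists[:k]}

def knn_overlap(map_a, map_b, k=5):
    """Mean fraction of shared k-nearest neighbors between two 2-D maps.
    1.0 means every point keeps the same local neighborhood in both maps."""
    n = len(map_a)
    return sum(len(knn(map_a, i, k) & knn(map_b, i, k)) / k for i in range(n)) / n

# Toy example: map_b is map_a rotated 90 degrees, so every local
# neighborhood is preserved exactly and the overlap is 1.0.
map_a = [(float(i), float(i * i % 7)) for i in range(20)]
map_b = [(-y, x) for x, y in map_a]
print(knn_overlap(map_a, map_b))  # -> 1.0
```

A score near 1.0 indicates local stability between runs; lower scores flag points whose neighborhoods differ across maps.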
12. Stability analysis of the t-SNE algorithm
LPA was introduced to non-linearly align manifolds by using locally linear mappings.
• It follows a divisive approach to cluster datasets: the algorithm starts with a single cluster containing all data points and recursively splits it into two sub-clusters, terminating when the diversity of a cluster falls below a predetermined threshold.
• At each stage, PA is applied to each cluster in the first dataset and the corresponding cluster in the second dataset to compute the disparity. If the disparity falls below the threshold, the clustering process stops for these clusters at this stage.
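The Procrustes analysis (PA) step applied to each cluster pair is standard; as a hedged sketch (not the thesis implementation), aligning two point clouds by translation, scaling, and rotation and returning the residual disparity can be done with an SVD. The function name and toy data are illustrative:

```python
import numpy as np

def procrustes_disparity(A, B):
    """Align B to A with translation, scaling and rotation (classic
    Procrustes analysis) and return the residual sum of squares."""
    # Center and normalize both configurations.
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, s, Vt = np.linalg.svd(A.T @ B)
    R = (U @ Vt).T
    scale = s.sum()
    B_aligned = scale * B @ R
    return float(((A - B_aligned) ** 2).sum())

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 2))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
B = 2.5 * A @ rot + np.array([3.0, -1.0])  # rotated, scaled, shifted copy
print(procrustes_disparity(A, B))  # ~0: B is A up to a similarity transform
```

A disparity near zero means the two clusters match after a similarity transform; in LPA this value is compared against the threshold to decide whether to keep splitting.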
13. Stability analysis of the t-SNE algorithm
• Normalized Local Procrustes Analysis (NLPA): we propose an extension of the Local Procrustes Analysis (LPA) technique to non-linearly align manifolds by using locally linear mappings.
• The changes of the proposed method NLPA compared to the LPA procedure can be summarized as follows:
• 1) Modification: the clustering algorithm is changed from k-means to agglomerative clustering.
• 2) Improvement: the cluster-creation criterion is improved to require at least two distinct data points in each cluster, and the threshold is minimized to render a better alignment.
• 3) Extension: NLPA extends LPA by normalizing the transformed clusters so that the clusters combined with NLPA and the whole dataset aligned with PA share the same space.
16. Stability analysis of the t-SNE algorithm
• Results of the experiments indicate that t-SNE low-dimensional manifolds are locally stable, which is part of the achievements of this research project.
• t-SNE preserves the local geometry of the original high-dimensional data.
• The long-term goal of this research is to achieve automatic knowledge transfer between related data sets from different smart homes.
17. Contribution: Efficient AR in smart home data using delayed fuzzy temporal windows
• We propose a data-driven approach that delays the recognition process and includes representations of binary sensor activations occurring both before and after the time at which the prediction is made.
• For this, the proposed method uses multiple incremental fuzzy temporal windows (FTWs) to extract features from both preceding and partial oncoming sensor activations. To avoid manual configuration of the FTWs, their shapes are modelled with the Fibonacci sequence, which has long been used to model incremental sequences in a harmonic way in mathematics, science, and engineering.
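One way to realize Fibonacci-shaped FTWs can be sketched as follows. This is our own simplified reading, not the paper's exact definition: piecewise-linear fuzzy windows over time offsets (in minutes), with boundaries at consecutive Fibonacci numbers, so that windows close to the evaluation time are narrow and windows further back grow harmonically. All function names are illustrative:

```python
def fibonacci(n):
    """First n Fibonacci numbers starting from 1, 2, 3, 5, ..."""
    fibs = [1, 2]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

def ftw_membership(delta, l0, l1, l2):
    """Fuzzy window over time offsets (in minutes): membership rises
    from 0 at l0 to 1 at l1, then falls back to 0 at l2."""
    if delta <= l0 or delta >= l2:
        return 0.0
    if delta <= l1:
        return (delta - l0) / (l1 - l0)
    return (l2 - delta) / (l2 - l1)

def build_ftws(n=15):
    """n incremental fuzzy windows with Fibonacci-spaced boundaries."""
    b = [0] + fibonacci(n + 1)
    return [(b[i], b[i + 1], b[i + 2]) for i in range(n)]

ftws = build_ftws()
print(len(ftws))                       # 15 windows
print(ftws[0], ftws[1])                # -> (0, 1, 2) (1, 2, 3)
print(ftw_membership(2.0, *ftws[1]))   # -> 1.0 (a 2-minute-old event)
```

Because consecutive windows overlap, a sensor activation contributes gradually to several temporal scales rather than falling abruptly into a single crisp window.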
18. Efficient activity recognition in smart homes using delayed fuzzy temporal windows on binary sensors
• HAR tasks are often framed as sequential temporal learning problems (LSTM, 1D ConvNet).
• The proposed method is evaluated with three temporal deep learning models (a CNN, an LSTM network, and a hybrid model combining CNN and LSTM) on a binary sensor dataset of real daily living activities. The experimental evaluation shows that the proposed method achieves significantly better results than the real-time approach.
19. Contribution: the second paper
• The proposed temporal models based on the FTWs achieve encouraging performance, particularly for activities that real-time models have difficulty recognizing accurately, such as Leaving, Snack, Grooming, and Toileting.
20. Efficacy of Imbalanced Data Handling Methods on Deep Learning for Smart Home Environments
• Human activities are highly diverse, not only in the sensor activations they produce but also in their frequency, which is inherently imbalanced; accurate AR is therefore challenging from a machine learning perspective.
• The main contribution of this paper is a study of well-known class imbalance approaches (the synthetic minority over-sampling technique, cost-sensitive learning, and ensemble learning) applied to activity recognition data with various temporal data preprocessings for the deep learning models LSTM and 1D CNN.
21. Contribution: Efficacy of Imbalanced Data Handling Methods on Deep Learning
• The experimental results indicate that handling imbalanced data is more important than the choice of machine learning algorithm and improves classification performance.
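Of the three approaches studied, cost-sensitive learning is the easiest to illustrate: each class is weighted inversely to its frequency so that rare activities contribute as much to the loss as common ones. The sketch below uses the common n_samples / (n_classes * count) heuristic (as in scikit-learn's "balanced" mode); the activity labels are invented for the example:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: n_samples / (n_classes * count).
    Rare classes get weights above 1, frequent classes below 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * c) for cls, c in counts.items()}

# Toy activity stream: 'Sleeping' dominates, 'Snack' is rare.
labels = ["Sleeping"] * 80 + ["Grooming"] * 15 + ["Snack"] * 5
weights = balanced_class_weights(labels)
print(weights)
# -> {'Sleeping': 0.4166..., 'Grooming': 2.2222..., 'Snack': 6.6666...}
```

Such a dictionary can be passed directly as per-class loss weights (e.g. the `class_weight` argument of Keras's `model.fit`) when training LSTM or 1D CNN models.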
22. Potential questions for future research
• How can knowledge be transferred between smart homes with different layouts, sensor settings, and residents? The aim of this research question is to exploit what has been learned in one smart home to improve generalization in different but related smart homes, reducing the need for labeled data.
• HAR tasks are often framed as sequential temporal learning problems (LSTM, 1D ConvNet).
• Considerable amounts of well-curated human activity data are needed, which is a notably challenging task, hindered by privacy issues and by labelling time and cost.
• Hence, it is a crucial research challenge to design a deep learning model that can successfully learn to recognize human activities from only a small number of annotated samples.
23. Conclusion
This thesis answered the following questions:
• i. How stable are low-dimensional maps of human activities in a smart home?
• ii. How could AR be improved at the expense of real-time recognition?
• iii. How can imbalanced class problems be handled in the context of AR?
25. Multiple incremental fuzzy temporal windows (FTWs)
• Ordonez datasets
These datasets comprise information on the ADLs performed daily by two users in their own homes. They consist of two instances of data, each corresponding to a different user, summing to 35 days of fully labelled data. Each instance of the dataset is described by three text files: a description, sensor events (features), and activities of daily living (labels). Sensor events were recorded using a wireless sensor network, and the data were labelled manually.
• The first dataset, 'OrdonezA', was recorded from 2011-11-28 to 2011-12-12 (9 activities).
• The second dataset, 'OrdonezB', was recorded from 2012-11-11 to 2012-12-02 (10 activities).
[Figure: an example of the data]
26. Multiple incremental fuzzy temporal windows (FTWs)
• Datasets are generated based on sensor type + sensor location + sensor place, using fuzzy temporal windows.
[Figure: an example of all the time intervals of a sensor]
28. FTWs
• In total we have 15 FTWs, so if we have only one sensor time interval, the dataset will have 15 dimensions.
• Every minute, starting from the start time (2011-11-28 02:27:59), we slide the 15 fuzzy temporal windows (FTWs) over the sensor time intervals. For example, if we have one sensor with N time intervals, we slide the first FTW over all the time intervals and select the maximum value as the sensor's first feature; we then apply the second FTW to all the intervals and select the maximum value as the second feature, and so on until all 15 FTWs have been applied, giving 15 dimensions for every minute:
2011-11-28 02:27:59
2011-11-28 02:28:59
2011-11-28 02:29:59
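The per-minute feature extraction described on this slide can be sketched as follows. This is our own simplified reading with invented interval data: for each evaluation minute, each FTW is slid over the sensor's activation times, and the maximum membership becomes one feature, yielding one 15-dimensional vector per minute.

```python
def membership(delta, l0, l1, l2):
    """Fuzzy window over how many minutes ago an activation occurred."""
    if delta <= l0 or delta >= l2:
        return 0.0
    return (delta - l0) / (l1 - l0) if delta <= l1 else (l2 - delta) / (l2 - l1)

# Fibonacci-spaced window boundaries (minutes before the current time).
bounds = [0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597]
FTWS = [(bounds[i], bounds[i + 1], bounds[i + 2]) for i in range(15)]

def minute_features(activations, t):
    """One 15-dim feature vector for minute t: for each FTW, the maximum
    membership over all of the sensor's past activation times (in minutes)."""
    return [
        max((membership(t - a, *w) for a in activations if a <= t), default=0.0)
        for w in FTWS
    ]

activations = [0.0, 3.0, 10.0]          # invented activation minutes
vector = minute_features(activations, t=12.0)
print(len(vector))   # 15 features for this minute
print(vector[:4])    # -> [0.0, 1.0, 0.0, 0.0]
```

At t = 12 only the 10-minute activation (2 minutes old) falls squarely inside the second window, which is why only that feature is fully activated; sliding t forward one minute at a time reproduces the per-minute sampling shown above.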