Today a large amount of data is stored in the form of time series, and with the current widespread adoption of real-time applications, many areas are taking a strong interest in applications based on this kind of data: finance, advertising, marketing, health care, automated disease detection, biometrics, retail, and anomaly detection of any kind. It is therefore very interesting to understand the role and potential of machine learning in this sector.
Many methods can be used to classify time series, but all of them, apart from deep learning, require some kind of feature engineering as a separate stage before classification is performed; this can discard important information and increase development and testing time. By contrast, deep learning models such as recurrent and convolutional neural networks perform this feature extraction internally, optimizing it and eliminating the need to do it manually. They are therefore able to extract information from time series in a faster, more direct, and more complete way.
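The "separate feature-engineering stage" contrasted above can be made concrete: a classical pipeline first collapses each raw series into a handful of summary statistics, and only those reach the classifier. A minimal pure-Python sketch (the choice of features is illustrative, not taken from any particular method):

```python
import math

def handcrafted_features(series):
    """Summarize a raw series into fixed statistics before classification.
    Anything not captured by these statistics is lost to the classifier."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    return {"mean": mean, "std": math.sqrt(var),
            "min": min(series), "max": max(series)}

feats = handcrafted_features([1.0, 2.0, 3.0, 4.0])
print(feats["mean"])  # 2.5
```

A deep model skips this step: its early layers learn which transformations of the raw series are useful, jointly with the classifier.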
Bio:
Marco Del Pra
I am 41 years old, I was born in Venice, and I hold two master's degrees (Computer Science and Mathematics). I have been working for about 10 years in Artificial Intelligence, first as a Data Scientist, then as a Team Leader, and finally as Head of Data. Among others, I have worked for Microsoft, for the European Commission (JRC in Ispra), and for Cuebiq. I am currently working as a freelancer and am building an innovative AI startup with two other cofounders. I have two notable publications in applied mathematics.
Topics: recurrent and convolutional neural networks, deep learning, time-series.
1D Convolutional Neural Networks for Time Series Modeling - Nathan Janos, Jef... (PyData)
This talk describes an experimental approach to time series modeling using 1D convolution filter layers in a neural network architecture. This approach was developed at System1 for forecasting marketplace value of online advertising categories.
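The core operation behind such an architecture can be sketched without any framework: a 1D convolution slides a small kernel along the series and emits a feature map. A pure-Python illustration (the kernel values are assumptions for demonstration, not from the talk):

```python
def conv1d(series, kernel):
    """Valid-mode 1D convolution: slide the kernel over the series and
    take dot products, producing one feature-map value per position."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

# A difference kernel responds to jumps in the series.
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```

In a trained network the kernel values are learned, and many such filters run in parallel to form the convolutional layer.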
In this talk we walk the audience through how to marry correlation analysis with anomaly detection, discuss how the topics are intertwined, and detail the challenges one may encounter based on production data. We also showcase how deep learning can be leveraged to learn nonlinear correlation, which in turn can be used to further contain the false positive rate of an anomaly detection system. Further, we provide an overview of how correlation can be leveraged for common representation learning.
This is a very simple introduction to clustering with some real-world examples. At the end of the lecture I use the Stack Overflow API to test some clustering. I also wanted to try Facebook, but there were some problems with its API.
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori... (Simplilearn)
This Deep Learning presentation will help you understand what deep learning is, why we need it, and its applications, along with a detailed explanation of neural networks and how they work. Deep learning is inspired by the workings of the human brain, specifically through artificial neural networks. These networks, which model the brain's decision-making process, use complex algorithms that process data in a non-linear way, learning in an unsupervised manner to make choices based on the input. This deep learning tutorial is ideal for professionals with beginner to intermediate levels of experience. Now, let us dive deep into this topic and understand what deep learning actually is.
Below topics are explained in this Deep Learning Presentation:
1. What is Deep Learning?
2. Why do we need Deep Learning?
3. Applications of Deep Learning
4. What is Neural Network?
5. Activation Functions
6. Working of Neural Network
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you’ll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this TensorFlow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks, and interpret the results.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms.
There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this deep learning online course particularly for the following professionals:
1. Software engineers
2. Data scientists
3. Data analysts
4. Statisticians with an interest in deep learning
You will learn the basic concepts of machine learning classification and will be introduced to some of the different algorithms that can be used. This is at a very high level and will not get into the nitty-gritty details.
Introduction For seq2seq (sequence to sequence) and RNN - Hye-min Ahn
These are my slides introducing the sequence-to-sequence (seq2seq) model and Recurrent Neural Networks (RNN) to my laboratory colleagues.
Hyemin Ahn, @CPSLAB, Seoul National University (SNU)
TensorFlow Tutorial | Deep Learning With TensorFlow | TensorFlow Tutorial For... (Simplilearn)
This presentation on TensorFlow will help you understand what deep learning and its libraries are, why to use TensorFlow, what TensorFlow is, how to build a computational graph, programming elements in TensorFlow, and what recurrent neural networks are, along with a use case implementation in TensorFlow. TensorFlow is a software library developed by Google for the purposes of conducting machine learning and deep neural network research. In this video, you will learn the fundamentals of TensorFlow concepts, functions and operations required to implement deep learning algorithms and leverage data like never before. Now let's get started in mastering the concept of deep learning using TensorFlow.
Below topics are explained in this TensorFlow presentation:
1. What is Deep Learning?
2. Top Deep Learning libraries
3. Why use TensorFlow?
4. What is TensorFlow?
5. Building a computational graph
6. Programming elements in TensorFlow
7. Introducing Recurrent Neural Networks
8. Use case implementation of RNN using TensorFlow
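The "build a computational graph first, execute it later" style listed above can be illustrated with a toy deferred-execution graph in plain Python. This is only a sketch of the idea behind classic graph-mode TensorFlow, not the actual TensorFlow API:

```python
class Node:
    """Toy deferred-execution graph node: building the expression only
    records operations; nothing is computed until run() is called."""
    def __init__(self, fn, inputs=()):
        self.fn, self.inputs = fn, inputs
    def run(self):
        # Recursively evaluate input nodes, then apply this node's op.
        return self.fn(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

# Graph for (2 + 3) * 4: defined first, executed later.
graph = mul(add(constant(2), constant(3)), constant(4))
print(graph.run())  # 20
```

Real graph frameworks add placeholders, automatic differentiation, and device placement on top of this same define-then-run structure.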
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline
2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before
3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces
4. Build deep learning models in TensorFlow and interpret the results
5. Understand the language and fundamental concepts of artificial neural networks
6. Troubleshoot and improve deep learning models
7. Build your own deep learning project
8. Differentiate between machine learning, deep learning and artificial intelligence
Learn more at: https://www.simplilearn.com
Time Series Forecasting Using Recurrent Neural Network and Vector Autoregress...Databricks
Given the resurgence of neural network-based techniques in recent years, it is important for data science practitioners to understand how to apply these techniques and the tradeoffs between neural network-based and traditional statistical methods.
This lecture discusses two specific techniques: Vector Autoregressive (VAR) models and Recurrent Neural Networks (RNN). The former is one of the most important classes of multivariate time series statistical models applied in finance, while the latter is a neural network architecture suitable for time series forecasting. I'll demonstrate how they are implemented in practice and compare their advantages and disadvantages. Real-world applications, demonstrated using Python and Spark, are used to illustrate these techniques. While not the focus of this lecture, exploratory time series data analysis using time-series plots, autocorrelation plots (i.e. correlograms), partial autocorrelation plots, cross-correlation plots, histograms, and kernel density plots will also be included in the demo.
Attendees will learn: the formulation of a time series forecasting problem statement in the context of VAR and RNN; the application of recurrent neural network-based techniques in time series forecasting; the application of vector autoregressive models in multivariate time series forecasting; the pros and cons of using VAR and RNN-based techniques in the context of financial time series forecasting; and when to use VAR versus RNN-based techniques.
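As a minimal illustration of the VAR side, a VAR(1) model forecasts each variable as a linear combination of all variables' previous values. The sketch below assumes the coefficient matrix and intercepts rather than estimating them (in practice they come from least squares):

```python
def var1_forecast(last_obs, A, c):
    """One-step VAR(1) forecast: x_{t+1} = c + A x_t, written out
    element-wise for a small multivariate system."""
    return [c[i] + sum(A[i][j] * last_obs[j] for j in range(len(last_obs)))
            for i in range(len(A))]

A = [[0.5, 0.1],
     [0.0, 0.8]]      # assumed coefficient matrix
c = [0.2, -0.1]       # assumed intercepts
print(var1_forecast([1.0, 2.0], A, c))  # close to [0.9, 1.5]
```

An RNN replaces this fixed linear map with a learned nonlinear state update, which is the tradeoff the lecture compares.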
Roughly 30 years ago, AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest – the so-called AI winter. However, the last 3 years changed everything – again. Deep learning, a machine learning technique inspired by the human brain, successfully crushed one benchmark after another, and tech companies like Google, Facebook and Microsoft started to invest billions in AI research. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO of Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – physicist).
What sparked this new hype? How is deep learning different from previous approaches? Are the advancing AI technologies really a threat to humanity? Let’s look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO of Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why “Deep Learning is probably one of the most exciting things that is happening in the computer industry” (Jen-Hsun Huang – CEO of NVIDIA).
Either a new AI “winter is coming” (Ned Stark – House Stark), or this new wave of innovation might turn out to be the “last invention humans ever need to make” (Nick Bostrom – AI philosopher). Or maybe it’s just another great technology helping humans achieve more.
This was a presentation done for the Techspace of IoT Asia 2017 on 30th March 2017. It is an introductory session on Long Short-Term Memory (LSTM) networks for prediction in time series. I also shared Keras code to work out a simple sine wave example and a household power consumption dataset to use for the predictions. Links to the code can be found in the presentation.
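The standard preprocessing behind such LSTM demos is to turn the series into (input window, next value) pairs before any model sees it. A pure-Python sketch (the window size here is an arbitrary choice, not from the talk):

```python
import math

def make_windows(series, window):
    """Frame a series as supervised pairs: each input is a slice of
    `window` consecutive values, each target is the value that follows."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

wave = [math.sin(0.1 * t) for t in range(100)]  # toy sine-wave series
X, y = make_windows(wave, window=10)
print(len(X), len(X[0]))  # 90 10
```

These (X, y) pairs are what would be reshaped and fed to a Keras LSTM layer for training.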
Time series forecasting with machine learning - Dr Wei Liu
An introduction to developing and applying time series forecast models with both traditional time series methods and machine learning techniques. A case study from a challenging very short-term electrical price forecasting project is presented.
What is TensorFlow? | Introduction to TensorFlow | TensorFlow Tutorial For Be... (Simplilearn)
This presentation on TensorFlow will help you understand what exactly TensorFlow is and how it is used in deep learning. TensorFlow is a software library developed by Google for the purposes of conducting machine learning and deep neural network research. In this tutorial, you will learn the fundamentals of TensorFlow concepts, functions, and operations required to implement deep learning algorithms and leverage data like never before. This TensorFlow tutorial is ideal for beginners who want to pursue a career in deep learning. Now, let us dive into this TensorFlow tutorial and understand what TensorFlow actually is and how to use it.
Below topics are explained in this TensorFlow presentation:
1. What is Deep Learning?
2. Top Deep Learning Libraries
3. Why TensorFlow?
4. What is TensorFlow?
5. What are Tensors?
6. What is a Data Flow Graph?
7. Program Elements in TensorFlow
8. Use case implementation using TensorFlow
GANs are the hottest new topic in the ML arena; however, they present a challenge for researchers and engineers alike. Their design and, most importantly, their code implementation have been causing headaches for ML practitioners, especially when moving to production.
Starting from the very basics of what a GAN is, passing through a TensorFlow implementation using the most cutting-edge APIs available in the framework, and finally arriving at production-ready serving at scale using Google Cloud ML Engine.
Slides for the talk: https://www.pycon.it/conference/talks/deep-diving-into-gans-form-theory-to-production
Github repo: https://github.com/zurutech/gans-from-theory-to-production
This presentation is Part 2 of my September Lisp NYC presentation on Reinforcement Learning and Artificial Neural Nets. We will continue from where we left off by covering Convolutional Neural Nets (CNN) and Recurrent Neural Nets (RNN) in depth.
Time permitting I also plan on having a few slides on each of the following topics:
1. Generative Adversarial Networks (GANs)
2. Differentiable Neural Computers (DNCs)
3. Deep Reinforcement Learning (DRL)
Some code examples will be provided in Clojure.
After a very brief recap of Part 1 (ANN & RL), we will jump right into CNN and their appropriateness for image recognition. We will start by covering the convolution operator. We will then explain feature maps and pooling operations and then explain the LeNet 5 architecture. The MNIST data will be used to illustrate a fully functioning CNN.
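The convolution operator and pooling steps mentioned above can be sketched in a few lines. This is a pure-Python toy (not the talk's Clojure examples), and the image and kernel values are illustrative:

```python
def conv2d(img, k):
    """Valid 2D convolution: each output cell is the dot product of the
    kernel with the image patch under it (one feature-map entry)."""
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def maxpool2(fm):
    """2x2 max pooling: keep only the strongest response per block."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]              # horizontal edge-detecting kernel
fm = conv2d(img, edge)        # feature map highlighting the vertical edge
print(maxpool2(fm))           # [[1], [1]]
```

In LeNet-5-style architectures, stacks of exactly these two operations (with learned kernels) feed the final fully connected classifier.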
Next we cover Recurrent Neural Nets in depth and describe how they have been used in Natural Language Processing. We will explain why gated networks and LSTMs are used in practice.
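As a rough illustration of the gating the talk motivates, here is one scalar LSTM cell step in plain Python. The weights are arbitrary illustrative values, not a trained model, and real implementations are vectorized:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step. w maps each gate to its
    (input weight, hidden weight, bias) triple."""
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])    # forget gate
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])    # input gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])  # candidate
    c = f * c_prev + i * g        # gated memory update: keep vs. write
    h = o * math.tanh(c)          # exposed hidden state
    return h, c

# Arbitrary weights for illustration.
weights = {'f': (0.5, 0.1, 0.0), 'i': (0.6, 0.2, 0.0),
           'o': (0.7, 0.1, 0.0), 'g': (0.9, 0.3, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.2]:        # a tiny input sequence
    h, c = lstm_step(x, h, c, weights)
print(h, c)
```

The forget gate `f` is what lets the cell carry information across long sequences, which is the standard answer to the vanishing-gradient problem of plain RNNs.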
Please note that some exposure or familiarity with Gradient Descent and Backpropagation will be assumed. These are covered in the first part of the talk for which both video and slides are available online.
A lot of material will be drawn from the new Deep Learning book by Goodfellow & Bengio, Michael Nielsen's online book on Neural Networks and Deep Learning, and several other online resources.
Bio
Pierre de Lacaze has over 20 years of industry experience with AI and Lisp-based technologies. He holds a Bachelor of Science in Applied Mathematics and a Master's Degree in Computer Science.
https://www.linkedin.com/in/pierre-de-lacaze-b11026b/
These slides explain how Convolutional Neural Networks can be coded using Google TensorFlow.
Video available at : https://www.youtube.com/watch?v=EoysuTMmmMc
Artificial neural network for machine learning - grinu
An Artificial Neural Network (ANN) is a computational model based on the structure and functions of biological neural networks, processing information much like the human brain does. An ANN includes a large number of connected processing units that work together to process information and generate meaningful results from it.
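A single processing unit of the kind described above can be sketched in a few lines of Python: a weighted sum of inputs followed by a nonlinear activation. The weights and bias here are arbitrary illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    squashed through a sigmoid activation into (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary weights/bias, just to show the computation.
out = neuron([0.5, 0.3], [0.8, -0.4], bias=0.1)
print(out)  # a value strictly between 0 and 1
```

A network is then just many such units connected in layers, with the weights learned from data rather than chosen by hand.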
Synthetic dialogue generation with Deep Learning - S N
A walkthrough of a Deep Learning based technique that generates TV scripts using a Recurrent Neural Network. After being trained on a dataset, the model will generate a completely new TV script for a scene. One will learn the concepts around RNNs, NLP, and various deep learning techniques.
Technologies to be used:
Python 3, Jupyter, TensorFlow
Source code: https://github.com/syednasar/talks/tree/master/synthetic-dialog
Deep Learning Enabled Question Answering System to Automate Corporate Helpdesk - Saurabh Saxena
Studied the feasibility of applying state-of-the-art deep learning models like end-to-end memory networks and neural attention-based models to the problem of machine comprehension and subsequent question answering in corporate settings with huge amounts of unstructured textual data. Used pre-trained embeddings like word2vec and GloVe to avoid huge training costs.
Introducing the use of machine learning in the MATLAB environment. This technique is related to Artificial Intelligence. Machine Learning is a widely discussed topic in the fields of Computer Science, Robotics, and Artificial Vision.
MLConf 2013: Metronome and Parallel Iterative Algorithms on YARN - Josh Patterson
Online learning techniques, such as Stochastic Gradient Descent (SGD), are powerful when applied to risk minimization and convex games on large problems. However, their sequential design prevents them from taking advantage of newer distributed frameworks such as Hadoop/MapReduce. In this session, we will take a look at how we parallelize parameter estimation for linear models on the next-gen YARN framework Iterative Reduce and the parallel machine learning library Metronome. We also take a look at non-linear modeling with the introduction of parallel neural network training in Metronome as well.
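The sequential design mentioned above is visible even in a minimal SGD loop: every update reads the parameters written by the previous one. This is an illustrative plain-Python sketch of fitting a 1D linear model, not Metronome's implementation:

```python
import random

def sgd_linear(data, lr=0.05, epochs=2000, seed=0):
    """Fit y = w*x + b by stochastic gradient descent on squared error.
    Each step uses one random sample and the *current* parameters,
    which is what makes naive SGD inherently sequential."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)
        err = (w * x + b) - y
        w -= lr * err * x   # gradient of 0.5*err^2 w.r.t. w
        b -= lr * err       # gradient w.r.t. b
    return w, b

# Noise-free samples from y = 2x + 1
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = sgd_linear(data)
print(round(w, 2), round(b, 2))  # converges toward 2 and 1
```

Parallel schemes like Iterative Reduce break this chain by running many such loops on data partitions and periodically averaging the parameters.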
Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors or image and video coding. The trained neural networks for these applications contain a large number of parameters (weights), resulting in a considerable size. Thus, transferring them to a number of clients using them in applications (e.g., mobile phones, smart cameras) benefits from a compressed representation of neural networks.
MPEG Neural Network Coding and Representation (NNR) is the first international standard for efficient compression of neural networks (NNs). The standard is designed as a toolbox of compression methods, which can be used to create coding pipelines. It can be either used as an independent coding framework (with its own bitstream format) or together with external neural network formats and frameworks. For providing the highest degree of flexibility, the network compression methods operate per parameter tensor in order to always ensure proper decoding, even if no structure information is provided. The standard contains compression-efficient quantization and an arithmetic coding scheme (DeepCABAC) as core encoding and decoding technologies, as well as neural network parameter pre-processing methods like sparsification, pruning, low-rank decomposition, unification, local scaling, and batch norm folding. NNR achieves a compression efficiency of more than 97% for transparent coding cases, i.e. without degrading classification quality, such as top-1 or top-5 accuracies.
This talk presents an overview of the context, technical features, and characteristics of the NN coding standard, and discusses ongoing topics such as incremental neural network representation.
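The quantization step at the core of such coding pipelines can be illustrated with a plain-Python sketch of uniform quantization. This is a hedged illustration only; the standard's actual quantization and DeepCABAC entropy coding are considerably more sophisticated:

```python
def quantize(weights, step):
    """Uniform quantization: map each float weight to the nearest
    integer multiple of `step`. The resulting small integers are
    cheap to entropy-code (e.g., with an arithmetic coder)."""
    return [round(w / step) for w in weights]

def dequantize(levels, step):
    """Reconstruct approximate weights at decode time."""
    return [q * step for q in levels]

weights = [0.013, -0.252, 0.498, 0.0, -0.031]
levels = quantize(weights, step=0.05)      # small integers: [0, -5, 10, 0, -1]
restored = dequantize(levels, step=0.05)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(levels, max_err)  # reconstruction error bounded by step/2
```

The trade-off the standard tunes is exactly this one: a larger step yields more compressible integers but a larger reconstruction error, and "transparent" coding means the error stays below what affects classification accuracy.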
How to use the Economic Complexity Index to guide innovation plans - Data Science Milan
In this talk Mauro Pelucchi will present the Economic Complexity Index (ECI) and the Product Complexity Index (PCI), two network measures that provide unique insights into economic development patterns. We will show how to compute these metrics and explore the network theory behind these indices (Hidalgo and Hausmann, 2009).
The measures are also related to various dimensionality reduction methods and can be used to determine distances between nodes based on their similarity. Finally, we will discover how to interpret these metrics to compare countries, markets, and products, and to guide our plans in a data-driven context.
"You don't need a bigger boat": serverless MLOps for reasonable companiesData Science Milan
It is indeed a wonderful time to build machine learning systems, as the growing ecosystems of tools and shared best practices make even small teams incredibly productive at scale. In this talk, we present our philosophy for modern, no-nonsense data pipelines, highlighting the advantages of an (almost) pure serverless and open-source approach, and showing how the entire toolchain works - from raw data to model serving - on a real-world dataset.
Finally, we argue that the crucial component for analyzing data pipelines is not the model per se, but the surrounding DAG, and present our proposal for producing automated "DAG cards" from Metaflow classes.
Bio:
Jacopo Tagliabue was co-founder and CTO of Tooso, an A.I. company in San Francisco acquired by Coveo in 2019. Jacopo is currently the Lead A.I. Scientist at Coveo. When not busy building A.I. products, he is exploring research topics at the intersection of language, reasoning and learning, with several publications at major conferences (e.g. WWW, SIGIR, RecSys, NAACL). In previous lives, he managed to get a Ph.D., do scienc-y things for a pro basketball team, and simulate a pre-Columbian civilization.
Topics: MLOps, Metaflow, model cards.
Question generation using Natural Language Processing by QuestGen.AI - Data Science Milan
Manual question generation (worksheets and quizzes) in edtech is not scalable for online transformation and leads to increased workload on teachers due to the pandemic. In this session, we will explore natural language processing (NLP) techniques to generate Multiple Choice Questions automatically from any text content using the T5 transformer model. We will also explore methods to deploy the T5 question generation model for fast CPU inference using ONNX conversion and quantization.
Bio:
Ramsri is a Lead Data Scientist with 8+ years of work experience across Silicon Valley, Singapore, and India. Most recently he was a co-founder and CTO of a funded AI-assisted assessments startup. He has spent the last 2 years developing question generation models in edtech and has also released an open-source library on the topic.
Abstract: Data preparation and modelling are the activities that take most of the time in a typical data scientist workday. In this session we’ll see how AWS services for Analytics and data management can be effectively used and integrated in AI/ML pipelines. We’ll focus on AWS Glue, AWS Glue DataBrew and AWS Data Wrangler with a bit of theory and hands-on demos.
Bio:
Francesco Marelli is a senior solutions architect at Amazon Web Services. He has lived and worked in the UK, Italy, Switzerland, and other countries in EMEA. He is specialized in the design and implementation of Analytics, Data Management, and Big Data systems. Francesco also has strong experience in systems integration and in the design and implementation of applications.
Topics: machine learning pipelines, AWS, cloud.
MLOps with a Feature Store: Filling the Gap in ML Infrastructure - Data Science Milan
A Feature Store enables machine learning (ML) features to be registered, discovered, and used as part of ML pipelines, thus making it easier to transform and validate the training data that is fed into machine learning systems. Feature stores can also enable consistent engineering of features between training and inference, but to do so, they need a common data processing platform. The first Feature Stores, developed at hyperscale AI companies such as Uber, Airbnb, and Facebook, enabled feature engineering using domain specific languages, providing abstractions tailored to the companies’ feature engineering domains. However, a general purpose Feature Store needs a general purpose feature engineering, feature selection, and feature transformation platform.
In this talk, we describe how we built a general purpose, open-source Feature Store for ML around dataframes and Apache Spark. We will demonstrate how data engineers can transform and engineer features from backend databases and data lakes, while data scientists can use PySpark to select and transform features into train/test data in a file format of choice (.tfrecords, .npy, .petastorm, etc.) on a file system of choice (S3, HDFS). Finally, we will show how the Feature Store enables end-to-end ML pipelines to be factored into feature engineering and data science stages that can each run at different cadences.
Bio:
Fabio Buso is the head of engineering at Logical Clocks AB, where he leads the Feature Store development. Fabio holds a master's degree in cloud computing and services with a focus on data intensive applications, awarded by a joint program between KTH Stockholm and TU Berlin.
Topics: feature store, MLOps.
Reinforcement Learning is a growing subset of Machine Learning and one of the most important frontiers of Artificial Intelligence. Its goal is to capture higher logic and use more adaptable algorithms than classical Machine Learning.
Formally it denotes a set of algorithms that deal with sequential decision-making and have the potential capability to make highly intelligent decisions depending on their local environment.
Reinforcement Learning problems can be described as an agent that has to make decisions in its environment in order to optimize a cumulative reward, and it is clear that this formalization applies to a great variety of tasks in many different fields.
In this talk, the main features of the most important Reinforcement Learning algorithms will be illustrated and explored in depth, with some concrete and explanatory examples.
Bio:
Marco Del Pra
Marco was born in Venice 41 years ago, has two master's degrees (Computer Science and Mathematics), and has two important publications in applied mathematics.
He has been working in Artificial Intelligence for 10 years, mainly as a freelancer. Among others, he worked for the European Commission's Joint Research Center, for Cuebiq, and as Data Science Lead for Microsoft's Artificial Intelligence projects in Italy.
Ludwig: A code-free deep learning toolbox | Piero Molino, Uber AI - Data Science Milan
The talk will introduce Ludwig, a deep learning toolbox that allows users to train models and use them for prediction without writing code. It is unique in its ability to help make deep learning easier to understand for non-experts and to enable faster model improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures.
Bio:
Piero Molino is a Senior Research Scientist at Uber AI with focus on machine learning for language and dialogue. Piero completed a PhD on Question Answering at the University of Bari, Italy. Founded QuestionCube, a startup that built a framework for semantic search and QA. Worked for Yahoo Labs in Barcelona on learning to rank, IBM Watson in New York on natural language processing with deep learning and then joined Geometric Intelligence, where he worked on grounded language understanding. After Uber acquired Geometric Intelligence, he became one of the founding members of Uber AI Labs.
Audience projection of target consumers over multiple domains: a NER and Baye... - Data Science Milan
Traditional market research is generally conducted by questionnaires or other forms of explicit feedback, directly asked to an ad hoc panel of individuals that in aggregate are representative of a larger group of people. Unfortunately, those traditional approaches are often invasive, nonscalable, and biased. Indirect approaches based on sparse and implicit consumer feedback (e.g., social network interactions, web browsing, or online purchases) are more scalable, authentic, and more suitable for real-time consumer insights.
Although those sources of implicit consumer feedback provide relevant and detailed pictures of the population, they individually provide only a limited set of observable behaviors.
The Holy Grail of market research is the ability to merge different sources of consumer interests into an augmented view that connects all the dots across multiple domains.
Unfortunately, user-centric "fusion" algorithms present many limitations in the case of heterogeneous datasets strongly differing in terms of size and density and when the number of sources to merge increases.
We propose a novel approach, Audience Projection, able to define a target audience as a subset of the population in a source domain and to project this target onto a set of users in a destination dataset.
We will show how libraries such as spaCy can provide Deep Learning implementations for Named Entity Recognition (NER) to match related brands, and we will use Bayesian Inference to transfer knowledge from the source domain. This way, we can estimate the probability that a user belongs to the target, using the source distribution of the volume of interests of common entities as model evidence and the source target size as the prior probability.
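The Bayesian step can be illustrated with a direct application of Bayes' rule. The numbers below are hypothetical; the actual model in the talk is richer, but the structure (prior = source target size, evidence = observed interest profile) is the same:

```python
def posterior(prior, likelihood_target, likelihood_rest):
    """P(target | evidence) via Bayes' rule. `prior` is the source
    target size as a fraction of the population; the likelihoods are
    the probabilities of the observed interest profile inside and
    outside the target."""
    evidence = likelihood_target * prior + likelihood_rest * (1.0 - prior)
    return likelihood_target * prior / evidence

# Hypothetical values: 10% of the source population is in the target,
# and the observed interest pattern is 4x more likely among target users.
p = posterior(prior=0.10, likelihood_target=0.40, likelihood_rest=0.10)
print(round(p, 3))  # 0.308
```

Even a 4x likelihood ratio only lifts a 10% prior to about 31%, which is why the volume of interests across many common entities is aggregated as evidence rather than a single signal.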
Bio:
Gianmario Spacagna is the chief scientist and head of AI at Helixa. His team’s mission is building the next generation of behavior algorithms and models of human decision making with careful attention to their potential and effects on society. His experience covers a diverse portfolio of machine learning algorithms and data products across different industries. Previously, he worked as a data scientist in IoT automotive (Pirelli Cyber Technology), retail and business banking (Barclays Analytics Centre of Excellence), threat intelligence (Cisco Talos), predictive marketing (AgilOne), plus some occasional freelancing. He’s a co-author of the book Python Deep Learning, contributor to the “Professional Manifesto for Data Science,” and founder of the Data Science Milan community. Gianmario holds a master’s degree in telematics (Polytechnic of Turin) and software engineering of distributed systems (KTH of Stockholm). After having spent half of his career abroad, he now lives in Milan. His favorite hobbies include home cooking, hiking, and exploring the surrounding nature on his motorcycle.
Weakly Supervised Learning: Introduction and Best Practices
In the talk we will introduce the definitions of the three main types of weakly supervised learning: incomplete, inexact, and inaccurate. We will examine how models can be trained in the case of weak supervision and review real applications of weakly supervised learning, showing how it can improve results and decrease costs.
Bio:
Kristina Khvatova works as a Software Engineer at Softec S.p.A. Currently she is involved in the development of a project for data analysis and visualisation; it includes quantitative and qualitative analysis based on classification, optimisation, time series prediction, and anomaly detection techniques. She obtained a master's degree in Mathematics at Saint Petersburg State University and a master's degree in Computer Science at the University of Milano-Bicocca.
GANs beyond nice pictures: real value of data generation, Alex Honchar - Data Science Milan
GANs beyond nice pictures: real value of data generation (theory and business applications)
About the speaker, Alex Honchar:
I am a machine learning expert currently applying AI in medtech, fintech and other areas. I also enjoy teaching and blogging (50k+ views monthly) about deep learning applications. As a member of academia, I have a track record of scientific publications as well. Besides science, I travel, do sports, and perform card magic.
Continual/Lifelong Learning with Deep Architectures, Vincenzo Lomonaco - Data Science Milan
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of AI is building an artificial continually learning agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex skills and knowledge.
"Continual Learning" (CL) is indeed a fast emerging topic in AI concerning the ability to efficiently improve the performance of a deep model over time, dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction of the topic, we’ll implement different Continual Learning strategies and assess them on common vision benchmarks. We’ll conclude the workshop with a look at possible real world applications of CL.
Vincenzo Lomonaco is a Deep Learning PhD student at the University of Bologna and founder of ContinualAI.org. He is also the PhD students' representative at the Department of Computer Science and Engineering (DISI) and a teaching assistant for the courses “Machine Learning” and “Computer Architectures” in the same department. Previously, he was a Machine Learning software engineer at IDL in-line Devices and a Master's student at the University of Bologna, where he graduated cum laude in 2015 with the dissertation “Deep Learning for Computer Vision: a Comparison Between CNNs and HTMs on Object Recognition Tasks".
Processing 3D images has many use cases: for example, improving autonomous car driving, enabling digital conversions of old factory buildings, and enabling augmented reality solutions for medical surgeries. 3D images also help in 3D modeling and safety evaluation of products.
3D image processing brings enormous benefits but also amplifies computing cost. The size of the point cloud, the number of points, sparse and irregular point clouds, and the adverse impact of light reflections, (partial) occlusions, etc., make it difficult for engineers to process point clouds.
Moving from hand-crafted features to deep learning techniques to semantically segment images, classify objects, detect objects, detect actions in 3D videos, and so on, we have come a long way in 3D image processing.
3D Point Cloud image processing is increasingly used to solve Industry 4.0 use cases to help architects, builders and product managers. I will share some of the innovations that are helping the progress of 3D point cloud processing. I will share the practical implementation issues we faced while developing deep learning models to make sense of 3D Point Clouds.
Attendees: Beginners and Intermediate skilled in Image Processing and 3D Point Clouds
Profile of the speaker:
SK Reddy is the Chief Product Officer AI at Hexagon (www.hexagon.com). He is an AI and ML expert, a two-time successful startup entrepreneur, and an AI startup advisor. He is also a frequent speaker at conferences and an AI blogger.
Deep time-to-failure: predicting failures, churns and customer lifetime with ... - Data Science Milan
The notebook and documentation of the original tutorial is available at https://github.com/gm-spacagna/deep-ttf.
Deep Time-to-Failure: predicting failures, churns and customer lifetime using recurrent neural networks.
Machineries and customers are among the most valuable assets for many businesses. A common trait of these assets is that sooner or later they will fail or, in the case of customers, they will churn.
In order to catch those failure events we would ideally consider the whole history of the machine/customer available information and learn smart representations of the system status over time.
Traditional machine learning and statistical models approach the prediction of time-to-failure, a.k.a. expected lifetime, as a supervised regression problem using handcrafted features.
Training those models is hard because of three main reasons:
The complexity of extracting predictive features from time-series without overfitting.
The difficulty of modeling uncertainty and confidence levels in the predictions.
The scarcity of labeled data: failure events are by definition rare, which results in highly unbalanced training datasets.
The first issue can be solved adopting recurrent neural architectures.
A solution to the last two problems could be to exploit censored data and to build survival regression models.
In this talk we will present a novel technique based on recurrent neural networks that can turn any length-variable sequence of data into a probability distribution representing the estimated remaining time to the failure event. The network will be trained in presence of ground truth as well as with right-censored data.
We will demonstrate the approach using a case study on the simulated degradation of 100 jet engines, provided by NASA.
During the tutorial you will learn:
What is Survival Analysis and what are the most popular Survival Regression techniques.
How a Weibull distribution can be used as a generic distribution for modeling time-to-failure events.
How to build a deep learning algorithm in Keras leveraging recurrent units (LSTM or GRU) that can map raw time-series of covariates into Weibull probability distributions.
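The Weibull parametrization mentioned above can be sketched in a few lines of plain Python. The scale/shape values below are hypothetical illustrative parameters of the kind such a network might output for one machine, not results from the NASA dataset:

```python
import math

def weibull_survival(t, alpha, beta):
    """P(T > t) for a Weibull lifetime with scale alpha and shape beta:
    S(t) = exp(-(t/alpha)^beta). beta > 1 means an increasing hazard
    (wear-out), beta < 1 a decreasing one (infant mortality)."""
    return math.exp(-((t / alpha) ** beta))

def weibull_median(alpha, beta):
    """Median time-to-failure: the t where S(t) = 0.5."""
    return alpha * math.log(2.0) ** (1.0 / beta)

# Hypothetical parameters for one engine: scale ~200 cycles, wear-out hazard.
alpha, beta = 200.0, 1.5
print(weibull_survival(100.0, alpha, beta))  # chance of surviving 100 cycles
print(weibull_median(alpha, beta))           # median remaining lifetime
```

Because the network outputs the whole (alpha, beta) pair per sequence, it yields a full probability distribution over the remaining time to failure rather than a point estimate, which is how the uncertainty issue above is addressed.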
The tutorial will also cover a few common pitfalls, visualizations and evaluation tools useful for testing and adapting this approach to generic use cases.
You are free to bring your laptop if you would like to do some live coding and experiment yourself. In this case we strongly encourage you to check that you have all of the requirements installed on your machine.
More details on the required packages can be found on the Github repository gm-spacagna/deep-ttf.
50 Shades of Text - Leveraging Natural Language Processing (NLP), Alessandro ... - Data Science Milan
50 Shades of Text - Leveraging Natural Language Processing (NLP) to validate, improve, and expand the functionalities of a product
Nowadays, every company either stores or produces text data: from web logs and user queries to translations and support tickets. Yet not everyone knows how to extract valuable insights from it. In this session, we will present a practical case of moving from raw text data to a valuable business application, leveraging some of the major NLP methodologies (word embeddings, word2vec, doc2vec, fastText, etc.).
Bio: Alessandro is a data veteran. He holds two Master’s degrees in computer engineering, one from Politecnico di Milano and the other from University of Illinois at Chicago (UIC).
He started his career in data consultancy, where he mastered Apache Spark for Machine Learning projects and subsequently joined WW Grainger, one of the largest MRO e-commerce companies in the United States. In September 2017, after more than 5 years in the USA, Alessandro returned to his native country, Italy, where he is now leading a team of data scientists. His current work focuses on achieving energy efficiency through the automation of energy management processes for commercial customers.
Pricing Optimization: Close-out, Online and Renewal strategies, Data Reply - Data Science Milan
“Product close-out strategy” by Ilaria Gianoli, Data Scientist, Data Reply
Abstract:
How to deal with products in their decline phase? Ilaria will share her experience in optimizing the close-out strategy for a multinational retail leader, with a particular focus on the price optimization.
Bio:
Ilaria is a Data Scientist at Data Reply, where she works as a consultant across different industries, in particular retail. She uses her mathematical, statistical, and machine learning background to turn data into business opportunities. She also works closely with the business to provide quantitative support for decision making, adapting the complexity of the mathematical models to customer needs.
She holds a MSc in Applied Statistics - Mathematical Engineering from Politecnico di Milano.
“Online pricing: from theory to application” by Giovanni Corradini, Data Scientist, Data Reply
Abstract:
Multi-Armed Bandit algorithms are populating the world of e-commerce. How do they work?
Giovanni will share the basics of this field and an application of a state-of-the-art algorithm on a real-world simulation of the ticket industry.
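As a minimal illustration of the explore/exploit trade-off behind Multi-Armed Bandits, here is a plain-Python epsilon-greedy sketch. The conversion rates are hypothetical and exaggerated for clarity; the talk covers more advanced, state-of-the-art algorithms:

```python
import random

def epsilon_greedy(true_rates, steps=5000, eps=0.1, seed=42):
    """Epsilon-greedy bandit: with probability eps pull a random arm
    (explore), otherwise pull the arm with the best observed mean
    reward (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)     # pulls per arm
    values = [0.0] * len(true_rates)   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_rates))   # explore
        else:
            arm = values.index(max(values))        # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

# Three hypothetical price points with unknown conversion rates.
counts, values = epsilon_greedy([0.05, 0.5, 0.1])
print(counts)  # pulls concentrate on the best-converting arm over time
```

In an e-commerce setting each "arm" is a candidate price and each "reward" a purchase, so the algorithm learns the best price while it is selling, instead of after a fixed A/B test.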
Bio: Giovanni is a Data Scientist at Data Reply.
He holds a MSc in Applied Statistics - Mathematical Engineering from Politecnico di Milano.
He has a background in statistics, machine learning and data mining and he provides decision making support to industries in many different fields.
“Renewal Price Optimization for Subscription products” by Riccardo Lorenzon, Data Scientist, Data Reply
Abstract:
We are observing a huge shift in modern economy from a pay-per-product model to a subscription-based model. When it comes to pricing strategies, it is important both to close the single deal and monetize long-term relationships with the customer. Riccardo will present an application of subscription renewal pricing optimization models for a company belonging to the publishing industry.
Bio:
Riccardo holds a MSc in Mathematical Models for Decision Making from Politecnico di Milano.
He developed hands-on experience on end-to-end data projects across multiple industries. His proactive creativity helps him be very effective in the business case design and early stages of projects.
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrig...Data Science Milan
"How Pirelli uses Domino and Plotly for Smart Manufacturing" by Alberto Arrigoni, Senior Data Scientist, Pirelli (pirelli.com)
Abstract:
Pirelli, a global performance tire manufacturer, uses data science in its 20 factories to improve quality and efficiency, and reduce energy consumption. For this “Smart Manufacturing” initiative, Pirelli’s data science team has developed predictive models and analytics tools to monitor processes, machines and materials on the factory floors. In this talk we will show some of the solutions we deploy, demonstrate how we used Domino’s data science platform and Plot.ly to build these solutions, and discuss the next steps in this journey towards predictive maintenance.
Bio:
Alberto Arrigoni is a data scientist at Pirelli, where he works on processing sensor and telemetry data for IoT, Smart Factories, and connected-vehicle applications.
He works closely with all major business units such as R&D, industrial engineering and BI to develop tailored machine learning algorithms and production systems.
He holds a PhD in biostatistics from the University of Milano-Bicocca and, prior to joining Pirelli, was a staff data scientist at the National Institute of Molecular Genetics (Milan), as well as a Fulbright student at Santa Clara University and a visiting PhD student at Pacific Biosciences (Menlo Park, CA).
A brief introduction to Cerved data, the role of the data scientist at Cerved, and how a data scientist can take advantage of graph databases.
Bio:
Stefano Gatti: Born in 1970, he has been involved for more than 15 years in several big data and technology-driven projects at leading business information companies like Lince and Cerved. He is very fond of agile methodologies and tries to apply them at all organizational levels. In recent years he has been strongly engaged in facilitating the spread of innovation at Cerved and in taking advantage of new big and smart data technologies, especially from a business usage perspective. Datatelling, open innovation, and partnerships with smart actors of the worldwide data-driven innovation ecosystem are his current mantras.
Nunzio Pellegrino: Data Scientist at Cerved, as part of the Innovation team, with a focus on extracting value from data and solving problems with the latest technologies available. I have a degree in Statistics with a background in Machine Learning. I worked primarily on Data Integration and Business Intelligence projects for 3 years. At the moment, I am the product owner of a web application based on GraphDB and am involved in Italian Open Data projects. I am an R enthusiast, a Python practitioner, and fascinated by the graph ecosystem.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Time Series Classification with Deep Learning | Marco Del Pra
Outline: Introduction, Time Series Classification, Convolutional Neural Networks, Inception Time, Echo State Networks, Conclusions, Bibliography
Time Series Classification with Deep Learning
May 5, 2020
Motivation
In recent years, Time Series Classification (TSC) has become one of the most challenging problems in data mining
Many classification problems can be treated as Time Series Classification problems
Time series appear in many real-world applications:
health care,
human activity recognition,
cyber-security,
finance.
Many areas are showing a strongly growing interest in applications based on time series
Non-deep-learning algorithms require some kind of feature engineering as a separate step before classification
Deep Learning algorithms already incorporate this kind of feature engineering internally
Examples of Time Series Classification Problems
Electrocardiogram analysis
Electrocardiogram records are saved in time series form
Recognizing a disease from a record is a TSC problem
Gesture recognition
Many devices record series of images to interpret the user's gestures
Identifying the correct gesture is a TSC problem
Anomaly detection
Anomaly detection is the identification of unusual events
Often the data in anomaly detection are time series
Detecting and recognizing an anomaly is a TSC problem
Problem definition
Given a set of objects with the same structure and a fixed set of different classes, a dataset is a collection of pairs (object, class)
Given a dataset, the goal of a classification algorithm is to build a model that assigns to each object the probability of belonging to each of the possible classes, according to the features of the objects associated with each class
Univariate time series: an ordered set of real values
M-dimensional multivariate time series: M different univariate time series with the same length
Time Series Classification problem: a classification problem where the objects of the dataset are univariate or multivariate time series
Perceptron (Neuron)
The Perceptron (Neuron) is the basic element of many machine learning algorithms
A Perceptron computes the weighted sum of its input values and then applies an activation function to the result
Most common activation functions: rectifier (ReLU), sigmoid, hyperbolic tangent
The result of the activation function is referred to as the activation of the Perceptron and represents its output value
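The weighted sum followed by an activation can be sketched in a few lines of NumPy (a minimal illustration; the tanh activation and the example values are arbitrary choices, not from the slides):

```python
import numpy as np

def perceptron(x, w, b, activation=np.tanh):
    # weighted sum of the inputs plus a bias, then the activation function
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # input values
w = np.array([0.1, 0.4, -0.2])   # weights
a = perceptron(x, w, 0.05)       # the activation (output value) of the Perceptron
```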
Multi Layer Perceptron Architecture
A Multi Layer Perceptron (MLP) is a class of feedforward neural networks, with one Input Layer, one or more Hidden Layers, and one Output Layer
A Multi Layer Perceptron is fully connected
Each node of the hidden layers and of the output layer is a Perceptron
The output of the Multi Layer Perceptron is obtained by computing the activations of its Perceptrons in sequence
The function that connects the input and the output depends on the values of the weights.
Classification with Multi Layer Perceptron
The Multi Layer Perceptron is commonly used for classification problems
It is necessary to represent the pairs (object, class) in the dataset in a more suitable way:
Every object must be represented with a vector, called the input vector
Every class must be represented with its one-hot label vector, called the target
For training, the MLP uses the Backpropagation technique, which iterates over the input vectors
Iteration steps:
Computation of the output for the current input vector
Computation of the prediction error with a cost function
Update of the weights with gradient descent
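As a minimal sketch of one such iteration, here is a single gradient-descent step for a linear softmax classifier (no hidden layer, so the gradient is simple; the learning rate and the toy data are arbitrary assumptions, not from the slides):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(W, x, target, lr=0.1):
    y = softmax(W @ x)                    # 1. output for the current input vector
    loss = -np.sum(target * np.log(y))    # 2. prediction error (cross-entropy cost)
    W = W - lr * np.outer(y - target, x)  # 3. weight update with gradient descent
    return W, loss

# toy example: 3 classes, 4-dimensional input vector, one-hot target
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1
x = np.array([1.0, 0.5, -0.3, 2.0])
target = np.array([0.0, 1.0, 0.0])
losses = []
for _ in range(50):
    W, loss = train_step(W, x, target)
    losses.append(loss)
```

Repeating the step drives the loss down, which is exactly what Backpropagation does across the whole training set.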
Backpropagation minimizes the loss on the training data
After training, the model is able to predict the estimated probability of an object belonging to each class
Why not use the MLP for TSC, taking the whole multivariate time series as input?
MLPs don't work well for TSC problems because the length of the time series severely hurts the computational speed
It is necessary to extract the relevant features of the input time series
The big advantage of Deep Learning algorithms is that these relevant features are learned during training
After many layers used for the extraction of the relevant features, Deep Learning architectures use algorithms like the MLP to obtain the classification
Deep Learning for Time Series Classification
A Deep Learning algorithm is a composition of several layers that implement non-linear functions
Every layer takes as input the output of the previous layer and applies its non-linear transformation to compute its own output
The behavior of the non-linear transformations is controlled by trainable parameters
Often, the last layer is a Multi Layer Perceptron or a Ridge regressor
We consider 3 different Deep Learning architectures:
Convolutional Neural Network
Inception Time
Echo State Network
Convolutional Neural Networks Architecture
A Convolutional Neural Network (CNN) is able to successfully capture spatial and temporal patterns through the application of trainable filters
The pre-processing required by a Convolutional Neural Network is much lower than for other classification algorithms
A Convolutional Neural Network is composed of three different layers:
1 Convolutional Layer
2 Pooling Layer
3 Fully-Connected Layer
Several Convolutional Layers and Pooling Layers are alternated before the Fully-Connected Layer
Convolutional Layer
The Convolutional Layer performs a convolution of an input series of feature maps with a filter matrix to obtain as output a different series of feature maps
The convolution is defined by a set of filters, which are fixed-size matrices.
(Figure: a single convolution step, and the convolution between one input feature map and a filter)
The Convolutional Layer executes the convolution between every filter and every input feature map
The values of the filters are considered trainable weights and are therefore learned during training.
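A plain NumPy sketch of a 1D convolution between a series and a single filter (illustrative only; a real Convolutional Layer applies every filter to every feature map):

```python
import numpy as np

def conv1d(x, f, stride=1):
    # slide the filter along the series, taking a dot product at each position
    n, k = len(x), len(f)
    return np.array([np.dot(x[i:i + k], f)
                     for i in range(0, n - k + 1, stride)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([1.0, 0.0, -1.0])   # a simple difference filter
conv1d(x, f)  # → array([-2., -2., -2.])
```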
Stride
Stride controls how the filter convolves over an input feature map.
The value of the stride indicates how many units the filter is shifted at a time.
Padding
Padding indicates how many extra columns and rows are added outside an input feature map before applying a convolution filter
All the cells of the new columns and rows have a dummy value, usually 0.
Padding is used to preserve the original size of the input feature map after the Convolutional Layer, or to make it shrink more slowly
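The combined effect of padding and stride on the output size follows the standard formula out = ⌊(n + 2p − k)/s⌋ + 1, which is implied by the stride and padding definitions above (not stated explicitly in the slides):

```python
def conv_output_length(n, k, stride=1, padding=0):
    # n: input length, k: filter size
    return (n + 2 * padding - k) // stride + 1

conv_output_length(100, 3)             # → 98: the feature map shrinks
conv_output_length(100, 3, padding=1)  # → 100: padding preserves the size
conv_output_length(100, 3, stride=2)   # → 49: a larger stride shrinks it faster
```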
Pooling Layer
The purpose of Pooling is to achieve a dimension reduction of the feature maps
Pooling is applied to sliding windows of fixed size across the width and height of every input feature map
There are two types of pooling: Max Pooling and Average Pooling.
For every sliding window, the result of the pooling is the maximum or the average value
Max Pooling works as a noise suppressant, discarding noisy activations.
Stride and padding must also be specified for the Pooling Layer.
The advantage of the pooling operation is that it down-samples the convolutional output bands, thus reducing variability in the hidden activations.
Fully-Connected Layer
The goal of the Fully-Connected Layer is to learn non-linear combinations of the high-level features
Usually the Fully-Connected Layer is implemented with a Multi Layer Perceptron.
After several convolution and pooling operations, the output series of feature maps is flattened into a vector
The flattened vector is the input of the Multi Layer Perceptron
The output has a number of neurons equal to the number of possible classes
Backpropagation is applied at every iteration of training, and finally the model is able to classify the time series
Hyperparameters
Number of convolution filters
Too few filters cannot extract enough features to achieve classification
Too many filters are unhelpful and computationally expensive
Convolution filter size and initial values
Smaller filters collect as much local information as possible
Bigger filters represent more global, high-level and representative information
The filters are usually initialized with random values.
Pooling method and size
Method: Max or Average
Size: as it increases, the dimension reduction is greater, but more information is lost
Weight initialization
The weights are usually initialized with small random numbers
Activation function
Rectifier, sigmoid or hyperbolic tangent are usually chosen
Number of epochs
The number of times the entire training set passes through the model
Implementation
Building a Convolutional Neural Network is very easy using the Python library Keras
To build a CNN in Keras, it is sufficient to:
instantiate a Sequential model
add the desired Convolutional, MaxPooling and Dense Keras layers to the Sequential model
specify the number of filters and the filter size for each Convolutional Layer
specify the pooling size for each Pooling Layer
To compile the model, Keras requires:
the input shape
the optimizer
the loss function
a list of metrics
To train a model in Keras, it is sufficient to call the function fit(), specifying the needed parameters:
the training data (input data and targets)
the number of epochs
the validation data
To use the model, pass an array of inputs to the function predict() and it returns the array of outputs
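The steps above can be sketched as a small Keras model (a hypothetical architecture: the layer counts, filter numbers, input shape and optimizer here are arbitrary illustrations, not values from the talk):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

n_timesteps, n_features, n_classes = 128, 1, 3  # hypothetical dataset sizes

model = Sequential([
    # Convolutional Layers: specify number of filters and filter size
    Conv1D(filters=32, kernel_size=5, activation="relu",
           input_shape=(n_timesteps, n_features)),
    MaxPooling1D(pool_size=2),          # Pooling Layer: specify pooling size
    Conv1D(filters=64, kernel_size=3, activation="relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),                          # flatten the feature maps into a vector
    Dense(n_classes, activation="softmax"),  # one output neuron per class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# training and prediction (X_train, y_train, etc. are placeholders):
# model.fit(X_train, y_train, epochs=10, validation_data=(X_val, y_val))
# probs = model.predict(X_test)
```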
Inception Time Architecture
Recently, a deep Convolutional Neural Network called InceptionTime was introduced.
This kind of network shows high accuracy and very good scalability.
The Inception Network consists of a series of Inception Modules followed by a Global Average Pooling Layer and a Fully Connected Layer
A residual connection is added at every third Inception Module
Inception Module
An Inception Module consists of 4 layers:
Bottleneck Layer
A set of parallel Convolutional Layers with different filter sizes
MaxPooling Layer
Depth Concatenation Layer
The network is able to extract relevant features at multiple resolutions thanks to the use of filters with different sizes
The internal layers choose which filter size is relevant to learn the relevant features
This is very helpful to identify a high-level feature that can have different sizes on different input feature maps.
Receptive Field and results
A neuron in an Inception Network depends only on a region of the input feature map, which is called the Receptive Field of the neuron
For time series data, the total Receptive Field of an Inception Network of depth d, with filter length k_i at layer i, is given by

RF = 1 + Σ_{i=1}^{d} (k_i − 1)   (1)

It is very interesting to investigate how the accuracy of an Inception Network changes as the Receptive Field varies
The Figure shows the Inception Network's accuracy over a simulated dataset, with respect to the filter length as well as the input time series length
It is evident that a longer filter is required to produce more accurate results
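Formula (1) is easy to evaluate in code (stride 1 at every layer is assumed; the example kernel sizes are arbitrary):

```python
def receptive_field(kernel_sizes):
    # RF = 1 + sum over layers of (k_i - 1), assuming stride 1 everywhere
    return 1 + sum(k - 1 for k in kernel_sizes)

receptive_field([5, 3, 3])  # → 9: each layer widens the field by k_i - 1
```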
The Figure shows the Inception Network's accuracy over a simulated dataset, with respect to the network's depth as well as the length of the input time series.
It turns out that adding more layers does not necessarily improve the network's performance, particularly for datasets with a small training set
A single Inception Network sometimes exhibits high variance in accuracy
For this reason, Inception Time is implemented as an ensemble of many Inception Networks
In this way the algorithm improves its stability, and shows high accuracy and very good scalability
Different experiments have shown that its time complexity grows linearly with both the training set size and the time series length
Implementation
A full implementation of Inception Time, written in Python using the Keras library, is available on GitHub at this link:
https://github.com/hfawaz/InceptionTime
This implementation is based on 3 main files:
The file main.py contains the necessary code to run an experiment
The file inception.py contains the Inception Network implementation
The file nne.py contains the code that ensembles a set of Inception Networks
The implementation uses the Keras Model class (functional API), since some layers of InceptionTime work in parallel
The code that implements the Inception Module building block is very similar to that described for CNNs, and can easily be included in Keras-based code in order to implement customized architectures
The structure of the code that implements compilation, training and use of the model is very similar to that described for Convolutional Neural Networks
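A sketch of such a parallel module using the Keras functional API (the filter counts and kernel sizes are illustrative assumptions, not the exact values of the official implementation):

```python
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D,
                                     Concatenate, Activation)
from tensorflow.keras.models import Model

def inception_module(x, n_filters=32, kernel_sizes=(10, 20, 40)):
    # Bottleneck Layer: a size-1 convolution reduces the number of feature maps
    bottleneck = Conv1D(n_filters, 1, padding="same", use_bias=False)(x)
    # parallel Convolutional Layers with different filter sizes
    convs = [Conv1D(n_filters, k, padding="same", use_bias=False)(bottleneck)
             for k in kernel_sizes]
    # MaxPooling Layer branch, followed by a size-1 convolution
    pool = MaxPooling1D(3, strides=1, padding="same")(x)
    pool = Conv1D(n_filters, 1, padding="same", use_bias=False)(pool)
    # Depth Concatenation Layer merges all branches along the channel axis
    return Activation("relu")(Concatenate(axis=-1)(convs + [pool]))

inp = Input(shape=(128, 1))
out = inception_module(inp)
model = Model(inp, out)  # output has 4 * n_filters = 128 feature maps
```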
Recurrent Neural Networks
Echo State Networks are a type of Recurrent Neural Network
Recurrent Neural Networks are networks of neuron-like nodes organized into successive layers
As in standard Neural Networks, the neurons are divided into an Input Layer, a Hidden Layer and an Output Layer
Each connection between neurons has a corresponding trainable weight
Every neuron is assigned to a fixed timestep
The neurons in the hidden layer are also forwarded in a time-dependent direction
The input and output neurons are connected only to the hidden layers with the same assigned timestep
The activations of the neurons are computed in time order
Motivation of Echo State Networks
Recurrent Neural Networks (RNNs) are rarely applied to Time Series Classification, mainly due to three factors:
1 This type of architecture is designed mainly to predict an output for each element of the time series
2 Recurrent Neural Networks typically suffer from the vanishing gradient problem
3 The training of an RNN is hard to parallelize and computationally expensive
Echo State Networks were designed to mitigate the problems of Recurrent Neural Networks by eliminating the need to compute the gradient for the hidden layers
This reduces the training time and avoids the vanishing gradient problem
Many results show that Echo State Networks are really helpful for handling chaotic time series
Echo State Networks Architecture
The architecture of an Echo State Network consists of an Input Layer, a Reservoir, a Dimension Reduction Layer, a Readout, and an Output Layer
The Reservoir is organized like a sparsely connected random RNN
The Dimension Reduction algorithm is usually implemented with PCA
The Readout is usually implemented as an MLP or a Ridge regressor
The weights between the Input Layer and the Reservoir, and those within the Reservoir, are randomly assigned and not trainable
The weights in the Readout are trainable
Reservoir
The Reservoir is connected to the Input Layer and consists of a set of internal, sparsely connected neurons and of its own output neurons.
In the Reservoir there are 4 types of weights:
the input weights
the internal weights
the output weights
the feedback (back-projection) weights
All these weights are randomly initialized, time independent and not trainable
The output of the Reservoir is computed separately for every time step
At every time step, the activation of every internal and output neuron is computed
This output is added to the total Reservoir output, but also acts as input for the next time step through the feedback weights.
The Reservoir creates a recurrent non-linear embedding of the input into a higher-dimensional representation
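A minimal NumPy sketch of the Reservoir state update (a simplified variant that omits the output-feedback path; the sparsity level, spectral radius and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 3, 100

# random, fixed (non-trainable) input and internal weights
W_in = rng.uniform(-1, 1, (n_reservoir, n_inputs))
W = rng.uniform(-1, 1, (n_reservoir, n_reservoir))
W[rng.random((n_reservoir, n_reservoir)) > 0.1] = 0.0   # sparse connectivity
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()            # spectral radius < 1

def reservoir_states(X):
    # compute the reservoir activation for every time step, in time order
    h = np.zeros(n_reservoir)
    states = []
    for x_t in X:
        h = np.tanh(W_in @ x_t + W @ h)  # recurrent non-linear embedding
        states.append(h)
    return np.array(states)

H = reservoir_states(rng.standard_normal((50, n_inputs)))  # shape (50, 100)
```

The trainable Readout is then fitted on these states (after dimension reduction), not on the raw time series.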
Dimension Reduction
By choosing the right dimension reduction it is possible to reduce the execution time without lowering the accuracy
The Figure shows how training time and average classification accuracy vary with respect to the subspace dimension D after dimension reduction, for a particular experiment
Training time increases approximately linearly with D
Accuracy stops growing when D = 75
In this case the best value for the subspace dimension is 75
Implementation and Hyperparameters
A full implementation of Echo State Networks in Python is available on GitHub at this link:
https://github.com/FilippoMB/Reservoir-Computing-framework-for-multivariate-time-series-classification/blob/master/README.md
The code uses the libraries Scikit-learn and SciPy.
The main class RC_classifier, contained in the file modules.py, allows you to build, train and test an Echo State Network classifier
The most important hyperparameters of the Reservoir are:
the number of neurons in the Reservoir
the percentage of nonzero connection weights
the largest eigenvalue of the reservoir matrix of connection weights
The most important hyperparameters of the other layers are:
the algorithm for the Dimension Reduction Layer
the subspace dimension after the Dimension Reduction Layer
the type of Readout used for classification
the number of epochs
The structure of the code that implements training and use of the model is very similar to that described for Convolutional Neural Networks
Conclusions
Convolutional Neural Networks are the most popular Deep Learning technique for Time Series Classification
The main difficulties in using Convolutional Neural Networks:
The length of the time series can slow down training
Results can be less accurate than expected with chaotic input time series
Results can be less accurate than expected with input time series in which the same relevant feature can have different sizes
To solve these problems, InceptionTime and Echo State Networks perform better than the other proposed architectures
InceptionTime:
speeds up the training process using an efficient dimension reduction (the Bottleneck Layer)
performs really well in handling input time series in which the same relevant feature can have different sizes
Echo State Networks:
speed up the training process, since they are very sparsely connected and most of their weights are fixed a priori
are really helpful for handling chaotic input time series
In conclusion, high accuracy and high scalability make these new architectures perfect candidates for product development
Bibliography
Filippo Maria Bianchi, Simone Scardapane, Sigurd Løkse, Robert Jenssen. Reservoir computing approaches for representation and classification of multivariate time series.
Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F. Schmidt, Jonathan Weber, Geoffrey I. Webb, Lhassane Idoumghar, Pierre-Alain Muller, François Petitjean. InceptionTime: Finding AlexNet for Time Series Classification.