This is a single-day course that gives the learner hands-on experience with the basics of deep learning: the first half builds a network using Python/NumPy only, and in the second half we build a more advanced network using TensorFlow/Keras.
At the end you will find a list of useful pointers for continuing.
course git: https://gitlab.com/eshlomo/EazyDnn
2. About me
Haifa IoT Ignition Lab and IPP (Intel Ingenuity Partnership Program) tech lead.
Intel Perceptual Computing.
Compute, cloud and embedded expert.
Maker and entrepreneur.
Focused on data science and machine learning in recent years.
3. Agenda
Let's talk some theory
Let's define a problem
Time to code our network
Meet the pro tools
Time for fancier networks
4. By the end of the day…
You will have:
• Some intuition about what DL is and what you can use it for.
• An understanding of the mechanics behind deep learning.
• A basic feel for the concepts and for how DL works.
• Some hands-on experience with well-known tools.
• A list of pointers to continue your learning and experimenting.
You will not have:
• Practical experience in solving problems using DL
• An understanding of the different types of networks and their usage
• The math skills required to be an expert.
5. We are going to work in try / catch-up mode
• Along the way we have exercises, and you will get time to try them.
• Usually the next slide will contain the solution.
• So for every task:
• Try
• Catch up once the solution is on the board; focus on understanding the solution
• Make sure you have it working; each step is required for the one after it.
7. Where we are from a technology-timeline perspective
Assembly → C (compiler) → C++ (OOP) → Java (managed) → Python (runtime) → model protos → high level (Keras) → ???? ???? ????
8. Deep learning – basic anatomy
• Data driven: we train a model
• Input, output and hidden neurons: input layer → hidden layer(s) → output layer
• Deep learning = many hidden (deep) layers
9. The essence of deep learning
[Diagram: inputs $X_i$, outputs $Y_i$, weight matrices $W_{ij}^{(1)}$, $W_{ij}^{(2)}$]
$\hat{Y} = f(X) = WX + b$
A deep network is essentially a function we train to detect some pattern; b (the bias) is omitted in this drawing.
Why the sudden success?
• A lot of data
• A lot of compute
• Improved networks
10. Before we start, some math…
• As data science becomes part of every business, math is gaining extra popularity.
• A question to ask yourself if you wish to go deeper into the field: do I want to, and can I, refresh and extend my math skills?
• For our basic deep learning course we need some:
• Algebra, mainly matrix/vector operations
• Calculus, mainly derivatives
• You can never get enough statistics in data science; go over variance, mean, distributions, probabilities
• Python
11. Some math references to get started with
https://www.youtube.com/watch?v=K5BLNZw7UeU Matrix operations
https://www.youtube.com/watch?v=kuixY2bCc_0 Multiplying matrices
https://www.youtube.com/watch?v=rAof9Ld5sOg Derivatives
https://www.youtube.com/watch?v=TUJgZ4UDY2g The chain rule
https://www.youtube.com/watch?v=ZkjP5RJLQF4 Linear regression
https://www.youtube.com/watch?v=_Po-xZJflPM Logistic regression
https://www.youtube.com/watch?v=Y4lTTHua0TE Mean, Variance,…
12. Let's practice some basics
• We are working with Python 3.5. You are encouraged to work with the conda package manager, but pip is OK as well.
• NumPy is THE math-operations package for Python; we will use it to play with matrices. Install it.
• Let's create 5 random normal numbers, making sure NumPy is good to go.
• Visualization is very important, both in general and in this course. Install matplotlib.
• Visualize 100 random numbers like the ones in the example above.
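A minimal sketch of these warm-up steps (assuming NumPy and matplotlib were installed with conda or pip):

import numpy as np
import matplotlib.pyplot as plt

# 5 random normal numbers - a sanity check that NumPy is good to go
print(np.random.randn(5))

# visualize 100 random normal numbers
plt.plot(np.random.randn(100))
plt.show()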
13. Normal random
We got 100 normally distributed numbers; let's create a histogram of them.
By default our normal distribution has mean 0 and variance 1: $X \sim N(\mu, \sigma^2)$, where $\mu$ is the mean and $\sigma^2$ the variance.
Exercise: create two 10x10 matrices, A ~ N(0, 1) and B ~ N(3, 16), and plot their histograms.
Hint: flatten each matrix before passing it to the histogram.
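One possible solution sketch. Note that np.random.normal takes the standard deviation, not the variance, so N(3, 16) means scale=4:

import numpy as np
import matplotlib.pyplot as plt

A = np.random.normal(loc=0, scale=1, size=(10, 10))  # A ~ N(0, 1)
B = np.random.normal(loc=3, scale=4, size=(10, 10))  # B ~ N(3, 16): variance 16 -> std 4
plt.hist(A.flatten(), bins=20, alpha=0.5, label='A ~ N(0, 1)')
plt.hist(B.flatten(), bins=20, alpha=0.5, label='B ~ N(3, 16)')
plt.legend()
plt.show()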
14. Normal random
You can get from the standard normal, $Z \sim N(0, 1^2)$, to any other normal distribution (shift by the mean, scale by the standard deviation).
Exercise: code a function that multiplies two matrices explicitly. You can assume the inputs are ndarrays; don't use NumPy's matrix-multiplication operator.
def mul_matrix(a, b):
    pass
Hint: a for loop goes over rows in NumPy; matrix.T is the transpose.
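A possible solution sketch, multiplying explicitly with loops (checked against NumPy's own operator at the end):

import numpy as np

def mul_matrix(a, b):
    # result[i, j] = sum over k of a[i, k] * b[k, j]
    rows, inner = a.shape
    assert b.shape[0] == inner, 'inner dimensions must match'
    cols = b.shape[1]
    result = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                result[i, j] += a[i, k] * b[k, j]
    return result

a, b = np.random.randn(3, 4), np.random.randn(4, 2)
assert np.allclose(mul_matrix(a, b), np.dot(a, b))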
16. Neural networks – background and inspiration
It is pretty common to compare neural networks to how our brain works:
• It couples well with the term AI
• It has some sense in it, as many different studies show; yet we are quite far from really understanding how the brain works.
[Diagram: inputs $X_1, X_2, X_3$ weighted by $W_1, W_2, W_3$, summed as $\sum_{k=0}^{n} W_k X_k$ and passed through $f(x)$]
17. Artificial neural networks
Output $= f\left(\sum_{k=0}^{n} W_k X_k\right)$, where:
• $WX$ – inputs multiplied by weights
• $f(x)$ is an activation function
Common activation functions: sigmoid, relu, tanh, linear, …
1. Code the sigmoid function. Use NumPy; z can be a vector:
def sigmoid(z):
    …
2. Plot 100 points of your sigmoid.
Hint: plt.plot(X, Y, 'co') draws points only.
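One way to solve both exercises (a sketch):

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    # elementwise, so z can be a scalar, a vector or a matrix
    return 1.0 / (1.0 + np.exp(-z))

X = np.linspace(-6, 6, 100)
plt.plot(X, sigmoid(X), 'co')  # 'co' draws cyan points only
plt.show()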
18. A synthetic problem for our neural network
One of the biggest challenges in deep learning is data, and a lot of it.
We will sidestep this problem with a synthetic one: we will model a predefined function.
We usually divide our dataset into (at least) two groups:
• Training, ~70% of our data.
• Test, ~30% of our data.
In real-life cases you are likely to have a validation set as well.
Our problem is to predict a function's behavior given points from that function, later testing our model against newly generated points from the function.
Let's see an example.
19. Utils.py
You got the utils.py module; it contains some helper functions.
Git repo: https://gitlab.com/eshlomo/EazyDnn , utils.py under base_network.
Let's generate a pattern:
• Signature: the function is given a range, a number of samples and a generator, which is the function itself.
• Generate 100 samples of $f(x) = x^2$ between -1 and 1
• Plot the generated function
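The real generator ships in utils.py; if you want a feel for what such a helper could look like, here is a hypothetical stand-in (the actual signature in the repo may differ):

import numpy as np
import matplotlib.pyplot as plt

def generate_pattern(start, stop, n_samples, generator):
    # hypothetical stand-in for the utils.py helper:
    # sample the generator function over [start, stop]
    X = np.linspace(start, stop, n_samples)
    return X, generator(X)

X, Y = generate_pattern(-1, 1, 100, lambda x: x ** 2)  # 100 samples of f(x) = x^2
plt.plot(X, Y, 'co')
plt.show()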
20. Before we go into our network
• This is a lot of info to push into a short time.
• You will likely not be able to follow every detail in real time; to really feel you've got it, you need to train and train and…
• We are going through these details to give you a solid base for self-learning.
• Don't get too alarmed by the math.
• Feel free to contact me.
21. Let's model a line
Previous ex. solution: a plot of $f(x) = x^2$.
We will create the following network: an input $x$, a hidden layer, and an output $\hat{f}(x)$; the target function is $f(x) = x$.
[Diagram: hidden-layer weights $w_{11}^{(1)}, w_{12}^{(1)}, w_{13}^{(1)}$ and output weights $w_{11}^{(2)}, w_{21}^{(2)}$, grouped as vectors $w^{(1)}$ and $w^{(2)}$]
• Generate 100 samples of $f(x) = x$ between 0 and 1
• Plot the generated function
22. Network basics
We are going to train our network to estimate our line; in other words:
• We are going to find the best vectors $w^{(1)}, w^{(2)}$ that make our network model our line.
• Training is an iterative process in which, on every iteration, we reduce our model error by a small change to our weight vectors.
• For that purpose we define a cost (error) function on our model.
Previous ex. solution: a plot of $f(x) = x$.
23. Cost function
• Let's mark our model output as $\hat{Y}$, and our real output as $Y$.
• We use the quadratic cost (marked $J$), also known as mean squared error or sum squared error (it is also the maximum-likelihood cost under Gaussian noise):
$\mathrm{Error} = J = \frac{1}{2}(Y - \hat{Y})^2$
• Keep in mind that we know the real output; we have training data == ground truth == annotated data.
• Since we want to minimize our error, we move our weights against the derivative at every iteration.
• This is similar to finding the minimum of a function in calculus, only done numerically. The process is called gradient descent.
24. Gradient descent
A process in which on every iteration we:
• Predict our output – the forward pass
• Calculate our error using our cost function (prediction vs. ground truth)
• Calculate the derivative of the error with respect to our weights
• Update each weight by a small Δ opposite to the gradient direction (minimize) – the backward pass
• This Δ is set by our learning rate
Exercise: let's start creating our network. Create a class that will manage it:
• 3 layers, their sizes given as constructor parameters
• 2 weight matrices
• Init all weights with the standard random normal
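A minimal sketch of such a class, under my own naming (the course repo's version may differ):

import numpy as np

class Network(object):
    def __init__(self, input_size, hidden_size, output_size):
        # 3 layer sizes as constructor parameters, 2 weight matrices
        # initialized from the standard normal distribution
        self.W1 = np.random.randn(input_size, hidden_size)
        self.W2 = np.random.randn(hidden_size, output_size)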
25. Our forward pass
[Diagram: inputs $X_1, X_2, X_3$, weights $W_1, W_2, W_3$, the sum $\sum_{k=0}^{n} W_k X_k$ passed through a sigmoid; intermediate values $Z^{(2)}, a^{(2)}$]
• Add a method to our class, called forward.
• This method will calculate $\hat{Y}$, our model's predicted output.
• Use the following naming conventions:
• $Z^{(n)} = W^{(n-1)} X$
• $a^{(n)} = \mathrm{sigmoid}(Z^{(n)}) = f_a(Z^{(n)})$
• forward will return our model output, AKA the sigmoid of the last layer's activation sum: $f_a(Z^{(3)})$.
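Continuing the class sketch from slide 24 (a sketch with inputs as rows, so we multiply by the weight matrices on the right; it reuses sigmoid from slide 17):

    def forward(self, X):
        # Z(2) = X W(1), a(2) = sigmoid(Z(2))
        self.Z2 = np.dot(X, self.W1)
        self.a2 = sigmoid(self.Z2)
        # Z(3) = a(2) W(2); the model output is sigmoid(Z(3))
        self.Z3 = np.dot(self.a2, self.W2)
        return sigmoid(self.Z3)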
26. Let's add our cost function
• Add a method to our class, called cost.
• This method will calculate our cost on every iteration.
• It gets our input and output as parameters and returns the cost $J = \frac{1}{2}(Y - \hat{Y})^2$.
• Keep in mind that $\hat{Y} = \mathrm{forward}(X)$.
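And the cost method, continuing the same sketch:

    def cost(self, X, Y):
        # quadratic cost J = 1/2 * sum((Y - Y_hat)^2), where Y_hat = forward(X)
        Y_hat = self.forward(X)
        return 0.5 * np.sum((Y - Y_hat) ** 2)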
27. Backpropagation
Once we have the cost (error) we want to calculate the derivative of the cost with respect to each weight.
Remember – we want to move each weight opposite to the direction of its error.
We want to minimize $J = \frac{1}{2}(Y - \hat{Y})^2$, where $\hat{Y} = f(WX)$, i.e. $J(W) = \frac{1}{2}(Y - f(WX))^2$.
So we need to calculate $\frac{\partial J}{\partial W}$.
We have a composition of parameters, so we need to use the chain rule.
28. The chain rule
• https://en.wikipedia.org/wiki/Chain_rule
• A way to compute the derivative of a function composition.
We want to calculate $\frac{\partial J}{\partial W}$; let's do some chaining. The gradient splits per layer into $\frac{\partial J}{\partial W^{(1)}}$ and $\frac{\partial J}{\partial W^{(2)}}$.
Let's start with $\frac{\partial J}{\partial W^{(2)}}$:
$\frac{\partial}{\partial W^{(2)}} \frac{1}{2}(Y - \hat{Y})^2 = -(Y - \hat{Y}) \frac{\partial \hat{Y}}{\partial W^{(2)}}$
$\hat{Y} = f_a(Z^{(3)})$, so $\frac{\partial \hat{Y}}{\partial W^{(2)}} = \frac{\partial \hat{Y}}{\partial Z^{(3)}} \frac{\partial Z^{(3)}}{\partial W^{(2)}}$ – we need to calculate the sigmoid derivative.
$Z^{(3)} = a^{(2)} W^{(2)}$, so $\frac{\partial Z^{(3)}}{\partial W^{(2)}} = a^{(2)}$ – a linear propagation of the error per weight.
29. Some derivatives
Code a function called sigmoidPrime that calculates the sigmoid derivative for a matrix Z.
Code a function called costPrime that calculates $\frac{\partial J}{\partial W^{(1)}}$ and $\frac{\partial J}{\partial W^{(2)}}$.
Sigmoid derivative: $\frac{d}{dz}\,\frac{1}{1+e^{-z}} = \frac{e^{-z}}{(1+e^{-z})^2}$
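A sketch of the two derivatives, continuing the same class sketch (it follows the chain-rule expressions from slide 28; the repo's actual code may differ):

def sigmoidPrime(z):
    # d/dz 1/(1 + e^-z) = e^-z / (1 + e^-z)^2
    return np.exp(-z) / (1.0 + np.exp(-z)) ** 2

# as a method on the Network sketch:
    def costPrime(self, X, Y):
        Y_hat = self.forward(X)
        # output-layer error, scaled by the sigmoid slope at Z(3)
        delta3 = -(Y - Y_hat) * sigmoidPrime(self.Z3)
        dJdW2 = np.dot(self.a2.T, delta3)
        # propagate the error back through W(2) and the hidden sigmoid
        delta2 = np.dot(delta3, self.W2.T) * sigmoidPrime(self.Z2)
        dJdW1 = np.dot(X.T, delta2)
        return dJdW1, dJdW2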
31. Time for some training
Go over the methods you have just added – can you tell what they are doing?
In utils.py there is a class, linear_trainer – what does it do?
In utils.py there is a method, test_line – what does it do?
Create a network instance plus training and test data, and train your network using the function test_line.
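The real linear_trainer lives in utils.py; a hypothetical minimal version, just to make the gradient-descent loop concrete, could look like this:

def train(net, X, Y, iterations=100, learning_rate=0.01):
    # plain gradient descent: step each weight matrix against its gradient
    for _ in range(iterations):
        dJdW1, dJdW2 = net.costPrime(X, Y)
        net.W1 -= learning_rate * dJdW1
        net.W2 -= learning_rate * dJdW2
    return net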
32. Let's run some more
The default number of training iterations is 100; let's make it 10,000.
Much better. Now let's try 100K. Hmm… same result. Ideas?
33. When the error is too high
We usually tend to (in a few minutes we will get into the why):
• Train for more time
• Get more data
• Get a bigger (deeper) model, which usually calls for more data
Do the following:
• Install scipy
• Increase your hidden layer to size 30
• Replace the linear trainer with BFGS_trainer inside the method test_line – find it in utils.py
34. Optimization
Gradient descent looks for a minimum and can suffer from these problems:
• Getting stuck in a local minimum
• Getting stuck on a plateau
• A learning rate too big to reach the minimum, bouncing around…
Read more at http://sebastianruder.com/optimizing-gradient-descent/
35. The bias-variance tradeoff
We can look at our model error as follows:
[Diagram: total error as the combination of noise and model error]
Our error usually comes from a combination of these two. These are all equivalent:
• High variance = modeling noise = not enough data = model too big = overfit
• High bias = model too simple = underfit
36. Bias / Variance
[Three example fits over the same data: a good model, high bias, and high variance]
How can you tell which one of those you have?
37. Rules of thumb regarding bias/variance
• Good accuracy on training and test → good model
• Good accuracy on training, poor on test → overfit
• Poor on both → underfit
Exercise: put our training iterations back to 1000, still with BFGS.
Generate the training data as before on the range 0-1, but the test data on the range 0-2.
How does it look? Can you guess why?
38. Deep neural nets have limits
• In general the network is trained on bounded data
• It is likely not to generalize well out of bounds
• So you need your data sets to cover the whole data range, OR
• Use a more suitable model; for curve prediction (time series) an RNN might have been a better choice here
Exercise:
• Try to fit the following curve: $2.5\,e^{-x/2}\cos(\pi x)$, range 0-10
• Install TensorFlow (CPU version; make sure you are on Python 3.5.x)
• Install Keras
• Create our model using Keras (Google time…):
• A Sequential model
• Dense layers provide the weighted sums
• Use relu activation
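One possible Keras solution sketch (layer sizes, optimizer and epoch count are my guesses; on recent versions, import from tensorflow.keras instead):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# synthetic data: f(x) = 2.5 * exp(-x/2) * cos(pi * x) on the range 0-10
X = np.linspace(0, 10, 1000).reshape(-1, 1)
Y = 2.5 * np.exp(-X / 2) * np.cos(np.pi * X)

model = Sequential()
model.add(Dense(64, input_dim=1, activation='relu'))  # hidden layers use relu
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='linear'))              # linear output for regression
model.compile(optimizer='adam', loss='mse')
model.fit(X, Y, epochs=200, batch_size=32, verbose=0)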
39. Next steps if you wish to go deeper
• CNNs and Caffe: http://adilmoujahid.com/posts/2016/06/introduction-deep-learning-python-caffe/
• Udacity deep learning course (TF examples walkthrough)
• Andrew Ng's ML course on Coursera
• Geoff Hinton's Neural Networks course on Coursera
• Stanford CS231n on YouTube
• RNNs: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
• A quick tour of different networks, a great YouTube series:
• https://www.youtube.com/playlist?list=PLjJh1vlSEYgvGod9wWiydumYl8hOXixNu