Introduction to Deep Learning, Keras, and TensorFlow (Sri Ambati)
This meetup was recorded in San Francisco on Jan 9, 2019.
Video recording of the session can be viewed here: https://youtu.be/yG1UJEzpJ64
Description:
This fast-paced session starts with a simple yet complete neural network (no frameworks), followed by an overview of activation functions, cost functions, backpropagation, and then a quick dive into CNNs. Next, we'll create a neural network using Keras, followed by an introduction to TensorFlow and TensorBoard. For best results, familiarity with basic vectors and matrices, inner (aka "dot") products of vectors, and rudimentary Python is definitely helpful. If time permits, we'll look at the UAT, CLT, and the Fixed Point Theorem. (Bonus points if you know Zorn's Lemma, the Well-Ordering Theorem, and the Axiom of Choice.)
Oswald's Bio:
Oswald Campesato is an education junkie: a former Ph.D. Candidate in Mathematics (ABD), with multiple Master's and 2 Bachelor's degrees. In a previous career, he worked in South America, Italy, and the French Riviera, which enabled him to travel to 70 countries throughout the world.
He has worked in American and Japanese corporations and start-ups, as C/C++ and Java developer to CTO. He works in the web and mobile space, conducts training sessions in Android, Java, Angular 2, and ReactJS, and he writes graphics code for fun. He's comfortable in four languages and aspires to become proficient in Japanese, ideally sometime in the next two decades. He enjoys collaborating with people who share his passion for learning the latest cool stuff, and he's currently working on his 15th book, which is about Angular 2.
An analysis of TensorFlow
- What is TensorFlow?
- Background
- DistBelief
- Tutorial - Logistic regression
- TensorFlow - under the hood
- Tutorial - CNN, RNN
- Benchmarks
- Other open-source frameworks
- If you are considering TensorFlow
- Installation
- References
Tijmen Blankenvoort, co-founder of Scyfer BV, presentation at the Artificial Intelligence Meetup, 15-1-2014: Introduction to Neural Networks and Deep Learning.
Summary:
There are three parts in this presentation.
A. Why do we need Convolutional Neural Networks?
- Problems we face today
- Solutions to those problems
B. LeNet Overview
- The origin of LeNet
- Results after using the LeNet model
C. LeNet Techniques
- LeNet structure
- The function of every layer
At the GitHub link below is a repository where I rebuilt LeNet without any deep learning package. I hope it helps you better understand the basics of Convolutional Neural Networks.
GitHub link: https://github.com/HiCraigChen/LeNet
LinkedIn: https://www.linkedin.com/in/YungKueiChen
What is Deep Learning | Deep Learning Simplified | Deep Learning Tutorial | E... (Edureka!)
This Edureka "What is Deep Learning" video will help you understand the relationship between Deep Learning, Machine Learning and Artificial Intelligence, and how Deep Learning came into the picture. The tutorial discusses Artificial Intelligence, Machine Learning and its limitations, how Deep Learning overcame those limitations, and different real-life applications of Deep Learning.
Below are the topics covered in this tutorial:
1. What Is Artificial Intelligence?
2. What Is Machine Learning?
3. Limitations Of Machine Learning
4. Deep Learning To The Rescue
5. What Is Deep Learning?
6. Deep Learning Applications
For structured training on Deep Learning, you can find complete details of our Deep Learning with TensorFlow course here: https://goo.gl/VeYiQZ
A comprehensive tutorial on Convolutional Neural Networks (CNNs) that covers the motivation behind CNNs and Deep Learning in general, followed by a description of the various components of a typical CNN layer. It explains the theory behind the different variants used in practice and also gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
Zaikun Xu from the Università della Svizzera Italiana presented this deck at the 2016 Switzerland HPC Conference.
“In the past decade, deep learning, as a life-changing technology, has achieved huge success on various tasks, including image recognition, speech recognition, machine translation, etc. Pioneered by several research groups led by Geoffrey Hinton (U Toronto), Yoshua Bengio (U Montreal), Yann LeCun (NYU), and Juergen Schmidhuber (IDSIA, Switzerland), deep learning is a renaissance of neural networks in the Big Data era.
A neural network is a learning algorithm consisting of an input layer, hidden layers and an output layer, where each circle represents a neuron and each arrow connection carries a weight. A neural network learns by measuring how much the output layer's output differs from the ground truth, then calculating the gradients of this discrepancy w.r.t. the weights and adjusting the weights accordingly. Ideally, it finds weights that map input X to target y with error as low as possible.”
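The learning loop that quote describes, as a minimal numpy sketch (illustrative only, not code from the talk): run a forward pass, measure how the output differs from the ground truth, compute gradients of that discrepancy w.r.t. the weights, and adjust the weights.

import numpy as np

# Toy data: 100 samples, 3 features; label is whether the features sum > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# One hidden layer (tanh) and a sigmoid output.
W1, b1 = 0.1 * rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = 0.1 * rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for step in range(1000):
    h = np.tanh(X @ W1 + b1)                  # forward pass: hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # forward pass: sigmoid output
    loss = np.mean((p - y) ** 2)              # discrepancy vs. ground truth

    dp = 2.0 * (p - y) / len(X)               # backprop: gradient of the loss
    dz2 = dp * p * (1.0 - p)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1            # adjust weights down the gradient
    W2 -= lr * dW2; b2 -= lr * db2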
Watch the video presentation: http://insidehpc.com/2016/03/deep-learning/
See more talks in the Swiss Conference Video Gallery: http://insidehpc.com/2016-swiss-hpc-conference/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Artificial Intelligence, Machine Learning, Deep Learning
The 5 myths of AI
Deep Learning in action
Basics of Deep Learning
NVIDIA Volta V100 and AWS P3
Applying your Convolutional Neural Networks (Databricks)
Part 3 of the Deep Learning Fundamentals Series, this session starts with a quick primer on activation functions, learning rates, optimizers, and backpropagation. Then it dives deeper into convolutional neural networks, discussing convolutions (including kernels, local connectivity, strides, padding, and activation functions), pooling (or subsampling to reduce the image size), and fully connected layers. The session also provides a high-level overview of some CNN architectures. The demos included in these slides run on Keras with the TensorFlow backend on Databricks.
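For readers who want to see what such a network looks like in code, here is a minimal Keras-with-TensorFlow-backend sketch of the pieces named above (an illustration, not the session's actual Databricks demo):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                   # e.g. grayscale images
    layers.Conv2D(32, kernel_size=3, strides=1,        # convolution: kernels,
                  padding="same", activation="relu"),  # strides, padding
    layers.MaxPooling2D(pool_size=2),                  # pooling: subsample the image
    layers.Conv2D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),              # fully connected layer
    layers.Dense(10, activation="softmax"),            # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])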
"Mainstream access to deep learning technology will greatly impact most industries over the next three to five years."
So what exactly is deep learning? How does it work? And most importantly, why should you even care?
Deep learning is used in the research community and in industry to help solve many big data problems such as computer vision, speech recognition, and natural language processing.
Practical examples include:
-Vehicle, pedestrian and landmark identification for driver assistance
-Image recognition
-Speech recognition and translation
-Natural language processing
-Life sciences
What You Will Learn:
-Understand the intuition behind Artificial Neural Networks
-Apply Artificial Neural Networks in practice
-Understand the intuition behind Convolutional Neural Networks
-Apply Convolutional Neural Networks in practice
-Understand the intuition behind Recurrent Neural Networks
-Apply Recurrent Neural Networks in practice
-Understand the intuition behind Self-Organizing Maps
-Apply Self-Organizing Maps in practice
-Understand the intuition behind Boltzmann Machines
-Apply Boltzmann Machines in practice
-Understand the intuition behind AutoEncoders
-Apply AutoEncoders in practice
Despite the growing number of deep learning practitioners and researchers, many of them do not use GPUs, which can lead to long training/evaluation cycles and impractical research.
In his talk, Lior shares how to get started with GPUs and some of the best practices that helped him during research and work. The talk is for everyone who works with machine learning (deep learning experience is NOT mandatory!). It covers the very basics of how GPUs work, CUDA drivers, IDE configuration, training, inference, and multi-GPU training.
It was about 30 years ago that AI was not only a topic for science-fiction writers but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest – the so-called AI winter. However, the last 3 years have changed everything – again. Deep learning, a machine learning technique inspired by the human brain, has successfully crushed one benchmark after another, and tech companies like Google, Facebook and Microsoft have started to invest billions in AI research. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – Physicist).
What sparked this new hype? How is Deep Learning different from previous approaches? Are the advancing AI technologies really a threat to humanity? Let’s look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why "Deep Learning is probably one of the most exciting things that is happening in the computer industry” (Jen-Hsun Huang – CEO NVIDIA).
Either a new AI “winter is coming” (Ned Stark – House Stark) or this new wave of innovation might turn out to be the “last invention humans ever need to make” (Nick Bostrom – AI philosopher). Or maybe it’s just another great technology helping humans achieve more.
Slides for a talk about Graph Neural Network architectures; the overview is taken from the very good survey paper by Zonghan Wu et al. (https://arxiv.org/pdf/1901.00596.pdf).
This presentation contains some interesting information about PyTorch and its capabilities.
It includes:
What PyTorch is & its history
PyTorch Tensors
PyTorch Modules
PyTorch vs. TensorFlow
PyTorch Benefits
PyTorch Use-Cases
PyTorch: Governing Bodies & Board Members
ML, AI, DL, DS, AA - are all of these the same, or are they different? So much confusion. Learn how ML can be applied (with insurance as a case study).
AI is considered the biggest commercial opportunity in the changing economy; it is expected to generate $15.7T by 2030 (according to PwC). In this talk I explain some of the top business opportunities for cloud sales professionals in the field of deep learning and artificial intelligence, address the most important principles of deep learning techniques, and finally work through two deep learning problems (image recognition with CNNs and sentiment analysis with RNNs) using 5 different deep learning frameworks.
14 Startups Leading the Artificial Intelligence (AI) Revolution (NVIDIA)
Learn how these top 14 startups around the globe are using artificial intelligence (AI) and Deep Learning to impact key industries and humanity-at-large.
Transform your Business with AI, Deep Learning and Machine Learning (Sri Ambati)
Video: https://www.youtube.com/watch?v=R3IXd1iwqjc
Meetup: http://www.meetup.com/SF-Bay-ACM/events/231709894/
In this talk, Arno Candel presents a brief history of AI and how Deep Learning and Machine Learning techniques are transforming our everyday lives. Arno will introduce H2O, a scalable open-source machine learning platform, and show live demos on how to train sophisticated machine learning models on large distributed datasets. He will show how data scientists and application developers can use the Flow GUI, R, Python, Java, Scala, JavaScript and JSON to build smarter applications, and how to take them to production. He will present customer use cases from verticals including insurance, fraud, churn, fintech, and marketing.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Deep Learning - The Past, Present and Future of Artificial Intelligence (Lukas Masuch)
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – Physicist).
What sparked this new hype? How is Deep Learning different from previous approaches? Let’s look behind the curtain and unravel the reality. This talk will introduce the core concept of deep learning, explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why “deep learning is probably one of the most exciting things that is happening in the computer industry“ (Jen-Hsun Huang – CEO NVIDIA).
Georgia Tech cse6242 - Intro to Deep Learning and DL4J (Josh Patterson)
Introduction to deep learning and DL4J - http://deeplearning4j.org/ - a guest lecture by Josh Patterson at Georgia Tech for the cse6242 graduate class.
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem... (StampedeCon)
In this session, we’ll discuss approaches for applying convolutional neural networks to novel computer vision problems, even without having millions of images of your own. Pretrained models and generic image datasets from Google, Kaggle, universities, and other places can be leveraged and adapted to solve industry- and business-specific problems. We’ll discuss the approaches of transfer learning and fine-tuning to help anyone get started on using deep learning to get cutting-edge results on their computer vision problems.
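As a concrete illustration of that recipe, here is a hedged Keras sketch (an assumed setup, not the session's code): load a model pretrained on a generic image dataset (ImageNet here), freeze its convolutional base, and train only a small new head on your own images; fine-tuning would later unfreeze the top blocks with a much smaller learning rate.

from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(weights="imagenet",    # pretrained generic features
                                include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False                                 # freeze the pretrained base

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),             # hypothetical: 5 domain classes
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])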
This is a slide deck from a presentation that my colleague Shirin Glander (https://www.slideshare.net/ShirinGlander/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, as the two slide decks look quite different ... :)
For the sake of simplicity and completeness, I just copied the two slide decks together. As I did the "surrounding" part, I added Shirin's part at the place where she took over and then added my concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)
The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the "how" part starts.
The first part of the "how" is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand.
After that, the second part dives deeper into the question of how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding, step by step, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it.
As always, the voice track of the presentation is missing. I hope that the slides are of some use to you, though.
This is a slide deck from a presentation that my colleague Uwe Friedrichsen (https://www.slideshare.net/ufried/) and I did together. As we created our respective parts of the presentation on our own, it is quite easy to figure out who did which part, as the two slide decks look quite different ... :)
For the sake of simplicity and completeness, Uwe copied the two slide decks together. As he did the "surrounding" part, he added my part at the place where I took over and then added concluding slides at the end. Well, I'm sure you will figure it out easily ... ;)
The presentation was intended to be an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follows before the "how" part starts.
The first part of the "how" is some theory of DL, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the broadness of the topic on the other hand.
After that, the second part dives deeper into the question of how to actually implement DL networks. This part starts with coding it all on your own and then moves on to less coding, step by step, depending on where you want to start.
The presentation ends with some pitfalls and challenges that you should have in mind if you want to dive deeper into DL - plus the invitation to become part of it.
As always, the voice track of the presentation is missing. I hope that the slides are of some use to you, though.
Separating Hype from Reality in Deep Learning with Sameer Farooqui (Databricks)
Deep Learning is all the rage these days, but where does the reality of what Deep Learning can do end and the media hype begin? In this talk, I will dispel common myths about Deep Learning that are not necessarily true and help you decide whether you should practically use Deep Learning in your software stack.
I’ll begin with a technical overview of common neural network architectures like CNNs, RNNs, GANs and their common use cases like computer vision, language understanding or unsupervised machine learning. Then I’ll separate the hype from reality around questions like:
• When should you prefer traditional ML systems like scikit-learn or Spark.ML instead of Deep Learning?
• Do you no longer need to do careful feature extraction and standardization if using Deep Learning?
• Do you really need terabytes of data when training neural networks or can you ‘steal’ pre-trained lower layers from public models by using transfer learning?
• How do you decide which activation function (like ReLU, leaky ReLU, ELU, etc) or optimizer (like Momentum, AdaGrad, RMSProp, Adam, etc) to use in your neural network?
• Should you randomly initialize the weights in your network or use more advanced strategies like Xavier or He initialization?
• How easy is it to overfit/overtrain a neural network, and what are the common techniques to avoid overfitting (like l1/l2 regularization, dropout and early stopping)? A hedged sketch of a few of these knobs follows below.
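A minimal Keras sketch, purely illustrative (these are common defaults, not the talk's recommendations): He initialization paired with a ReLU activation, l2 regularization plus dropout, the Adam optimizer, and early stopping.

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(64, activation="relu",
                 kernel_initializer="he_normal",                   # He initialization
                 kernel_regularizer=keras.regularizers.l2(1e-4)),  # l2 penalty
    layers.Dropout(0.5),                                           # dropout regularization
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(),                   # a common default
              loss="binary_crossentropy")
early_stop = keras.callbacks.EarlyStopping(patience=5,             # early stopping
                                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, callbacks=[early_stop])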
How to win data science competitions with Deep Learning (Sri Ambati)
Note: Please download the slides first, otherwise some links won't work!
How to win Kaggle-style data science competitions and influence decisions with R, Deep Learning and H2O's fast algorithms.
We take a few public and Kaggle datasets and build models to win competitions on accuracy and scoring speed.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Synthetic dialogue generation with Deep Learning (S N)
A walkthrough of a Deep Learning-based technique for generating TV scripts using a Recurrent Neural Network. After being trained on a dataset, the model generates a completely new TV script for a scene. You will learn the concepts around RNNs, NLP and various deep learning techniques.
Technologies to be used:
Python 3, Jupyter, TensorFlow
Source code: https://github.com/syednasar/talks/tree/master/synthetic-dialog
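For orientation, a hedged sketch of the general technique (the talk's actual code is in the linked repo): a token-level RNN language model that predicts the next token; new script text is generated by repeatedly sampling from the softmax and feeding the result back in. vocab_size and seq_len are assumed values.

from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 5000, 40                       # assumptions for illustration
model = keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 128),               # token embeddings
    layers.LSTM(256),                                # recurrent layer
    layers.Dense(vocab_size, activation="softmax"),  # next-token distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training pairs: sliding windows of seq_len tokens -> the token that follows.
# Generation: seed a window, sample from the softmax, append, repeat.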
Speculations in anthropology and tech for an uncertain future (Roelof Pieters)
We live in uncertain times. Never before has a species had such a devastating impact on Earth. First through potential nuclear armageddon, now through human-made climate breakdown. Similarly, many innovations and technologies have made us more connected than ever, but we are equally more separate and disconnected. Finally, not only our natural climate but equally so our political and scientific climate are on fire. What to do?
It is exactly in these uncertain times, changing times, that we might dare to dream collectively, together, on how we might move beyond the current mess we have found ourselves in. It is time to stage the revolution and commit ourselves to a more equitable world that respects planetary boundaries, embraces biodiversity, energy, water, wellbeing, society, and culture.
In this talk I will look at the role that technology plays, and could play in, making such dreams a reality, and how new tools and paradigms developed under the moniker of the makers movement and maker spaces, open-source and low tech have the potential to remake the shaky ground (physical, conceptual — dare I say — ontological) on which we stand.
Can technology, AI, the internet, open-source software and hardware really be of much help here? And why are anthropologists and engineers uniquely positioned to help? Let’s find out!
https://www.anthtechconf.co.uk/keynote/roelof-pieters
Deep Neural Networks that talk (Back)… with style (Roelof Pieters)
Talk at Nuclai 2016 in Vienna
Can neural networks sing, dance, remix and rhyme? And most importantly, can they talk back? This talk will introduce Deep Neural Nets with textual and auditory understanding and some of the recent breakthroughs made in these fields. It will then show some of the exciting possibilities these technologies hold for "creative" use and explorations of human-machine interaction, where the main theorem is "augmentation, not automation".
http://events.nucl.ai/track/cognitive/#deep-neural-networks-that-talk-back-with-style
Explore Data: Data Science + Visualization (Roelof Pieters)
Talk on Data Visualization for Data Scientist at Stockholm NLP Meetup June 2015: http://www.meetup.com/Stockholm-Natural-Language-Processing-Meetup/events/222609869/
Video recording at https://www.youtube.com/watch?v=3Li_xIQ1K84
Talk given at PYCON Stockholm 2015
Intro to Deep Learning + taking a pretrained ImageNet network, extracting features, and training an RBM on top = 97% accuracy after 1 hour (!) of training (in the top 10% of the Kaggle cat vs. dog competition)
Learning to understand phrases by embedding the dictionary (Roelof Pieters)
A review of "Learning to Understand Phrases by Embedding the Dictionary" by Felix Hill, Kyunghyun Cho, Anna Korhonen, and Yoshua Bengio
at KTH's Deep Learning reading group:
www.csc.kth.se/cvap/cvg/rg/
Zero shot learning through cross-modal transfer (Roelof Pieters)
A review of the paper "Zero-Shot Learning Through Cross-Modal Transfer" by Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani, Christopher D. Manning, and Andrew Y. Ng
at KTH's Deep Learning reading group:
www.csc.kth.se/cvap/cvg/rg/
Visual-Semantic Embeddings: some thoughts on Language (Roelof Pieters)
Language technology is rapidly evolving. A resurgence in the use of distributed semantic representations and word embeddings, combined with the rise of deep neural networks has led to new approaches and new state of the art results in many natural language processing tasks. One such exciting - and most recent - trend can be seen in multimodal approaches fusing techniques and models of natural language processing (NLP) with that of computer vision.
The talk aims to give an overview of the NLP side of this trend. It starts with a short overview of the challenges in creating deep networks for language, what makes for a "good" language model, and the specific requirements of semantic word spaces for multi-modal embeddings.
Python for Image Understanding: Deep Learning with Convolutional Neural Nets
1. Python for Image Understanding: Deep Learning with Convolutional Neural Nets
Roelof Pieters
PhD candidate at KTH & Data Science consultant at Graph Technologies
@graphific
London 2015 - roelof@graph-technologies.com
3. A Definition
“Deep learning is a set of algorithms in machine learning that attempt to learn in multiple levels, corresponding to different levels of abstraction.” (much debated definition)
4. A typology
Deep learning is:
• A host of statistical machine learning techniques
• Enables the automatic learning of feature hierarchies
• Generally based on artificial neural networks
5. Old vs new school?
Manually designed features are often over-specified, incomplete, and take a long time to design and validate.
Learned features are easy to adapt, fast to learn.
Deep learning provides a very flexible, (possibly?) universal, learnable framework for representing world, visual and linguistic information.
Deep learning can learn unsupervised (from raw text/audio/images/whatever content) and supervised (with specific labels like positive/negative).
Summary by Richard Socher.
30. Python has a wide range of deep learning-related libraries available.
Deep Learning with Python
Low level:
• Theano (efficient GPU-powered math) - deeplearning.net/software/theano
High level:
• Caffe (computer-vision oriented DL framework, model zoo, prototxt model definitions; pythonification ongoing!) - caffe.berkeleyvision.org
• Pylearn2 (wrapper for Theano, YAML, experiment-oriented) - deeplearning.net/software/pylearn2
• Keras (Theano wrapper, models in Python code, abstracts Theano away) - keras.io
• Lasagne (Theano extension, models in Python code, Theano not hidden) - lasagne.readthedocs.org/en/latest
We will use Lasagne in our examples.
41. 1. Preprocess the data, the right way
Warning:
• compute preprocessing statistics on training data
• apply on all (training / validation / test) data
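In code, the warning amounts to this (a small numpy sketch with placeholder data):

import numpy as np

X_train = np.random.rand(1000, 3, 32, 32).astype("float32")  # placeholder data
X_val = np.random.rand(200, 3, 32, 32).astype("float32")

mean = X_train.mean(axis=0)        # statistics from the training data only
std = X_train.std(axis=0) + 1e-8

X_train = (X_train - mean) / std   # the same transform for every split;
X_val = (X_val - mean) / std       # never recompute stats on val/test data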
42. 1. Preprocess the data
2. Choose architecture
3. Train
4. Optimize/Regularize
5. Tips/Tricks
Training a (deep) Neural Network
43. • Deep Belief Network (DBN)
• Convolutional Net (CNN)
• Recurrent Net (RNN)
2. Choosing the right architecture
51. DrawCNN: visualizing the units' connections
Agrawal et al., "Analyzing the performance of multilayer neural networks for object recognition," ECCV 2014
Szegedy et al., "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013
Zeiler et al., "Visualizing and Understanding Convolutional Networks," ECCV 2014
53. 1. Preprocess the data
2. Choose architecture
3. Train (Code Finally!)
4. Optimize/Regularize
5. Tips/Tricks
Training a (deep) Neural Network
54. Training a (deep) Neural Network
[code slide: layer definitions and layer parameters]
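Since the slide shows a code screenshot, here is a hedged Lasagne sketch of what such layer definitions and parameters look like (the same idea, not the slide's exact code):

import theano.tensor as T
import lasagne
from lasagne import layers, nonlinearities

input_var = T.tensor4('inputs')
net = layers.InputLayer((None, 3, 32, 32), input_var=input_var)   # layer definition
net = layers.Conv2DLayer(net, num_filters=32, filter_size=(3, 3),
                         nonlinearity=nonlinearities.rectify)     # layer parameters
net = layers.MaxPool2DLayer(net, pool_size=(2, 2))
net = layers.DenseLayer(layers.dropout(net, p=0.5), num_units=256,
                        nonlinearity=nonlinearities.rectify)
net = layers.DenseLayer(net, num_units=10,
                        nonlinearity=nonlinearities.softmax)      # class probabilities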
71. Data Augmentation
http://benanne.github.io/2015/03/17/plankton.html
(realtime data augmentation at Kaggle’s #1 National Data Science Bowl ≋ Deep Sea ≋ team)
rotation: random with angle between 0° and 360° (uniform)
translation: random with shift between -10 and 10 pixels (uniform)
rescaling: random with scale factor between 1/1.6 and 1.6 (log-uniform)
flipping: yes or no (bernoulli)
shearing: random with angle between -20° and 20° (uniform)
stretching: random with stretch factor between 1/1.3 and 1.3 (log-uniform)
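A sketch of such realtime augmentation; the parameter ranges follow the slide, while the implementation details (scikit-image affine warps) are an assumption:

import numpy as np
from skimage import transform

def random_augment(img, rng=np.random):
    rotation = np.deg2rad(rng.uniform(0, 360))               # uniform rotation
    shear = np.deg2rad(rng.uniform(-20, 20))                 # uniform shearing
    log_zoom = rng.uniform(np.log(1 / 1.6), np.log(1.6))     # log-uniform rescaling
    log_stretch = rng.uniform(np.log(1 / 1.3), np.log(1.3))  # log-uniform stretching
    scale = (np.exp(log_zoom + log_stretch), np.exp(log_zoom - log_stretch))
    translation = rng.uniform(-10, 10, size=2)               # uniform translation
    tf = transform.AffineTransform(scale=scale, rotation=rotation,
                                   shear=shear, translation=translation)
    out = transform.warp(img, tf.inverse, mode='reflect')
    if rng.rand() < 0.5:                                     # bernoulli flipping
        out = out[:, ::-1]
    return out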
78. Batch Normalization as regularization
• Normalize the activations in each layer within a minibatch
• Learn the mean and variance of each layer as parameters
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", S. Ioffe and C. Szegedy (2015)
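In formula form (from the Ioffe & Szegedy paper), for each activation x_i in a minibatch B:

\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta

where \mu_B and \sigma_B^2 are the minibatch mean and variance, and \gamma, \beta are learned per-layer parameters that can recover the identity mapping if needed.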
79. Training a (deep) Neural Network
1. Preprocess the data
2. Choose architecture
3. Train
4. Optimize/Regularize
5. Further Tips & Tricks to improve Model Accuracy
Debug
80. Other “Tricks”
• Ensembles
• Finetuning a pre-trained/earlier-trained net
• Sticking extracted layer features into another classifier (i.e. an SVM)
82. Ensembles
• majority vote for hard predictions (i.e. classes)
• average vote for soft predictions (continuous scale)
• make sure classifiers are uncorrelated
• cross-validate ensemble weights (by grid search, or rank average)
• stacking
• blending
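A small numpy sketch of the soft-prediction case (illustrative; weight tuning by grid search or rank averaging is left out):

import numpy as np

def ensemble_predict(prob_list, weights=None):
    # prob_list: per-model arrays of shape (n_samples, n_classes)
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    avg = np.tensordot(w, probs, axes=1)        # weighted average of soft votes
    return avg.argmax(axis=1)                   # hard labels from averaged probs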
85-87. Ensembles
(10 similar nets with varying hyperparameters on the same tiny-imagenet dataset)
avg: 0.3647
predict by mean of all: 0.4244
leave out model9: 0.4259
91. Saliency Maps
K. Simonyan, A. Vedaldi, A. Zisserman, "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", ICLR Workshop 2014
first we predict on a pixel level
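A hedged Theano/Lasagne sketch of that step, reusing the net and input_var names from the earlier layer-definition sketch: take the score of the top class and differentiate it with respect to the input pixels; the gradient magnitude per pixel is the saliency map.

import theano
import theano.tensor as T
import lasagne

output = lasagne.layers.get_output(net, deterministic=True)
score = output[0, T.argmax(output[0])]         # scalar score of the predicted class
saliency = theano.grad(score, wrt=input_var)   # d(score) / d(input pixels)
saliency_fn = theano.function([input_var], abs(saliency).max(axis=1))  # per-pixel map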
92. Fooling ConvNets
Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint, 2013.
Nguyen, Anh, Jason Yosinski, and Jeff Clune. "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images." arXiv preprint.
then we do our “magic”
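And a hedged sketch of the “magic” itself, in the spirit of Szegedy et al.: nudge the pixels in the direction that increases the loss for the true label until the net misclassifies (again reusing net and input_var from the earlier sketch).

import numpy as np
import theano
import theano.tensor as T
import lasagne

target = T.ivector('target')
output = lasagne.layers.get_output(net, deterministic=True)
loss = lasagne.objectives.categorical_crossentropy(output, target).mean()
grad_fn = theano.function([input_var, target], theano.grad(loss, input_var))

def fool(image_batch, true_label, eps=0.01, steps=10):
    x = image_batch.copy()                       # shape (1, 3, 32, 32)
    for _ in range(steps):
        g = grad_fn(x, np.array([true_label], dtype='int32'))
        x += eps * np.sign(g)                    # small step that raises the loss
    return x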