Slides for the presentation "New Directions in Structured Thinking for Intelligence Analysis" at the Joint Australian Associations’ Security and Intelligence Seminar Program
Signposts for a Security Road Map
Talk given at PyCon Stockholm 2015
Intro to Deep Learning, plus a recipe: take a pretrained ImageNet network, extract features, and train an RBM on top = 97% accuracy after 1 hour (!) of training (top 10% in the Kaggle cats vs. dogs competition)
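As an illustrative sketch of that transfer-learning recipe (not the talk's actual code): the snippet below stands in synthetic "pretrained features" for real ImageNet activations, and a small logistic-regression head for the RBM, to show why a simple classifier on good features can reach high accuracy quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for features extracted from a pretrained ImageNet
# network: two well-separated Gaussian clusters ("cat" vs. "dog").
n, d = 200, 64
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)

# Simple logistic-regression head trained with gradient descent
# (used here in place of the RBM mentioned in the talk).
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(class = 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == y)
```

With features this separable the head converges in seconds; in practice the expensive part is the pretrained network's forward pass, not the classifier on top.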
Deep Neural Networks that talk (Back)… with style – Roelof Pieters
Talk at Nuclai 2016 in Vienna
Can neural networks sing, dance, remix and rhyme? And most importantly, can they talk back? This talk will introduce deep neural nets with textual and auditory understanding and some of the recent breakthroughs made in these fields. It will then show some of the exciting possibilities these technologies hold for "creative" use and explorations of human-machine interaction, where the main theme is "augmentation, not automation".
http://events.nucl.ai/track/cognitive/#deep-neural-networks-that-talk-back-with-style
Explore Data: Data Science + Visualization – Roelof Pieters
Talk on data visualization for data scientists at the Stockholm NLP Meetup, June 2015: http://www.meetup.com/Stockholm-Natural-Language-Processing-Meetup/events/222609869/
Video recording at https://www.youtube.com/watch?v=3Li_xIQ1K84
Visual-Semantic Embeddings: some thoughts on Language – Roelof Pieters
Language technology is rapidly evolving. A resurgence in the use of distributed semantic representations and word embeddings, combined with the rise of deep neural networks has led to new approaches and new state of the art results in many natural language processing tasks. One such exciting - and most recent - trend can be seen in multimodal approaches fusing techniques and models of natural language processing (NLP) with that of computer vision.
The talk is aimed at giving an overview of the NLP part of this trend. It will start with a short overview of the challenges in creating deep networks for language, what makes for a “good” language model, and the specific requirements of semantic word spaces for multi-modal embeddings.
Learning to understand phrases by embedding the dictionary – Roelof Pieters
review of "Learning to Understand Phrases by Embedding the Dictionary" by Felix Hill, Kyunghyun Cho, Anna Korhonen, Yoshua Bengio
at KTH's Deep Learning reading group:
www.csc.kth.se/cvap/cvg/rg/
Zero-shot learning through cross-modal transfer – Roelof Pieters
review of the paper "Zero-Shot Learning Through Cross-Modal Transfer" by Richard Socher, Milind Ganjoo, Hamsa Sridhar, Osbert Bastani, Christopher D. Manning, Andrew Y. Ng.
at KTH's Deep Learning reading group:
www.csc.kth.se/cvap/cvg/rg/
Deep Learning - The Past, Present and Future of Artificial Intelligence – Lukas Masuch
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO, Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – physicist).
What sparked this new hype? How is Deep Learning different from previous approaches? Let’s look behind the curtain and unravel the reality. This talk will introduce the core concept of deep learning, explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why “deep learning is probably one of the most exciting things that is happening in the computer industry“ (Jen-Hsun Huang – CEO NVIDIA).
Google Cloud Platform - Building a scalable mobile application – Lukas Masuch
In this presentation we give an overview of several services of the Google Cloud Platform and showcase an Android application that uses them. We cover Google App Engine, Cloud Endpoints, Cloud Storage, Cloud Datastore and Google Cloud Messaging (GCM). We will talk about pitfalls, show meaningful code examples (in Java) and share several tips and dev tools for getting the most out of Google’s Cloud Platform.
5. A graph is a graph is a graph
Which drugs will bind to protein X and not interact with drug Y?
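A query like that is natural to express over a graph of typed relations. The sketch below models it with plain Python dicts; every drug and protein name here is invented purely for illustration.

```python
# Toy knowledge graph: nodes are drugs and proteins, edges are typed
# relations ("binds", "interacts"). All entity names are made up.
binds = {            # drug -> set of proteins it binds
    "drug_A": {"protein_X", "protein_Z"},
    "drug_B": {"protein_X"},
    "drug_C": {"protein_Z"},
}
interacts = {        # undirected drug-drug interaction edges
    ("drug_A", "drug_Y"),
    ("drug_C", "drug_Y"),
}

def interacts_with(d1, d2):
    return (d1, d2) in interacts or (d2, d1) in interacts

def candidates(protein, avoid_drug):
    """Drugs that bind `protein` but do not interact with `avoid_drug`."""
    return sorted(d for d, proteins in binds.items()
                  if protein in proteins and not interacts_with(d, avoid_drug))

result = candidates("protein_X", "drug_Y")  # drug_A is ruled out by its
                                            # interaction edge with drug_Y
```

The query is just a traversal over two edge types; a graph database would express the same thing declaratively, but the shape of the computation is identical.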
6. Graph-based Deep Learning
• Graphs + NLP = AWESOME
• Novel possibilities: time is a fundamental factor for any analysis
• Graphs (a natural fit for NLP and other affinities) vs. matrices (complicated, poorly scalable algorithms/mathematics)
• Whiteboard/visualization friendly
• Thin application layer = focus on the graph, not the software layer
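To make the graphs-vs.-matrices contrast concrete, here is the same toy graph in both representations (node names invented for the sketch): the adjacency-list view stays close to how you would whiteboard it, while the dense matrix view grows quadratically with the node count.

```python
# One small undirected graph, in the two rival representations.
adjacency_list = {
    "alice": ["bob"],
    "bob":   ["alice", "carol"],
    "carol": ["bob"],
}

# Dense adjacency-matrix view: rows/columns follow sorted node order.
nodes = sorted(adjacency_list)
index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for src, targets in adjacency_list.items():
    for dst in targets:
        matrix[index[src]][index[dst]] = 1

# A neighbour lookup in either view gives the same answer.
neighbours_list = set(adjacency_list["bob"])
neighbours_matrix = {nodes[j] for j, v in enumerate(matrix[index["bob"]]) if v}
```

For sparse real-world graphs the list view also wins on memory, which is part of the scalability point the slide is making.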
7. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Chris Manning, Andrew Ng and Chris Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. EMNLP 2013.
code & demo: http://nlp.stanford.edu/sentiment/index.html
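As a toy illustration of the recursive-composition idea behind that paper (this is not Socher et al.'s model; their network learns vector compositions, whereas the lexicon scores and combination rules below are invented), a sentiment score can be computed bottom-up over a binarised parse tree:

```python
# Tiny hand-written sentiment lexicon; scores are made up for the sketch.
LEXICON = {"great": 1.0, "boring": -1.0, "not": -1.0, "movie": 0.0}

def score(tree):
    """Recursively score a parse tree given as nested 2-tuples of strings."""
    if isinstance(tree, str):                  # leaf: look up the word
        return LEXICON.get(tree, 0.0)
    left, right = tree
    if isinstance(left, str) and left == "not":
        return -score(right)                   # crude negation handling
    return (score(left) + score(right)) / 2.0  # average the children

positive = score(("great", "movie"))
negated = score(("not", ("great", "movie")))   # negation flips the sign
```

The real model replaces the lookup with learned word vectors and the averaging/negation rules with a learned tensor composition, but the bottom-up traversal over the treebank's parse trees is the same.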
Graph-based Deep Learning