Graph Representation Learning with a Deep Embedding Approach:
Graphs are a commonly used data structure for representing real-world relationships, e.g., molecular structures, knowledge graphs, and social and communication networks. Effectively encoding this graph information is essential to the success of such applications. In this talk I’ll first describe a general deep learning framework, namely structure2vec, for end-to-end graph feature representation learning. Then I’ll present direct applications of this model to graph problems at different scales, including community detection and molecule graph classification/regression. We then extend the embedding idea to temporally evolving user-product interaction graphs for recommendation. Finally I’ll present our latest work on leveraging reinforcement learning techniques for graph combinatorial optimization, including the vertex cover problem for social influence maximization and the traveling salesman problem for scheduling management.
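To make the embedding idea concrete, below is a minimal neighborhood-aggregation sketch in NumPy. It is a generic illustration of iterative graph embedding, not the speaker’s structure2vec implementation; the toy graph, dimensions, and parameter matrices are all made up.

```python
import numpy as np

def embed_graph(adj, node_feats, dim=16, n_iters=4, seed=0):
    """Toy structure2vec-style embedding: each node repeatedly aggregates
    its neighbors' embeddings together with its own raw features."""
    rng = np.random.default_rng(seed)
    n, f = node_feats.shape
    W1 = rng.normal(scale=0.1, size=(f, dim))    # maps raw node features
    W2 = rng.normal(scale=0.1, size=(dim, dim))  # maps summed neighbor embeddings
    mu = np.zeros((n, dim))
    for _ in range(n_iters):
        neighbor_sum = adj @ mu                  # sum of neighbor embeddings
        mu = np.tanh(node_feats @ W1 + neighbor_sum @ W2)
    return mu.mean(axis=0)                       # pool node embeddings into a graph-level vector

# Example: a 3-node path graph with 2-dimensional node features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(embed_graph(adj, feats).shape)  # (16,)
```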
Tamara G. Kolda, Distinguished Member of Technical Staff, Sandia National Lab...MLconf
Tensor Decomposition: A Mathematical Tool for Data Analysis:
Tensors are multiway arrays, and tensor decompositions are powerful tools for data analysis. In this talk, we demonstrate the wide-ranging utility of the canonical polyadic (CP) tensor decomposition with examples in neuroscience and chemical detection. The CP model is extremely useful for interpretation, as we show with an example in neuroscience. However, it can be difficult to fit to real data for a variety of reasons. We present a novel randomized method for fitting the CP decomposition to dense data that is more scalable and robust than the standard techniques. We further consider the modeling assumptions for fitting tensor decompositions to data and explain alternative strategies for different statistical scenarios, resulting in a _generalized_ CP tensor decomposition.
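For readers new to the CP model, here is a minimal alternating-least-squares sketch for a third-order tensor in plain NumPy. It is a textbook-style illustration on synthetic data, not the speaker’s randomized or generalized method; the rank and dimensions are made up.

```python
import numpy as np

def cp_als(X, rank, n_iters=100, seed=0):
    """Minimal CP decomposition of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], rank))
    B = rng.standard_normal((X.shape[1], rank))
    C = rng.standard_normal((X.shape[2], rank))
    for _ in range(n_iters):
        # Each factor is updated from the matricized-tensor-times-Khatri-Rao product
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a synthetic rank-2 tensor, then recover a rank-2 model of it
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # near-zero residual
```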
Bio: Tamara G. Kolda is a member of the Data Science and Cyber Analytics Department at Sandia National Laboratories in Livermore, CA. Her research is generally in the area of computational science and data analysis, with specialties in multilinear algebra and tensor decompositions, graph models and algorithms, data mining, optimization, nonlinear solvers, parallel computing and the design of scientific software. She has received a Presidential Early Career Award for Scientists and Engineers (PECASE), been named a Distinguished Scientist of the Association for Computing Machinery (ACM) and a Fellow of the Society for Industrial and Applied Mathematics (SIAM). She was the winner of an R&D100 award and three best paper prizes at international conferences. She is currently a member of the SIAM Board of Trustees and serves as associate editor for both the SIAM J. Scientific Computing and the SIAM J. Matrix Analysis and Applications.
Classifying Multi-Variate Time Series at Scale:
Characterizing and understanding the runtime behavior of large-scale Big Data production systems is extremely important. Typical systems consist of hundreds to thousands of machines in a cluster with hundreds of terabytes of storage costing millions of dollars, solving problems that are business critical. By instrumenting each running process and measuring its resource utilization (CPU, memory, I/O, network, etc.) as time series, it is possible to understand and characterize the workload on these massive clusters. Each time series consists of tens to tens of thousands of data points that must be ingested and then classified. At Pepperdata, our instrumentation of the clusters collects over three hundred metrics from each task every five seconds, resulting in millions of data points per hour. At this scale the data are equivalent to the biggest IoT data sets in the world. Our objective is to classify the collection of time series into a set of classes that represent different workload types. Phrased differently, our problem is essentially that of classifying multivariate time series.
In this talk, we propose a unique, off-the-shelf approach to classifying time series that achieves near best-in-class accuracy for univariate series and generalizes to multivariate time series. Our technique maps each time series to a Gramian Angular Difference Field (GADF), interprets that as an image, uses Google’s pre-trained CNN (Inception v3) to map the GADF images into a 2048-dimensional vector space, and then uses a small MLP with two hidden layers of fifty nodes each and a softmax output to achieve the final classification. Our work is not domain specific – a fact proven by our achieving accuracies competitive with published results on the univariate UCR data set as well as the multivariate UCI data set.
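As a rough sketch of the GADF transform mentioned above (the rescaling and clipping choices here are assumptions, and this is not the authors’ exact pipeline), the angular field can be computed directly from a normalized series before handing the resulting image to a pre-trained CNN:

```python
import numpy as np

def gadf(series):
    """Gramian Angular Difference Field of a 1-D series."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1        # rescale to [-1, 1]
    # sin(phi_i - phi_j) with phi = arccos(x), expanded directly in terms of x
    return np.outer(np.sqrt(1 - x**2), x) - np.outer(x, np.sqrt(1 - x**2))

ts = np.sin(np.linspace(0, 6 * np.pi, 64)) + 0.1 * np.random.randn(64)
img = gadf(ts)    # a 64x64 "image" that a pre-trained CNN could then embed
print(img.shape)  # (64, 64)
```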
Bio: Before joining Pepperdata, Ash was executive chairman for Marianas Labs, a deep learning startup sold in December 2015. Prior to that he was CEO for Graphite Systems, a big data storage startup that was sold to EMC DSSD in August 2015. Munshi also served as CTO of Yahoo, as a CEO of both public and private companies, and is on the board of several technology startups.
Matineh Shaker, Artificial Intelligence Scientist, Bonsai at MLconf SF 2017MLconf
Deep Reinforcement Learning with Shallow Trees:
In this talk, I present Concept Network Reinforcement Learning (CNRL), developed at Bonsai. It is an industrially applicable approach to solving complex tasks using reinforcement learning, which facilitates problem decomposition, allows component reuse, and simplifies reward functions. Inspired by Sutton’s options framework, we introduce the notion of “Concept Networks”, tree-like structures whose leaves are “sub-concepts” (sub-tasks) representing policies on a subset of state space. The parent (non-leaf) nodes are “Selectors”, containing policies for choosing among the child sub-concepts at each time step during an episode. The talk will begin with a high-level overview of reinforcement learning fundamentals.
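A toy rendition of the concept-network idea (purely illustrative; the task, policies, and threshold below are invented, not Bonsai’s implementation): leaves are sub-concept policies and a selector node decides which leaf acts at each step.

```python
class SubConcept:
    """Leaf node: a policy over a subset of state space (stubbed for illustration)."""
    def __init__(self, name, action_fn):
        self.name, self.action_fn = name, action_fn
    def act(self, state):
        return self.action_fn(state)

class Selector:
    """Non-leaf node: a policy that chooses which child concept runs at each step."""
    def __init__(self, children, choose_fn):
        self.children, self.choose_fn = children, choose_fn
    def act(self, state):
        return self.children[self.choose_fn(state)].act(state)

# Hypothetical "reach then grasp" decomposition
reach = SubConcept("reach", lambda s: "move_toward_target")
grasp = SubConcept("grasp", lambda s: "close_gripper")
root = Selector([reach, grasp], choose_fn=lambda s: 0 if s["distance"] > 0.05 else 1)

print(root.act({"distance": 0.2}))   # -> "move_toward_target"
print(root.act({"distance": 0.01}))  # -> "close_gripper"
```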
Bio: Matineh Shaker is an Artificial Intelligence Scientist at Bonsai in Berkeley, CA, where she builds machine learning, reinforcement learning, and deep learning tools and algorithms for general purpose intelligent systems. She was previously a Machine Learning Researcher at Geometric Intelligence, a Data Science Fellow at Insight Data Science, and a Predoctoral Fellow at Harvard Medical School. She received her PhD from Northeastern University with a dissertation in geometry-inspired manifold learning.
Anima Anandkumar, Principal Scientist, Amazon Web Services, Endowed Professor,...MLconf
Large-scale Machine Learning: Deep, Distributed and Multi-Dimensional:
Modern machine learning involves deep neural network architectures which yield state-of-the-art performance in multiple domains such as computer vision, natural language processing and speech recognition. As the data and models scale, it becomes necessary to have multiple processing units for both training and inference. Apache MXNet is an open-source framework developed for distributed deep learning. I will describe the underlying lightweight hierarchical parameter server architecture that results in high efficiency in distributed settings.
Pushing the current boundaries of deep learning requires using multiple dimensions and modalities. These can be encoded into tensors, which are natural extensions of matrices. We present new deep learning architectures that preserve the multi-dimensional information in data end-to-end. We show that tensor contractions and regression layers are an effective replacement for fully connected layers in deep learning architectures. They result in significant space savings with negligible performance degradation. These functionalities are available in the Tensorly package with MXNet backend interface for large-scale efficient learning.
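A rough NumPy sketch of a tensor contraction layer (a generic illustration, not the TensorLy API; the activation-tensor shape and factor sizes are made up): each non-batch mode is contracted with a small factor matrix, so the multi-dimensional structure is preserved instead of being flattened for a fully connected layer.

```python
import numpy as np

def tensor_contraction_layer(x, factors):
    """Contract each non-batch mode of x with a factor matrix (Tucker-style)."""
    out = x
    for mode, W in enumerate(factors, start=1):
        out = np.tensordot(out, W, axes=([mode], [0]))  # contracted mode moves to the end
        out = np.moveaxis(out, -1, mode)                # put the reduced mode back in place
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 8, 8, 16))                 # batch of activation tensors
factors = [rng.standard_normal((8, 4)),                 # mode 1: 8 -> 4
           rng.standard_normal((8, 4)),                 # mode 2: 8 -> 4
           rng.standard_normal((16, 6))]                # mode 3: 16 -> 6
print(tensor_contraction_layer(x, factors).shape)       # (32, 4, 4, 6)
```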
Bio: Anima Anandkumar is a principal scientist at Amazon Web Services and a Bren professor at the Caltech CMS department. Her research interests are in the areas of large-scale machine learning, non-convex optimization and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards such as the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, Google research award, ARO and AFOSR Young Investigator Awards, NSF Career Award, Early Career Excellence in Research Award at UCI, Best Thesis Award from the ACM SIGMETRICS society, IBM Fran Allen PhD fellowship, and several best paper awards. She has been featured in a number of forums such as YourStory, the Quora ML session, O’Reilly Media, and so on. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, an assistant professor at U.C. Irvine between 2010 and 2016, and a visiting researcher at Microsoft Research New England in 2012 and 2014.
Tianqi Chen, PhD Student, University of Washington, at MLconf Seattle 2017MLconf
Tianqi holds a bachelor’s degree in Computer Science from Shanghai Jiao Tong University, where he was a member of the ACM Class, now part of Zhiyuan College at SJTU. He did his master’s degree at Shanghai Jiao Tong University in the Apex Data and Knowledge Management group before joining the University of Washington as a PhD student. He has held several prestigious internships and visiting positions, including Google’s Brain Team, GraphLab (authoring the boosted tree and neural net toolkits), Microsoft Research Asia’s Machine Learning Group, and the Digital Enterprise Research Institute in Galway, Ireland. What really excites Tianqi is what processes and goals can be enabled when we bring advanced learning techniques and systems together. He pushes the envelope on deep learning, knowledge transfer and lifelong learning. His PhD is supported by a Google PhD Fellowship.
Build Scalable and Modular Learning Systems:
Machine learning and data-driven approaches are becoming very important in many areas. One factor drives these successful applications: scalable learning systems that learn the model of interest from large datasets. More importantly, such systems need to be designed in a modular way to work with the existing ecosystem and improve users’ productivity. In this talk, I will talk about XGBoost and MXNet, two scalable and portable learning systems that I built. I will discuss how we can apply distributed computing, asynchronous scheduling and hardware acceleration to improve these systems, as well as how they fit into the bigger open-source ecosystem of machine learning.
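As a small usage illustration (not taken from the talk; the data and parameters below are made up), XGBoost’s Python API trains a gradient-boosted tree model in a few lines:

```python
import numpy as np
import xgboost as xgb  # assumes the xgboost package is installed

# Toy binary-classification data
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dvalid = xgb.DMatrix(X[400:], label=y[400:])

params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=50,
                    evals=[(dvalid, "valid")], verbose_eval=False)
print(booster.predict(dvalid)[:5])  # predicted probabilities on the validation split
```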
Alexandra Johnson, Software Engineer, SigOpt at MLconf ATL 2017MLconf
Best Practices for Hyperparameter Optimization:
All machine learning and artificial intelligence pipelines – from reinforcement agents to deep neural nets – have tunable hyperparameters. Optimizing these hyperparameters provides tremendous performance gains, but only if the optimization is done correctly. This presentation will discuss topics including selecting performance criteria, why you should always use cross validation, and choosing between state of the art optimization methods.
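To illustrate the cross-validation point, here is a minimal random-search sketch with scikit-learn (a generic example, not SigOpt’s optimizer; the model and search ranges are arbitrary): each candidate configuration is scored by cross-validation rather than a single train/test split.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
rng = np.random.default_rng(0)

best_score, best_params = -np.inf, None
for _ in range(20):                                   # simple random search
    params = {"n_estimators": int(rng.integers(50, 300)),
              "max_depth": int(rng.integers(2, 12))}
    model = RandomForestClassifier(random_state=0, **params)
    # cross-validation gives a less biased performance estimate than one split
    score = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    if score > best_score:
        best_score, best_params = score, params

print(best_params, round(best_score, 3))
```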
Introduction of "TrailBlazer" algorithmKatsuki Ohto
Slides introducing the paper “Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning”, presented at the NIPS 2016 paper-reading meetup @ PFN (2017/1/19): https://connpass.com/event/47580/
This presentation introduces Google DeepMind's DeepDPG algorithm to my colleagues.
I tried my best to make it easy to understand...
Comments are always welcome :)
hiddenmaze91.blogspot.com
1118_Seminar_Continuous_Deep Q-Learning with Model based accelerationHye-min Ahn
The material that I've used to present the paper
"Continuous Deep Q-Learning with Model-based Acceleration", S.Gu, T.Lillicrap, I.Sutskever, S.Levine, 2016 ICML
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16MLconf
Multi-algorithm Ensemble Learning at Scale: Software, Hardware and Algorithmic Approaches: Multi-algorithm ensemble machine learning methods are often used when the true prediction function is not easily approximated by a single algorithm. The Super Learner algorithm, also known as stacking, combines multiple, typically diverse, base learning algorithms into a single, powerful prediction function through a secondary learning process called metalearning. Although ensemble methods offer superior performance over their singleton counterparts, there is an implicit computational cost to ensembles, as they require training and cross-validating multiple base learning algorithms.
We will demonstrate a variety of software- and hardware-based approaches that lead to more scalable ensemble learning software, including a highly scalable implementation of stacking called “H2O Ensemble”, built on top of the open source, distributed machine learning platform, H2O. H2O Ensemble scales across multi-node clusters and allows the user to create ensembles of deep neural networks, Gradient Boosting Machines, Random Forests, and others. As for algorithm-based approaches, we will present two algorithmic modifications to the original stacking algorithm that further reduce computation time — the Subsemble algorithm and the Online Super Learner algorithm. This talk will also include benchmarks of the implementations of these new stacking variants.
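For readers unfamiliar with stacking, a minimal scikit-learn sketch of the idea follows (a generic illustration, not the H2O Ensemble, Subsemble, or Online Super Learner implementations): cross-validated predictions from each base learner become the features for a metalearner.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
base_learners = [RandomForestClassifier(n_estimators=100, random_state=0),
                 GradientBoostingClassifier(random_state=0),
                 LogisticRegression(max_iter=1000)]

# Level-one data: out-of-fold predicted probabilities from each base learner
Z = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_learners])

metalearner = LogisticRegression().fit(Z, y)   # combines the base predictions
for m in base_learners:                        # refit base learners on all data for deployment
    m.fit(X, y)
print(metalearner.coef_)
```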
Generalized Linear Models in Spark MLlib and SparkRDatabricks
Generalized linear models (GLMs) unify various statistical models such as linear regression and logistic regression through the specification of a model family and link function. They are widely used in modeling, inference, and prediction with applications in numerous fields. In this talk, we will summarize recent community efforts in supporting GLMs in Spark MLlib and SparkR. We will review supported model families, link functions, and regularization types, as well as their use cases, e.g., logistic regression for classification and log-linear model for survival analysis. Then we discuss the choices of solvers and their pros and cons given training datasets of different sizes, and implementation details in order to match R’s model output and summary statistics. We will also demonstrate the APIs in MLlib and SparkR, including R model formula support, which make building linear models a simple task in Spark. This is a joint work with Eric Liang, Yanbo Liang, and some other Spark contributors.
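A brief example of fitting a GLM through the Spark MLlib Python API (this assumes Spark 2.0+ with pyspark available; the toy data and parameter values are made up): logistic regression is expressed as a binomial-family GLM with a logit link.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import GeneralizedLinearRegression

spark = SparkSession.builder.appName("glm-example").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.3, 0.0), (0.0, 1.1, 1.5), (1.0, 3.0, 0.2), (0.0, 0.5, 2.0)],
    ["label", "x1", "x2"])
data = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

# Binomial family + logit link = logistic regression expressed as a GLM
glm = GeneralizedLinearRegression(family="binomial", link="logit", maxIter=25)
model = glm.fit(data)
print(model.coefficients, model.intercept)
print(model.summary)   # R-style training summary, as discussed in the talk
spark.stop()
```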
Sergei Vassilvitskii, Research Scientist, Google at MLconf NYC - 4/15/16MLconf
Teaching K-Means New Tricks: Over 50 years old, the k-means algorithm remains one of the most popular clustering algorithms. In this talk we’ll cover some recent developments, including better initialization, the notion of coresets, clustering at scale, and clustering with outliers.
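As an illustration of the “better initialization” point, here is a minimal k-means++ seeding sketch in NumPy (a generic rendition of the standard algorithm, not the speaker’s code; the data are synthetic):

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """k-means++ seeding: each new center is sampled with probability
    proportional to its squared distance from the nearest existing center."""
    rng = rng or np.random.default_rng(0)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ([0, 0], [5, 5], [0, 5])])
print(kmeans_pp_init(X, k=3))   # seeds that tend to land near distinct clusters
```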
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
deep reinforcement learning with double q learningSeungHyeok Baek
presentation for Lab seminar
Double DQN algorithm from DeepMind
Van Hasselt, Hado, Arthur Guez, and David Silver. "Deep Reinforcement Learning with Double Q-Learning." AAAI. Vol. 2. 2016.
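A minimal sketch of the double Q-learning target computation introduced in the paper above (framework-agnostic NumPy; the batch values are made up): the online network selects the next action while the target network evaluates it, which reduces overestimation bias.

```python
import numpy as np

def double_q_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN targets: argmax from the online net, value from the target net."""
    best_actions = np.argmax(next_q_online, axis=1)
    next_values = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * next_values * (1.0 - dones)

# Toy batch of 3 transitions with 4 possible actions
rewards = np.array([1.0, 0.0, -1.0])
dones = np.array([0.0, 0.0, 1.0])          # the last transition is terminal
next_q_online = np.random.randn(3, 4)      # stand-in for online-network Q-values
next_q_target = np.random.randn(3, 4)      # stand-in for target-network Q-values
print(double_q_targets(rewards, next_q_online, next_q_target, dones))
```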
Melanie Warrick, Deep Learning Engineer, Skymind.io at MLconf SF - 11/13/15MLconf
Attention Neural Net Model Fundamentals: Neural networks have regained popularity over the last decade because they are demonstrating real-world value in different applications (e.g. targeted advertising, recommender engines, Siri, self-driving cars, facial recognition). Several model types are currently explored in the field, with recurrent neural networks (RNN) and convolutional neural networks (CNN) taking the top focus. The attention model, a recently developed RNN variant, has started to play a larger role in both natural language processing and image analysis research.
This talk will cover the fundamentals of the attention model structure and how it’s applied to visual and speech analysis. I will provide an overview of the model functionality and math, including a high-level differentiation between soft and hard types. The goal is to give you enough of an understanding of what the model is, how it works and where to apply it.
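To ground the soft/hard distinction, here is a minimal NumPy sketch of soft attention (a generic illustration, not the specific models in the talk): a softmax over query–key scores yields a differentiable weighted average of the values, whereas hard attention would sample a single location instead.

```python
import numpy as np

def soft_attention(query, keys, values):
    """Softmax-weighted average of values, scored by key-query similarity."""
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values, weights

keys = np.random.randn(10, 8)        # e.g. 10 encoder states
values = np.random.randn(10, 8)
query = np.random.randn(8)           # e.g. the current decoder state
context, attn = soft_attention(query, keys, values)
print(context.shape, round(attn.sum(), 3))   # (8,) 1.0
```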
Virginia Smith, Researcher, UC Berkeley at MLconf SF 2016MLconf
A General Framework for Communication-Efficient Distributed Optimization: Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In light of this, we propose a general framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. Our framework enjoys strong convergence guarantees and exhibits state-of-the-art empirical performance in the distributed setting. We demonstrate this performance with extensive experiments in Apache Spark, achieving speedups of up to 50x compared to leading distributed methods on common machine learning objectives.
https://imatge.upc.edu/web/publications/reproducing-and-analyzing-adaptive-computation-time-pytorch-and-tensorflow
The complexity of solving a problem can differ greatly from the complexity of posing that problem. Building a neural network capable of dynamically adapting to the complexity of its inputs would be a great feat for the machine learning community. One of the most promising approaches is Adaptive Computation Time for Recurrent Neural Networks (ACT). In this thesis, we implement ACT in two of the most used deep learning frameworks, PyTorch and TensorFlow. Both implementations are open source and publicly available. We use these implementations to evaluate the capability of ACT to learn algorithms from examples. We compare ACT with a proposed baseline where each input sample of the sequence is read a fixed number of times, learned as a hyperparameter during training. Surprisingly, we do not observe any benefit from ACT when compared with this baseline solution, which opens new and unexpected directions for future research.
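A drastically simplified sketch of the ACT pondering loop for a single input step (the halting unit, state update, and threshold below are stand-ins, not the thesis implementations in PyTorch or TensorFlow): the state is updated until the accumulated halting probability reaches 1 − epsilon, and the intermediate states are mixed by those weights.

```python
import numpy as np

def act_ponder(state, step_fn, halt_fn, eps=0.01, max_steps=10):
    """Adaptive Computation Time: ponder until the halting mass reaches 1 - eps."""
    total, weights, states = 0.0, [], []
    for n in range(max_steps):
        state = step_fn(state)
        p = halt_fn(state)
        if total + p >= 1 - eps or n == max_steps - 1:
            weights.append(1.0 - total)        # the remainder goes to the final step
            states.append(state)
            break
        total += p
        weights.append(p)
        states.append(state)
    mixed = np.average(np.stack(states), axis=0, weights=np.array(weights))
    return mixed, len(weights)

step_fn = lambda s: np.tanh(s + 0.1)                   # stand-in for the RNN update
halt_fn = lambda s: 1.0 / (1.0 + np.exp(-s.mean()))    # stand-in halting unit
state, n_ponder = act_ponder(np.zeros(4), step_fn, halt_fn)
print(state, n_ponder)   # final mixed state and the number of ponder steps used
```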
Jeremy Nixon, Machine Learning Engineer, Spark Technology Center at MLconf AT...MLconf
Convolutional Neural Networks at scale in Spark MLlib:
Jeremy Nixon will focus on the engineering and applications of a new algorithm built on top of MLlib. The presentation will focus on the methods the algorithm uses to automatically generate features to capture nonlinear structure in data, as well as the process by which it’s trained. Major aspects of that include compositional transformations over the data, convolution, and distributed backpropagation via SGD with adaptive gradients and an adaptive learning rate. Applications will look into how to use convolutional neural networks to model data in computer vision, natural language and signal processing. Details around optimal preprocessing, the type of structure that can be learned, and managing its ability to generalize will inform developers looking to apply nonlinear modeling tools to problems that they face.
Jacob Eisenstein, Assistant Professor, School of Interactive Computing, Georg...MLconf
Making Natural Language Processing Robust to Sociolinguistic Variation:
Natural language processing on social media text has the potential to aggregate facts and opinions from millions of people all over the world. However, language in social media is highly variable, making it more difficult to analyze than conventional news text. Fortunately, this variation is not random; it is often linked to social properties of the author. I will describe two machine learning methods for exploiting social network structures to make natural language processing more robust to socially linked variation. The key idea behind both methods is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. This idea is captured using embeddings of node positions in social networks. By integrating node embeddings into neural networks for language analysis, we obtained customized language processing systems for individual writers — even for individuals for whom we have no labeled data. The first application shows how to apply this idea to the problem of tweet-level sentiment analysis. The second application targets the problem of linking spans of text to known entities in a knowledge base.
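A toy sketch of what integrating node embeddings into a language-analysis network can look like (the feature construction below is an assumption made for illustration, not the authors’ architecture): the author’s social-network embedding is concatenated with a text representation before classification.

```python
import numpy as np

def author_aware_features(token_ids, author_id, word_emb, node_emb):
    """Concatenate a bag-of-embeddings text vector with the author's node embedding,
    so a downstream classifier can adapt to that author's linguistic habits."""
    text_vec = word_emb[token_ids].mean(axis=0)
    return np.concatenate([text_vec, node_emb[author_id]])

vocab_size, n_users = 5000, 300
word_emb = np.random.randn(vocab_size, 50)   # stand-in for pretrained word embeddings
node_emb = np.random.randn(n_users, 16)      # stand-in for social-graph node embeddings
x = author_aware_features([10, 42, 7], author_id=5, word_emb=word_emb, node_emb=node_emb)
print(x.shape)   # (66,) -> input to, e.g., a tweet-level sentiment classifier
```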
Introduction of "TrailBlazer" algorithmKatsuki Ohto
論文「Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning」紹介スライドです。NIPS2016読み会@PFN(2017/1/19) https://connpass.com/event/47580/ にて。
This presentation is for introducing google DeepMind's DeepDPG algorithm to my colleagues.
I tried my best to make it easy to be understood...
Comment is always welcome :)
hiddenmaze91.blogspot.com
1118_Seminar_Continuous_Deep Q-Learning with Model based accelerationHye-min Ahn
The material that I've used to present the paper
"Continuous Deep Q-Learning with Model-based Acceleration", S.Gu, T.Lillicrap, I.Sutskever, S.Levine, 2016 ICML
Dr. Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf SEA - 5/20/16MLconf
Multi-algorithm Ensemble Learning at Scale: Software, Hardware and Algorithmic Approaches: Multi-algorithm ensemble machine learning methods are often used when the true prediction function is not easily approximated by a single algorithm. The Super Learner algorithm, also known as stacking, combines multiple, typically diverse, base learning algorithms into a single, powerful prediction function through a secondary learning process called metalearning. Although ensemble methods offer superior performance over their singleton counterparts, there is an implicit computational cost to ensembles, as it requires training and cross-validating multiple base learning algorithms.
We will demonstrate a variety of software- and hardware-based approaches that lead to more scalable ensemble learning software, including a highly scalable implementation of stacking called “H2O Ensemble”, built on top of the open source, distributed machine learning platform, H2O. H2O Ensemble scales across multi-node clusters and allows the user to create ensembles of deep neural networks, Gradient Boosting Machines, Random Forest, and others. As for algorithm-based approaches, we will present two algorithmic modifications to the original stacking algorithm that further reduce computation time — Subsemble algorithm and the Online Super Learner algorithm. This talk will also include benchmarks of the implementations of these new stacking variants.
Generalized Linear Models in Spark MLlib and SparkRDatabricks
Generalized linear models (GLMs) unify various statistical models such as linear regression and logistic regression through the specification of a model family and link function. They are widely used in modeling, inference, and prediction with applications in numerous fields. In this talk, we will summarize recent community efforts in supporting GLMs in Spark MLlib and SparkR. We will review supported model families, link functions, and regularization types, as well as their use cases, e.g., logistic regression for classification and log-linear model for survival analysis. Then we discuss the choices of solvers and their pros and cons given training datasets of different sizes, and implementation details in order to match R’s model output and summary statistics. We will also demonstrate the APIs in MLlib and SparkR, including R model formula support, which make building linear models a simple task in Spark. This is a joint work with Eric Liang, Yanbo Liang, and some other Spark contributors.
Sergei Vassilvitskii, Research Scientist, Google at MLconf NYC - 4/15/16MLconf
Teaching K-Means New Tricks: Over 50 years old, the k-means algorithm remains one of the most popular clustering algorithms. In this talk we’ll cover some recent developments, including better initialization, the notion of coresets, clustering at scale, and clustering with outliers.
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.
deep reinforcement learning with double q learningSeungHyeok Baek
presentation for Lab seminar
Double DQN Algorithm of Deepmind
Van Hasselt, Hado, Arthur Guez, and David Silver. "Deep Reinforcement Learning with Double Q-Learning." AAAI. Vol. 2. 2016.
Melanie Warrick, Deep Learning Engineer, Skymind.io at MLconf SF - 11/13/15MLconf
Attention Neural Net Model Fundamentals: Neural networks have regained popularity over the last decade because they are demonstrating real world value in different applications (e.g. targeted advertising, recommender engines, Siri, self driving cars, facial recognition). Several model types are currently explored in the field with recurrent neural networks (RNN) and convolution neural networks (CNN) taking the top focus. The attention model, a recently developed RNN variant, has started to play a larger role in both natural language processing and image analysis research.
This talk will cover the fundamentals of the attention model structure and how its applied to visual and speech analysis. I will provide an overview of the model functionality and math including a high-level differentiation between soft and hard types. The goal is to give you enough of an understanding of what the model is, how it works and where to apply it.
Virginia Smith, Researcher, UC Berkeley at MLconf SF 2016MLconf
A General Framework for Communication-Efficient Distributed Optimization: Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In light of this, we propose a general framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. Our framework enjoys strong convergence guarantees and exhibits state-of-the-art empirical performance in the distributed setting. We demonstrate this performance with extensive experiments in Apache Spark, achieving speedups of up to 50x compared to leading distributed methods on common machine learning objectives.
https://imatge.upc.edu/web/publications/reproducing-and-analyzing-adaptive-computation-time-pytorch-and-tensorflow
The complexity of solving a problem can differ greatly to the complexity of posing that problem. Building a Neural Network capable of dynamically adapting to the complexity of the inputs would be a great feat for the machine learning community. One of the most promising approaches is Adaptive Computation Time for Recurrent Neural Network (ACT) \parencite{act}. In this thesis, we implement ACT in two of the most used deep learning frameworks, PyTorch and TensorFlow. Both are open source and publicly available. We use this implementations to evaluate the capability of ACT to learn algorithms from examples. We compare ACT with a proposed baseline where each input data sample of the sequence is read a fixed amount of times, learned as a hyperparameter during training. Surprisingly, we do not observe any benefit from ACT when compared with this baseline solution, which opens new and unexpected directions for future research.
Jeremy Nixon, Machine Learning Engineer, Spark Technology Center at MLconf AT...MLconf
Convolutional Neural Networks at scale in Spark MLlib:
Jeremy Nixon will focus on the engineering and applications of a new algorithm built on top of MLlib. The presentation will focus on the methods the algorithm uses to automatically generate features to capture nonlinear structure in data, as well as the process by which it’s trained. Major aspects of that include compositional transformations over the data, convolution, and distributed backpropagation via SGD with adaptive gradients and an adaptive learning rate. Applications will look into how to use convolutional neural networks to model data in computer vision, natural language and signal processing. Details around optimal preprocessing, the type of structure that can be learned, and managing its ability to generalize will inform developers looking to apply nonlinear modeling tools to problems that they face.
Jacob Eisenstein, Assistant Professor, School of Interactive Computing, Georg...MLconf
Making Natural Language Processing Robust to Sociolinguistic Variation:
Natural language processing on social media text has the potential to aggregate facts and opinions from millions of people all over the world. However, language in social media is highly variable, making it more difficult to analyze that conventional news texts. Fortunately, this variation is not random; it is often linked to social properties of the author. I will describe two machine learning methods for exploiting social network structures to make natural language processing more robust to socially-linked variation. The key idea behind both methods is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. This idea is captured using embeddings of node positions in social networks. By integrating node embeddings into neural networks for language analysis, we obtained customized language processing systems for individual writers — even for individuals for whom we have no labeled data. The first application shows how to apply this idea to the problem of tweet-level sentiment analysis. The second application targets the problem of linking spans of text to known entities in a knowledge base.
Qiaoling Liu, Lead Data Scientist, CareerBuilder at MLconf ATL 2017MLconf
CompanyDepot: Employer Name Normalization in the Online Recruitment Industry
In the recruitment domain, the employer name normalization task, which links employer names in job postings or resumes to entities in an employer knowledge base (KB), is important to many business applications. It has several unique challenges: handling employer names from both job postings and resumes, leveraging the corresponding location and URL context, and handling name variations, irrelevant input data, and noise in the KB. In this talk, we present a system called CompanyDepot which uses machine learning techniques to address these challenges. The proposed system achieves 2.5%–21.4% higher coverage at the same precision level compared to a legacy system used at CareerBuilder over multiple real-world datasets. After applying it to several applications at CareerBuilder, we faced a new challenge: how to avoid duplicate normalization results when the KB is noisy and contains many duplicate entities. To address this challenge, we extend the CompanyDepot system to normalize employer names not only at the entity level, but also at the cluster level by mapping a query to the cluster in the KB that best matches the query. The proposed system performs efficient graph-based clustering based on external knowledge from five mapping sources. We also propose a new metric based on success rate and diversity reduction ratio for evaluating cluster-level normalization. Through experiments and applications, we demonstrate a large improvement in normalization quality from entity-level to cluster-level normalization.
Venkatesh Ramanathan, Data Scientist, PayPal at MLconf ATL 2017MLconf
Large Scale Graph Processing & Machine Learning Algorithms for Payment Fraud Prevention:
PayPal is at the forefront of applying large-scale graph processing and machine learning algorithms to keep fraudsters at bay. In this talk, I’ll present how advanced graph processing and machine learning algorithms such as Deep Learning and Gradient Boosting are applied at PayPal for fraud prevention. I’ll elaborate on the specific challenges in applying large-scale graph processing and machine learning techniques to payment fraud prevention. I’ll explain how we employ sophisticated machine learning tools – both open source and in-house developed.
I will also present results from experiments conducted on a very large graph data set containing millions of edges and vertices.
Jennifer Marsman, Principal Software Development Engineer, Microsoft at MLcon...MLconf
Game of Drones: Using IoT, Machine Learning, Drones, and Networking to Solve World Hunger
Drones are increasingly used in various commercial and consumer scenarios – from agriculture drones (providing farmers with crop and irrigation patterns) to consumer drones (that follow you around as you engage in action sports), to drone racing. Drones are outfitted with a large number of sensors (cameras, accelerometers, gyros, etc.), and can continuously stream these signals in real time for analysis.
This talk introduces the landscape of the various drone technologies that are currently available, and shows you how to acquire and analyze the real-time signals from the drones to design intelligent applications in an IoT pipeline. We will demonstrate how to leverage machine learning models that perform real-time facial detection along with predictions of age, gender, emotion, and object recognition using the signals acquired from the drones. You will walk away understanding the basics of how to develop applications that utilize and visualize these real-time insights.
This talk includes fun with drones, how to tackle the problem of world hunger, and some Game of Thrones silliness. It is targeted at data scientists, students, researchers, and IT professionals who have an interest in building intelligent applications using drones and machine learning. It will be a fun and exciting exploration as we demonstrate a drone with the power of recognizing faces, ages, genders, emotions, and objects. You will learn how to leverage these same machine learning models to imbue intelligence into drones or other applications.
Daniel Shank, Data Scientist, Talla at MLconf SF 2017MLconf
Getting Value Out of Chat Data:
Chat-based interfaces are increasingly common, whether as customers interacting with companies or as employees communicating with each other within an organization. Given the large number of chat logs being captured, along with recent advances in natural language processing, there is a desire to leverage this data for both insight generation and machine learning applications. Unfortunately, chat data is user-generated data, meaning it is often noisy and difficult to normalize. It is also mostly short texts and heavily context-dependent, which cause difficulty in applying methods such as topic modeling and information extraction.
Despite these challenges, it is still possible to extract useful information from these data sources. In this talk, I will be providing an overview of techniques and practices for working with chat-based user interaction data with a focus on machine-augmented data annotation and unsupervised learning methods.
Bio: Daniel Shank is a Senior Data Scientist at Talla, a company developing a platform for intelligent information discovery and delivery. His focus is on developing machine learning techniques to handle various business automation tasks, such as scheduling, polls, expert identification, as well as doing work on NLP. Before joining Talla as the company’s first employee in 2015, Daniel worked with TechStars Boston and did consulting work for ThriveHive, a small business focused marketing company in Boston. He studied economics at the University of Chicago.
Jonas Schneider, Head of Engineering for Robotics, OpenAIMLconf
Machine Learning Systems at Scale:
OpenAI is a non-profit research company, discovering and enacting the path to safe artificial general intelligence. As part of our work, we regularly push the limits of scalability in cutting-edge ML algorithms. We’ve found that in many cases, designing the systems we build around the core algorithms is as important as designing the algorithms themselves. This means that many systems engineering areas, such as distributed computing, networking, and orchestration, are crucial for machine learning to succeed on large problems requiring thousands of computers. As a result, at OpenAI engineers and researchers work closely together to build these large systems as opposed to a strict researcher/engineer split. In this talk, we will go over some of the lessons we’ve learned, and how they come together in the design and internals of our system for learning-based robotics research.
Bio: Jonas leads technology development for OpenAI’s robotics group, developing methods to apply machine learning and AI to robots. He also helped build the infrastructure to scale OpenAI’s distributed ML systems to thousands of machines.
Artemy Malkov, CEO, Data Monsters at The AI Conference 2017 MLconf
Artemy is the CEO of Data Monsters, a Palo Alto based research lab and consulting company. Prior to Data Monsters, Artemy founded a business intelligence startup, which raised $6M of venture capital and two years later was sold to a nationwide system integrator. Artemy is an expert in computational social science, knowledge mining, and chaos theory.
Why Chatbots Fail, And How To Fix Them
Chatbots look damn smart at demonstrations when presenters follow the pre-designed scripts. But chatbots fail when real users come. Real users talk in an unexpected manner, change topics and so on.
Bots still have very few success stories, with a very limited number of use cases. The technology did not take off. In March 2017 Facebook recommended replacing the conversational experience with a three-level menu navigation. Another leader, Amazon Alexa, has only one frequent use case: “Alexa, play a song”. Everything else does not stick.
The frequency of users’ requests follows a long-tailed statistical distribution. In order to keep a conversation going, a good chatbot should be able to understand thousands of topics, not dozens. That requires huge knowledge bases.
We analyzed thousands of chatbot logs and observed a significant probability of misunderstanding that multiplies with every next phrase. 10-30% of users say something the chatbot is not prepared and trained for. Almost every long conversation frustrates the user. Retention rate is 3-5 times lower for bots than for mobile apps, which is a disaster.
We want to discuss these problems and offer technical solutions to improve the experience, create knowledge bases faster and build useful self-learning chatbots.
Aran Khanna, Software Engineer, Amazon Web Services at MLconf ATL 2017MLconf
High Performance Deep Learning on Edge Devices With Apache MXNet:
Deep network based models are marked by an asymmetry between the large amount of compute power needed to train a model and the relatively small amount of compute power needed to deploy the trained model for inference. This is particularly true in computer vision tasks such as object detection or image classification, where millions of labeled images and large numbers of GPUs are needed to produce an accurate model that can be deployed for inference on low-powered devices with a single CPU. The challenge when deploying vision models on these low-powered devices, though, is getting inference to run efficiently enough to allow for near real-time processing of a video stream. Fortunately, Apache MXNet provides the tools to solve these issues, allowing users to create highly performant models with tools like separable convolutions, quantized weights and sparsity exploitation, as well as providing custom hardware kernels to ensure inference calculations are accelerated to the maximum amount allowed by the hardware the model is deployed on. This is demonstrated through a state-of-the-art MXNet-based vision network running in near real time on a low-powered Raspberry Pi device. We finally discuss how running inference at the edge, as well as leveraging MXNet’s efficient modeling tools, can be used to massively drive down compute costs for deploying deep networks in a production system at scale.
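To see why separable convolutions matter on edge devices, a back-of-the-envelope parameter comparison (generic arithmetic, not MXNet-specific code; the layer sizes are made up):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 256, 3
std = conv_params(c_in, c_out, k)            # 294,912 weights
sep = separable_conv_params(c_in, c_out, k)  # 33,920 weights
print(std, sep, round(std / sep, 1))         # roughly an 8.7x parameter reduction here
```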
Ryan West, Machine Learning Engineer, Nexosis at MLconf ATL 2017MLconf
Codifying Data Science Intuition: Using Decision Theory to Automate Time Series Model Selection:
While models generated from cross-sectional data can utilize cross-validation for model selection, most time series models cannot be cross-validated due to the temporal structure of the data used to create them. It is possible to employ a rolling cross-validation technique; however, this process is computationally expensive and provides no indication of the long-term forecast accuracies of the models.
The purpose of this talk is to elaborate how decision theory can be used to automate time series model selection in order to streamline the manual process of validation and testing. By creating consecutive, temporally independent holdout sets, performance metrics for each model’s prediction on each holdout set are fed into a decision function to select an unbiased model. The decision function helps minimize the poorest performance of each model across all holdout sets in order to counteract the possibility of choosing a model that overfits or underfits the holdout sets. Not only does this process improve forecast accuracy, but it also reduces computation time by only requiring the creation of a fixed number of proposed forecasting models.
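A minimal sketch of that kind of decision function — judging each candidate model by its worst error across consecutive, temporally independent holdout windows and picking the minimax winner. The forecasters and error metric below are hypothetical stand-ins, not the Nexosis implementation.

```python
import numpy as np

def select_model(models, series, holdout_len=12, n_holdouts=3):
    """Pick the model whose worst MAE over consecutive holdout windows is smallest."""
    worst = {}
    for name, fit_and_forecast in models.items():
        errors = []
        for i in range(n_holdouts, 0, -1):
            split = len(series) - i * holdout_len
            train, test = series[:split], series[split:split + holdout_len]
            forecast = fit_and_forecast(train, holdout_len)
            errors.append(np.mean(np.abs(test - forecast)))
        worst[name] = max(errors)            # each model is judged by its poorest window
    return min(worst, key=worst.get), worst

# Hypothetical forecasters: last-value ("naive") and seasonal-naive
models = {
    "naive": lambda train, h: np.repeat(train[-1], h),
    "seasonal_naive": lambda train, h: np.tile(train[-12:], h // 12 + 1)[:h],
}
series = np.sin(np.arange(120) * 2 * np.pi / 12) + 0.1 * np.random.randn(120)
print(select_model(models, series))
```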
Talha Obaid, Email Security, Symantec at MLconf ATL 2017MLconf
A Machine Learning Approach for Detecting Malware:
The project is to improve the way we detect script-based malware using machine learning. Malware has become one of the most active channels for delivering threats like banking Trojans and ransomware. The talk is aimed at finding a new and effective way to detect malware. We started by acquiring both malicious and clean samples. Later we performed feature identification, building on top of an existing knowledge base of malware. Then we performed automated feature extraction. After a certain feature set was obtained, we teased out features that are categorical, interdependent or composite. We applied varying machine learning models, producing both binary and categorical outcomes. We cross-validated our results and re-tuned our feature set and our model until we obtained satisfying results with the fewest false positives. We concluded that not all the extracted features are significant; in fact, some features are detrimental to model performance. Once such features are factored out, the result is not only a better match but also a significant gain in performance.
LN Renganarayana, Architect, ML Platform and Services and Madhura Dudhgaonkar...MLconf
Lessons Learnt from building ML Products for enterprise SaaS:
Having spent the last 4+ years productizing ML powered enterprise products, we have learnt a lot! Join us to hear the stories of our stumbles (ahem learnings) in applying machine learning to solve business problems for Fortune 500 companies. Our hands-on experience has shaped our product strategy, ML platform design and organization’s operational principles. And the investments we made based on our learnings have helped us drastically improve our time to market for ML products. Come on by to hear the technical and organizational challenges (and some solutions) in building ML products for enterprise SaaS. Hopefully our learnings will be useful in your journey.
Bio: LN leads the architecture and design of Workday’s ML Platform and Services. He is all about building large scale distributed systems and data platforms. Currently his days (and some nights) are spent on solving the challenges in building ML products for Enterprise SaaS. LN’s career spans across HP, IBM Research, Symantec and now Workday. At Symantec, he was the architect and lead of a streaming platform that ingested and processed 2+ billions of events per day. As a Research Staff Member at IBM T.J. Watson Research Center, LN built optimizations for automatic parallelization, techniques for approximate computing, deployment automation for OpenStack, and analytics for large scale cloud services.
LN holds a Ph.D. in Computer Science from Colorado State University and has published more than 40 technical publications / patents. His work has received awards from ACM, IBM, and HP.
Bio: Madhura Dudhgaonkar is responsible for leading Workday’s search, data science and machine learning teams based in San Francisco. Her teams have spent ~4 years building machine learning products used by Fortune 500 companies. Her experience ranges from being a hands-on engineer to leading large engineering organizations. Madhura’s career spans across Sun Microsystems, Adobe and now Workday. During her career, she has been involved with building a variety of products – from developing the Java language to building a version 1.0 consumer product to building enterprise SaaS products.
She holds a bachelor’s degree in electronics and telecommunications and a master’s degree in math and computer science.
Madhura is originally from a small town in India and came to the United States to pursue her passion in technology. She currently calls San Francisco home, and despite nine years here, can’t get enough of its hilly charm, the diversity of people, culture, and experiences.
Ashrith Barthur, Security Scientist, H2o.ai, at MLconf 2017MLconf
Machine Learning Based Attack Vector Modeling for CyberSecurity:
Connections have behavioural patterns that are unique to protocols, loads, window sizes, bandwidth, and mainly the type of traffic. A CDN enterprise behaves completely differently than how a Cloud service company would behave and they both would be different from a corporation. This also means that attack vectors and attack landscapes are different in all these places. In this talk we speak about modeling different kinds of attacks and build a model that is able to identify these different kinds of attacks using ML.
Our method is to identify different profiles based on many variables that specifically but robustly identify attacks of different kinds. The variables are specific to the business, the network profile, and the traffic; they are both high-level (aggregate) and packet-level. This way the models pick up on constant variations in traffic, and we create machine learning models to identify these attacks. Using the power of H2O, these analyses are not limited to research, traffic analysis, and an "Oh, so that's what it was" moment. We actually deploy code, alongside existing IDS and IPS systems, as highly optimized, independent programs that can handle high throughputs at a rate of 1.2 million decisions per second, making this one of the fastest implementations of ML to identify, defend, and protect critical infrastructure that is potentially under threat.
Rahul Mehrotra, Product Manager, Maluuba at The AI Conference 2017MLconf
Rahul Mehrotra is a Product Manager at Maluuba, a Canadian AI company that’s teaching machines to think, reason and communicate with humans (acquired by Microsoft in January 2017). Based in the AI epicenter of Montréal, Maluuba applies deep learning techniques to solve complex problems in language understanding. Rahul works across Maluuba’s three research areas (Machine Comprehension, Dialogue Systems and Reinforcement Learning) and helps advance breakthrough research by providing real-world problems and use cases. Rahul leads product initiatives to bring cutting-edge academic research to robust product pipelines. Rahul holds a B.ASc in Systems Design Engineering from the University of Waterloo.
Building Literate Machines
Advances in AI research have led to great innovations based on image and voice recognition, and 2017 will see further advances in the field of language, including the creation of more literate machines: machines that not only comprehend and communicate with humans but also begin to model innate human-like skills.
In this talk, Rahul Mehrotra will explore how advances in deep and reinforcement learning are being applied to solve language understanding problems. You will gain a deeper understanding of the research fundamentals as well as implications and opportunities that language understanding AI services will bring. Rahul will outline how researchers are seeking to equip machines with higher level cognitive skills like common-sense reasoning, information seeking, transfer learning, and decision-making.
He will explain how these capabilities are being applied in enterprise, using practical examples across a range of business functions. These use cases are transformative.
To give just one example, knowledge workers and employees would no longer need to desperately search through an organization’s directories, repositories, emails, and other channels to find a specific document. Instead, the employee would communicate with an AI agent leveraging machine comprehension capabilities. The agent would be capable of answering the question in a security-compliant manner by having a deep understanding of the contents of the organization’s documents instead of simply retrieving based on keywords.
The talk will provide the audience with key takeaways on the underlying research as well as the current and future applications of language understanding AI in the enterprise.
Tim Chartier, Chief Academic Officer, Tresata at MLconf ATL 2017MLconf
Beyond a Bit Fit
An emerging and important avenue of sport analytics is biometric data. From the casual athlete tracking steps and sleep to professional athletes tracking heart rate and impact data, biometric data can improve performance and prevent injury. What can we learn from biometric data? How can it aid athletes and coaches? How can you be a bit fitter by analyzing a body’s data? This talk will discuss the data, analysis and insights available and evolving in sports analytics of biometric data.
Jessica Rudd, PhD Student, Analytics and Data Science, Kennesaw State Univers...MLconf
Application of Support Vector Machine Modeling and Graph Theory Metrics for Disease Classification:
Disease classification is a crucial element of biomedical research. Recent studies have demonstrated that machine learning techniques, such as Support Vector Machine (SVM) modeling, produce similar or improved predictive capabilities in comparison to the traditional method of Logistic Regression. In addition, it has been found that social network metrics can provide useful predictive information for disease modeling. In this study, we combine simulated social network metrics with SVM to predict diabetes in a sample of data from the Behavioral Risk Factor Surveillance System. In this dataset, Logistic Regression outperformed SVM with ROC index of 81.8 and 81.7 for models with and without graph metrics, respectively. SVM with a polynomial kernel had ROC index of 72.9 and 75.6 for models with and without graph metrics, respectively. Although this did not perform as well as Logistic Regression, the results are consistent with previous studies utilizing SVM to classify diabetes.
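As a rough illustration of the comparison described above (not the study's actual BRFSS pipeline or data), here is a minimal scikit-learn sketch that fits both a logistic regression and a polynomial-kernel SVM on synthetic data and compares them by ROC AUC; all names and parameters are placeholders.

```python
# Hedged sketch: compare Logistic Regression vs. polynomial-kernel SVM by ROC AUC
# on synthetic data. Illustrative only, not the study's pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for survey features plus simulated graph metrics.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm_poly": make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, probability=True)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    print(f"{name}: ROC AUC = {roc_auc_score(y_test, scores):.3f}")
```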
Malika Cantor, Operations Partner, Comet Labs at The AI Conference 2017MLconf
Malika is devoted to bringing great ideas to life. She is an Operations Partner at Comet Labs, a cross between a venture fund and experimental research lab that supports AI and robotics startups. She previously worked in investment banking, and oversaw the development and growth of software and hardware startups in the education, healthcare, and telecom fields, in Asia, Europe and North America. She graduated from the University of Cambridge and has an MBA from Tsinghua University and MIT Sloan.
A VC Perspective on AI
Claudia Perlich leads the machine learning efforts that power Dstillery’s digital intelligence for marketers and media companies. With more than 50 published scientific articles, she is a widely acclaimed expert on big data and machine learning applications, and an active speaker at data science and marketing conferences around the world.
Claudia is the past winner of the Advertising Research Foundation’s (ARF) Grand Innovation Award and has been selected for Crain’s New York’s 40 Under 40 list, Wired Magazine’s Smart List, and Fast Company’s 100 Most Creative People.
Claudia holds multiple patents in machine learning. She has won many data mining competitions and awards at Knowledge Discovery and Data Mining (KDD) conferences, and served as the organization’s General Chair in 2014.
Prior to joining Dstillery in 2010, Claudia worked at IBM’s Watson Research Center, focusing on data analytics and machine learning. She holds a PhD in Information Systems from New York University (where she continues to teach at the Stern School of Business), and an MA in Computer Science from the University of Colorado.
Abstract Summary:
Predictability and other Predicaments:
In the context of building predictive models, predictability is usually considered a blessing. After all, that is the goal: build the model that has the highest predictive performance. The rise of 'big data' has in fact vastly improved our ability to predict human behavior thanks to the introduction of much more informative features. However, in practice things are more differentiated than that. For many applications, the relevant outcome is observed for very different reasons. In such mixed scenarios, the model will automatically gravitate toward the scenario that is easiest to predict, at the expense of the others. This holds even if the predictable scenario is by far the less common or less relevant one. We present a number of applications where this happens: clicks on ads being performed 'intentionally' vs. 'accidentally', consumers visiting store locations vs. their phones pretending to be there, and finally customers filling out online forms vs. bots defrauding the advertising industry. In conclusion, the combination of different and highly informative features can have a significantly negative impact on the usefulness of predictive modeling.
Semantic Segmentation on Satellite Imagery - Rahul Bhojwani
This is an Image Semantic Segmentation project targeted on Satellite Imagery. The goal was to detect the pixel-wise segmentation map for various objects in Satellite Imagery including buildings, water bodies, roads etc. The data for this was taken from the Kaggle competition <https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection>.
We implemented the FCN, U-Net, and SegNet deep learning architectures for this task.
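For readers unfamiliar with these architectures, the sketch below shows a deliberately tiny U-Net-style encoder-decoder in PyTorch (one downsampling and one upsampling stage with a skip connection). It is a toy stand-in for illustration, not the networks used in this project.

```python
# Minimal U-Net-style encoder/decoder for pixel-wise segmentation (toy sketch).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=5):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        # 16 upsampled channels + 16 skip-connection channels go into the decoder.
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, num_classes, 1))  # per-pixel class logits

    def forward(self, x):
        skip = self.enc(x)
        mid = self.mid(self.down(skip))
        up = self.up(mid)
        return self.dec(torch.cat([up, skip], dim=1))

logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # -> (1, num_classes, 64, 64): a segmentation map per class
```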
PhD defence public presentation, Bayesian methods for inverse problems with point clouds: applications to single-photon lidar, ENSEEIHT, Toulouse, France
Deep Convolutional GANs - meaning of latent space - Hansol Kang
DCGAN not only applies a conv net to the GAN, but also finds meaning in the latent space.
Review of the DCGAN paper and a PyTorch-based implementation.
Review of issues raised in the VAE seminar.
My GitHub: https://github.com/messy-snail/GAN_PyTorch
[References]
https://github.com/znxlwm/pytorch-MNIST-CelebA-GAN-DCGAN
https://github.com/taeoh-kim/Pytorch_DCGAN
Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
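To make the latent-space discussion concrete, here is a minimal DCGAN-style generator in PyTorch (transposed convolutions mapping a latent vector z to a 64x64 image), roughly in the spirit of Radford et al.; the hyperparameters are illustrative and this is not the linked repository's code.

```python
# Minimal DCGAN-style generator: latent vector z -> 64x64 RGB image (sketch).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, base=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),     # 1x1  -> 4x4
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),  # 8x8  -> 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),      # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Interpolating between two latent vectors is one way to see that the latent
# space carries meaning (smoothly varying images rather than noise).
g = Generator()
z0, z1 = torch.randn(1, 100), torch.randn(1, 100)
images = torch.cat([g((1 - a) * z0 + a * z1) for a in torch.linspace(0, 1, 5)])
print(images.shape)  # (5, 3, 64, 64)
```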
Scalable and Efficient Algorithms for Analysis of Massive, Streaming Graphs - Jason Riedy
Graph-structured data in network security, social networks, finance, and other applications not only are massive but also under continual evolution. The changes often are scattered across the graph, permitting novel parallel and incremental analysis algorithms. We discuss analysis algorithms for streaming graph data to maintain both local and global metrics with low latency and high efficiency.
Adversarial Reinforced Learning for Unsupervised Domain Adaptation - taeseon ryu
Hello, this is the deep learning paper reading group. Today's paper review video covers 'Adversarial Reinforced Learning for Unsupervised Domain Adaptation', presented at WACV 2021.
Automating data classification requires a large amount of training data. Domain adaptation, which reuses a model trained on labeled data and applies it to a new domain, has therefore been attracting a lot of attention.
The paper has three main contributions.
First, it proposes a framework for unsupervised domain adaptation using a GAN, in which a reinforcement learning model is used to select the optimal feature pairs between the source and target domains.
Second, to find the most suitable features in the unlabeled target domain, it develops a policy that uses the correlation between source and target as the reward.
Finally, the proposed adversarial reinforcement learning model improves performance over the state of the art by searching for feature pairs that minimize the distance between the source and target domains and by learning to align the distributions of the two domains.
Lee Geun-bae from the Fundamentals team kindly provided a great deal of help with this detailed review of the paper!
Presenter: Hwanjun Song (PhD candidate, KAIST)
Date: August 2018
(Parallel Clustering Algorithm Optimization for Large-Scale Data Analytics)
Clustering is one of the most widely used methods in data analysis: it partitions a given dataset into groups based on similarity. However, because of its high computational complexity, clustering is rarely used for large-scale data analysis. To overcome this, much recent work applies distributed computing frameworks such as Hadoop and Spark, but optimizing existing clustering algorithms for a distributed environment is not easy. In particular, sacrificing accuracy for efficiency and load imbalance among workers are the two representative problems that arise when these algorithms are parallelized. This seminar focuses on the challenges of parallelizing DBSCAN, a representative clustering algorithm, and presents a new solution to them. In practice, the proposed method improves performance by up to 180x over state-of-the-art methods without any loss of accuracy.
This seminar covers the following paper, presented at SIGMOD 2018:
Song, H. and Lee, J., "RP-DBSCAN: A Superfast Parallel DBSCAN Algorithm Based on Random Partitioning," In Proc. 2018 ACM Int'l Conf. on Management of Data (SIGMOD), Houston, Texas, pp. 1173-1187, June 2018.
1. Background
- Concept of Clustering
- Concept of Distributed Processing (MapReduce)
- Clustering Algorithms (Focus on DBSCAN)
2. Challenges of Parallel Clustering
- Parallelization of Clustering Algorithm (Focus on DBSCAN)
- Existing Work
- Challenges
3. Our Approach
- Key Idea and Key Contribution
- Overview of Random Partitioning-DBSCAN
4. Experimental Results
5. Conclusions
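As a rough, hedged illustration of why naive parallelization of clustering is tricky (and emphatically not the RP-DBSCAN algorithm from the paper), the sketch below randomly partitions the data, runs scikit-learn's DBSCAN on each partition in parallel, and then stops; merging the partial clusterings consistently and without accuracy loss is exactly the challenge the paper addresses.

```python
# Naive "random partition + local DBSCAN" sketch. Merging the partial clusterings
# correctly is the hard part that RP-DBSCAN solves; this toy code stops before that step.
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

def local_dbscan(points, eps=0.5, min_samples=5):
    """Run plain DBSCAN on one partition and return its local labels."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)

if __name__ == "__main__":
    X, _ = make_blobs(n_samples=20000, centers=5, random_state=0)
    rng = np.random.default_rng(0)
    n_partitions = 4
    assignment = rng.integers(n_partitions, size=len(X))  # random partitioning
    partitions = [X[assignment == p] for p in range(n_partitions)]

    with Pool(n_partitions) as pool:
        local_labels = pool.map(local_dbscan, partitions)

    for p, labels in enumerate(local_labels):
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        print(f"partition {p}: {len(partitions[p])} points, {n_clusters} local clusters")
    # Note: local cluster ids are not comparable across partitions, and points whose
    # neighborhoods were split across partitions may be mislabeled as noise.
```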
How Do Gain and Discount Functions Affect the Correlation between DCG and User Satisfaction - Julián Urbano
We present an empirical analysis of the effect that the gain and discount functions have in the correlation between DCG and user satisfaction. Through a large user study we estimate the relationship between satisfaction and the effectiveness computed with a test collection. In particular, we estimate the probabilities that users find a system satisfactory given a DCG score, and that they agree with a difference in DCG as to which of two systems is more satisfactory. We study this relationship for 36 combinations of gain and discount, and find that a linear gain and a constant discount are best correlated with user satisfaction.
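For readers who want the quantities pinned down: DCG at depth k is the sum over ranks i = 1..k of gain(rel_i) multiplied by discount(i). The sketch below computes DCG under a few gain/discount choices, including the linear gain and constant discount the study found best correlated with satisfaction; it illustrates the definition only, not the study's code, and the relevance judgments are made up.

```python
# DCG with pluggable gain and discount functions (illustrative sketch).
import math

def dcg(relevances, gain, discount, k=None):
    """DCG@k = sum_i gain(rel_i) * discount(i), with rank i starting at 1."""
    rels = relevances[:k] if k else relevances
    return sum(gain(rel) * discount(i) for i, rel in enumerate(rels, start=1))

gains = {
    "linear": lambda rel: rel,
    "exponential": lambda rel: 2 ** rel - 1,
}
discounts = {
    "constant": lambda i: 1.0,
    "log2": lambda i: 1.0 / math.log2(i + 1),
}

ranked_relevances = [3, 2, 3, 0, 1, 2]  # hypothetical graded judgments for one ranking
for g_name, g in gains.items():
    for d_name, d in discounts.items():
        print(f"gain={g_name:11s} discount={d_name:8s} DCG@5 = {dcg(ranked_relevances, g, d, k=5):.3f}")
```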
Algorithm for efficiently and accurately updating PageRank as the graph changes from a stream of updates. Also includes needs from the upcoming GraphBLAS to support high-performance streaming graph analysis.
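For context, the sketch below is the standard batch power-iteration PageRank on a static graph; the streaming algorithm referred to above incrementally updates such scores as edges arrive rather than recomputing from scratch, and that incremental machinery is not shown here.

```python
# Batch PageRank by power iteration (baseline; a streaming algorithm would update
# these scores incrementally as the graph changes instead of recomputing).
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=100):
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    trans = np.where(out_degree[:, None] > 0,
                     adj / np.maximum(out_degree, 1)[:, None],
                     1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * trans.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj))
```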
Improving Hardware Efficiency for DNN Applications - Chester Chen
Speaker: Dr. Hai (Helen) Li is the Clare Boothe Luce Associate Professor of Electrical and Computer Engineering and Co-director of the Duke Center for Evolutionary Intelligence at Duke University
In this talk, I will introduce a few recent research spotlights from the Duke Center for Evolutionary Intelligence. The talk will start with the structured sparsity learning (SSL) method, which attempts to learn a compact structure from a bigger DNN to reduce computation cost. It generates a regularized structure with high execution efficiency. Our experiments on CPU, GPU, and FPGA platforms show, on average, a 3-5x speedup of convolutional layer computation for AlexNet. Then, the implementation and acceleration of DNN applications on mobile computing systems will be introduced. MoDNN is a local distributed system which partitions DNN models onto several mobile devices to accelerate computation. ApesNet is an efficient pixel-wise segmentation network that understands road scenes in real time and has achieved promising accuracy. Our prospects on the adoption of emerging technologies will be given at the end of the talk, offering the audience an alternative way of thinking about the future evolution and revolution of modern computing systems.
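As a small, hedged illustration of the general idea behind structured sparsity (and not the exact SSL formulation from the talk), the sketch below adds a group-lasso penalty over whole convolutional filters in PyTorch, which encourages entire filters to shrink toward zero so they can later be removed.

```python
# Simplified group-sparsity (group lasso) penalty over convolutional filters,
# in the spirit of structured sparsity learning; illustrative sketch only.
import torch
import torch.nn as nn

def filter_group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    """Sum of L2 norms of each output filter: drives whole filters toward zero."""
    weight = conv.weight  # shape: (out_channels, in_channels, kH, kW)
    return weight.flatten(start_dim=1).norm(dim=1).sum()

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
x = torch.randn(8, 3, 32, 32)
task_loss = model(x).pow(2).mean()  # placeholder standing in for the real task loss
reg = sum(filter_group_lasso(m) for m in model.modules() if isinstance(m, nn.Conv2d))
loss = task_loss + 1e-4 * reg  # the coefficient controls how aggressively filters are pruned
loss.backward()
print(float(reg))
```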
Jamila Smith-Loud - Understanding Human Impact: Social and Equity Assessments...MLconf
Understanding Human Impact: Social and Equity Assessments for AI Technologies
Social and equity impact assessments have broad applications. They can be a useful tool for exploring and mitigating machine learning fairness issues, and they can be applied to product-specific questions as a way to generate insights about users, as well as about the broader impacts on society that result from deploying new and emerging technologies.
In this presentation, my goal is to advocate for and highlight the need for community consultation and external stakeholder engagement in developing a new knowledge base and understanding of the human and social consequences of algorithmic decision making, and to introduce principles, methods, and processes for these types of impact assessments.
Ted Willke - The Brain’s Guide to Dealing with Context in Language UnderstandingMLconf
The Brain’s Guide to Dealing with Context in Language Understanding
Like the visual cortex, the regions of the brain involved in understanding language represent information hierarchically. But whereas the visual cortex organizes things into a spatial hierarchy, the language regions encode information into a hierarchy of timescales. This organization is key to our uniquely human ability to integrate semantic information across narratives. More and more, deep learning-based approaches to natural language understanding embrace models that incorporate contextual information at varying timescales. This has not only led to state-of-the-art performance on many difficult natural language tasks, but also to breakthroughs in our understanding of brain activity.
In this talk, we will discuss the important connection between language understanding and context at different timescales. We will explore how different deep learning architectures capture timescales in language and how closely their encodings mimic the brain. Along the way, we will uncover some surprising discoveries about what depth does and doesn’t buy you in deep recurrent neural networks. And we’ll describe a new, more flexible way to think about these architectures and ease design space exploration. Finally, we’ll discuss some of the exciting applications made possible by these breakthroughs.
Justin Armstrong - Applying Computer Vision to Reduce Contamination in the Re...MLconf
Applying Computer Vision to Reduce Contamination in the Recycling Stream
With China’s recent refusal of most foreign recyclables, North American waste haulers are scrambling to figure out how to make on-shore recycling cost-effective in order to continue providing recycling services. Recyclables that were once being shipped to China for manual sorting are now primarily being redirected to landfills or incinerators. Without a solution, a nearly $5 billion annual recycling market could come to a halt.
Purity in the recycling stream is key to this effort as contaminants in the stream can increase the cost of operations, damage equipment and reduce the ability to create pure commodities suitable for creating recycled goods. This market disruption as a result of China’s new regulations, however, provides us the chance to re-examine and improve our current disposal & collection habits with modern monitoring & artificial intelligence technology.
Using images from our in-dumpster cameras, Compology has developed an ML-based process that helps identify, measure, and alert for contaminants in recycling containers before they are picked up, helping keep the recycling stream clean.
Our convolutional neural network flags potential instances of contamination inside a dumpster, enabling garbage haulers to know which containers have the wrong type of material inside. This allows them to provide targeted, timely education, and when appropriate, assess fines, to improve recycling compliance at the businesses and residences they serve, helping keep recycling services financially viable.
In this presentation, we will walk through our ML-based contamination measurement and scoring process by showing how Waste Management, a national waste hauler, has achieved a 57% reduction in contamination across nearly 2,000 containers over six months. This progress marks a significant stride toward financially viable recycling services.
Igor Markov - Quantum Computing: a Treasure Hunt, not a Gold RushMLconf
Quantum Computing: a Treasure Hunt, not a Gold Rush
Quantum computers promise a significant step up in computational power over conventional computers, but also suffer a number of counterintuitive limitations --- both in their computational model and in leading lab implementations. In this talk, we review how quantum computers compete with conventional computers and how conventional computers try to hold their ground. Then we outline what stands in the way of successful quantum ML applications.
Josh Wills - Data Labeling as Religious ExperienceMLconf
Data Labeling as Religious Experience
One of the most common places to deploy a production machine learning system is as a replacement for a legacy rules-based system that is having a hard time keeping up with new edge cases and requirements. I'll walk through the process and tooling we used to design, train, and deploy a model to replace a set of static rules we had for handling invite spam at Slack, talk about what we learned, and discuss some problems to solve in order to make these migrations easier for everyone.
Vinay Prabhu - Project GaitNet: Ushering in the ImageNet moment for human Gai...MLconf
Project GaitNet: Ushering in the ImageNet moment for human Gait kinematics
The emergence of the upright human bipedal gait can be traced back 4 to 2.8 million years, to the now extinct hominin Australopithecus afarensis. Fine-grained analysis of gait using the modern MEMS sensors found on all smartphones not only reveals a lot about a person's orthopedic and neuromuscular health status, but also carries enough idiosyncratic clues that it can be harnessed as a passive biometric. While the machine learning community has made many siloed attempts to model bipedal gait sensor data, these were done with small datasets, often collected in restricted academic environments. In this talk, we will introduce the ImageNet moment for human gait analysis by presenting 'Project GaitNet', the largest planet-scale motion-sensor-based human bipedal gait dataset ever curated. We'll also present the associated state-of-the-art results in classifying humans using novel deep neural architectures, and the related success stories we have enjoyed in transfer learning to disparate domains of human kinematics analysis.
Jekaterina Novikova - Machine Learning Methods in Detecting Alzheimer’s Disea...MLconf
Machine Learning Methods in Detecting Alzheimer’s Disease from Speech and Language
Alzheimer's disease affects millions of people worldwide, and it is important to predict the disease as early and as accurately as possible. In this talk, I will discuss the development of novel ML models that help distinguish healthy people from those who develop Alzheimer's, using short samples of human speech. As input to the model, features of different modalities are extracted from speech audio samples and transcriptions: (1) syntactic measures, such as production rules extracted from syntactic parse trees, (2) lexical measures, such as features of lexical richness and complexity and lexical norms, and (3) acoustic measures, such as standard Mel-frequency cepstral coefficients. I will present an ML model that detects cognitive impairment by reaching agreement among modalities. The resulting model achieves state-of-the-art performance in both a supervised and a semi-supervised manner, using manual transcripts of human speech. Additionally, I will discuss potential limitations of any fully automated speech-based Alzheimer's detection model, focusing mostly on the impact of a not-so-accurate automatic speech recognition (ASR) system on classification performance. To illustrate this, I will present experiments with controlled amounts of artificially generated ASR errors and explain how deletion errors affect Alzheimer's detection performance the most, due to their impact on the features of syntactic and lexical complexity.
Meghana Ravikumar - Optimized Image Classification on the CheapMLconf
Optimized Image Classification on the Cheap
In this talk, we anchor on building an image classifier trained on the Stanford Cars dataset to evaluate two approaches to transfer learning, fine tuning and feature extraction, and the impact of hyperparameter optimization on these techniques. Once we define the most performant transfer learning technique for Stanford Cars, we will double the size of the dataset through image augmentation to boost the classifier's performance. We will use Bayesian optimization to learn the hyperparameters associated with image transformations using the downstream image classifier's performance as the guide. In conjunction with model performance, we will also focus on the features of these augmented images and the downstream implications for our image classifier.
To both maximize model performance on a budget and explore the impact of optimization on these methods, we apply a particularly efficient implementation of Bayesian optimization to each of these architectures in this comparison. Our goal is to draw on a rigorous set of experimental results that can help us answer the question: how can resource-constrained teams make trade-offs between efficiency and effectiveness using pre-trained models?
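As a small, hedged sketch of the overall pattern (using the open-source scikit-optimize library as a stand-in optimizer, not the speakers' tooling), the code below runs Bayesian optimization over a few hypothetical augmentation hyperparameters; in practice the objective would train the downstream classifier on augmented data and return its validation error, whereas here it is a cheap placeholder.

```python
# Hedged sketch: Bayesian optimization over image-augmentation hyperparameters.
# The parameters and the objective are hypothetical placeholders.
from skopt import gp_minimize
from skopt.space import Real

def validation_error(params):
    rotation_deg, zoom, brightness = params
    # Placeholder objective: pretend moderate rotation/zoom/brightness works best.
    # A real objective would augment the data, train the classifier, and return val error.
    return (rotation_deg - 15) ** 2 / 900 + (zoom - 0.1) ** 2 + (brightness - 0.2) ** 2

search_space = [
    Real(0.0, 45.0, name="rotation_deg"),
    Real(0.0, 0.5, name="zoom"),
    Real(0.0, 0.5, name="brightness"),
]

result = gp_minimize(validation_error, search_space, n_calls=25, random_state=0)
print("best augmentation params:", result.x, "val error:", result.fun)
```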
Noam Finkelstein - The Importance of Modeling Data CollectionMLconf
The Importance of Modeling Data Collection
Data sets used in machine learning are often collected in a systematically biased way - certain data points are more likely to be collected than others. We call this "observation bias". For example, in health care, we are more likely to see lab tests when the patient is feeling unwell than otherwise. Failing to account for observation bias can, of course, result in poor predictions on new data. By contrast, properly accounting for this bias allows us to make better use of the data we do have.
In this presentation, we discuss practical and theoretical approaches to dealing with observation bias. When the nature of the bias is known, there are simple adjustments we can make to nonparametric function estimation techniques, such as Gaussian Process models. We also discuss the scenario where the data collection model is unknown. In this case, there are steps we can take to estimate it from observed data. Finally, we demonstrate that having a small subset of data points that are known to be collected at random - that is, in an unbiased way - can vastly improve our ability to account for observation bias in the rest of the data set.
My hope is that attendees of this presentation will be aware of the perils of observation bias in their own work, and be equipped with tools to address it.
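As a small, hedged illustration of one of the simple adjustments mentioned above (not the authors' method), the sketch below reweights observed points by the inverse of their known probability of being collected, so that a weighted mean recovers the population mean despite biased sampling.

```python
# Inverse-probability weighting: correct a biased sample when the collection
# probabilities are known (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=37.0, scale=1.0, size=100_000)  # e.g. a lab value

# Observation bias: higher values (sicker patients) are more likely to be measured.
p_observed = np.clip(0.05 + 0.2 * (population - 36.0), 0.02, 1.0)
observed = rng.random(population.size) < p_observed
sample, sample_p = population[observed], p_observed[observed]

naive_mean = sample.mean()                              # biased upward
ipw_mean = np.average(sample, weights=1.0 / sample_p)   # reweighted estimate
print(f"true {population.mean():.3f}  naive {naive_mean:.3f}  IPW-corrected {ipw_mean:.3f}")
```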
The Uncanny Valley of ML
Every so often, the conundrum of the Uncanny Valley re-emerges as advanced technologies evolve from clearly experimental products into refined, accepted technologies. We have seen its effects in robotics, computer graphics, and page load times. The debate over how to handle the new technology detracts from its benefits. When machine learning is added to human decision systems, a similar effect can be measured in increased response time and decreased accuracy. These systems include radiology, judicial assignments, bus schedules, housing prices, power grids, and a growing variety of applications. Unfortunately, the Uncanny Valley of ML can be hard to detect in these systems and can lead to degraded system performance when ML is introduced, at great expense. Here, we'll introduce key design principles for introducing ML into human decision systems to navigate around the Uncanny Valley and avoid its pitfalls.
Sneha Rajana - Deep Learning Architectures for Semantic Relation Detection TasksMLconf
Deep Learning Architectures for Semantic Relation Detection Tasks
Recognizing and distinguishing specific semantic relations from other types of semantic relations is an essential part of language understanding systems. Identifying expressions with similar and contrasting meanings is valuable for NLP systems that go beyond recognizing semantic relatedness and require identifying specific semantic relations. In this talk, I will first present novel techniques for creating the labelled datasets required for training deep learning models to classify semantic relations between phrases. I will then present various neural network architectures that integrate morphological features into integrated path-based and distributional relation detection algorithms, and demonstrate that this model outperforms state-of-the-art models in distinguishing semantic relations and is capable of efficiently handling multi-word expressions.
Anoop Deoras - Building an Incrementally Trained, Local Taste Aware, Global D...MLconf
Building an Incrementally Trained, Local Taste Aware, Global Deep Learned Recommender System Model
At Netflix, our main goal is to maximize our members’ enjoyment of the selected show by minimizing the amount of time it takes for them to find it. We try to achieve this goal by personalizing almost all the aspects of our product -- from what shows to recommend, to how to present these shows and construct their home-pages to what images to select per show, among many other things. Everything is recommendations for us and as an applied Machine Learning group, we spend our time building models for personalization that will eventually increase the joy and satisfaction of our members. In this talk we will primarily focus our attention on a) making a global deep learned recommender model that is regional tastes and popularity aware and b) adapting this model to changing taste preferences as well as dynamic catalog availability.
We will first go through some standard recommender system models that use Matrix Factorization and Topic Models and then compare and contrast them with more powerful and higher-capacity deep learning based models, such as sequence models that use recurrent neural networks. We will show what it entails to build a global model that is aware of regional taste preferences and catalog availability. We will show how models built on the simple Maximum Likelihood principle fail to do that. We will then describe one solution that we have employed in order to enable the global deep learned models to focus their attention on capturing regional taste preferences and the changing catalog. In the latter half of the talk, we will discuss how we do incremental learning of deep learned recommender system models. Why do we need to do that? Everything changes with time. Users' tastes change with time. What's available on Netflix and what's popular also change over time. Therefore, updating or improving recommendation systems over time is necessary to bring more joy to users. In addition to how we apply incremental learning, we will discuss some of the challenges we face involving large-scale data preparation, infrastructure setup for incremental model training, as well as pipeline scheduling. Incremental training enables us to serve fresher models trained on fresher and larger amounts of data. This helps our recommender system adapt nicely and quickly to catalog and taste changes, and improves overall performance.
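A minimal, hedged sketch of the warm-start pattern described above (generic PyTorch, not Netflix's pipeline): load the previous model checkpoint and continue training on the freshest interaction data instead of retraining from scratch. The model, path, and data below are placeholders.

```python
# Incremental (warm-start) training loop sketch: reload yesterday's weights and
# continue training on the newest data. Generic illustration, not a production pipeline.
import os
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
checkpoint_path = "recommender_latest.pt"  # hypothetical path
if os.path.exists(checkpoint_path):
    model.load_state_dict(torch.load(checkpoint_path))  # warm start from the previous run

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

fresh_features = torch.randn(1024, 32)   # stand-in for the newest interaction data
fresh_targets = torch.randn(1024, 1)

for _ in range(3):  # a few passes over only the fresh data
    optimizer.zero_grad()
    loss = loss_fn(model(fresh_features), fresh_targets)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), checkpoint_path)  # becomes tomorrow's warm start
```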
Vito Ostuni - The Voice: New Challenges in a Zero UI WorldMLconf
The Voice: New Challenges in a Zero UI World
The adoption of voice-enabled devices has seen an explosive growth in the last few years and music consumption is among the most popular use cases. Music personalization and recommendation plays a major role at Pandora in providing a daily delightful listening experience for millions of users. In turn, providing the same perfectly tailored listening experience through these novel voice interfaces brings new interesting challenges and exciting opportunities. In this talk we will describe how we apply personalization and recommendation techniques in three common voice scenarios which can be defined in terms of request types: known-item, thematic, and broad open-ended. We will describe how we use deep learning slot filling techniques and query classification to interpret the user intent and identify the main concepts in the query.
We will also present the differences and challenges regarding evaluation of voice powered recommendation systems. Since pure voice interfaces do not contain visual UI elements, relevance labels need to be inferred through implicit actions such as play time, query reformulations or other types of session level information. Another difference is that while the typical recommendation task corresponds to recommending a ranked list of items, a voice play request translates into a single item play action. Thus, some considerations about closed feedback loops need to be made. In summary, improving the quality of voice interactions in music services is a relatively new challenge and many exciting opportunities for breakthroughs still remain. There are many new aspects of recommendation system interfaces to address to bring a delightful and effortless experience for voice users. We will share a few open challenges to solve for the future.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by Rik Marselis and me from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
The Art of the Pitch: WordPress Relationships and Sales
Hanjun Dai, PhD Student, School of Computational Science and Engineering, Georgia Tech at MLconf ATL 2017
1. Graph Representation Learning
with Deep Embedding Approach
Hanjun Dai
Ph.D. student in School of Computational Science & Engineering
Georgia Institute of Technology
2. Graph applications
Drug discovery: Dai et al., structure2vec, ICML 2016
Recommendation: Dai et al., DeepCoevolve, Recsys DLRS 2016
Knowledge graph: Dai et al., VRN, in submission; Trivedi et al., Know-Evolve, ICML 2017
TSP, Maxcut, Vertex Cover: Dai et al., S2V-DQN, NIPS 2017
3. Outline
• Review of traditional approaches
• Our architecture
• Experiments on RNA and molecules
• Extension to social network and recommendation
• Application in graph combinatorial optimization
4. Review: RNA / Molecule property prediction
Application: High throughput virtual screening
[Figure: structured data (e.g., an RNA sequence C U U C A G, or a molecule) mapped to a target. Power Conversion Efficiency (PCE): regression problem. Binding affinity: binary classification problem. Handcrafted features, e.g., counts over k-mers (AA, AC, AU, AG, ..., GG, GU).]
5. Problem with handcrafted features
• Stage 1: Build a kernel matrix (entries such as k(χ2, χ3)), or a high-dimensional explicit bag-of-words feature map (e.g., k-mer counts over AA, AC, AU, AG, ..., GG, GU).
• Stage 2: Train a classifier on top.
Not scalable to millions of data points, and the constructed features are not aware of the task.
(Review: RNA / Molecule property prediction)
6. Review: Temporal Recommendation
who will do what and when?
[Figure: users (Christine, Alice, David, Jacob) x items (Towel, Shoe, Book) rating matrix R ≈ U V via matrix factorization, with the timeline divided into epochs t.]
7. Minimum vertex/set cover
Advertisers: influence maximization
Review: Graph Combinatorial Optimization
2-approximation for minimum vertex cover: repeat until all edges are covered, selecting the uncovered edge with the largest total degree.
A manually designed rule; can we learn one from data?
NP-hard problems
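The manually designed rule on this slide can be written down in a few lines; the sketch below implements that degree-based variant of the classic 2-approximation (pick an uncovered edge and add both endpoints to the cover) on a toy graph.

```python
# Greedy 2-approximation for minimum vertex cover: repeatedly pick an uncovered
# edge (here, the one with the largest total endpoint degree, as on the slide)
# and add both of its endpoints to the cover.
def greedy_vertex_cover(edges):
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    cover, uncovered = set(), set(edges)
    while uncovered:
        u, v = max(uncovered, key=lambda e: degree[e[0]] + degree[e[1]])
        cover.update((u, v))
        uncovered = {e for e in uncovered if e[0] not in cover and e[1] not in cover}
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (2, 4)]
print(greedy_vertex_cover(edges))  # every edge has at least one endpoint in the cover
```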
8. Outline
• Review of traditional approaches
• Our architecture
• Experiments on RNA and molecules
• Extension to social network and recommendation
• Application in graph combinatorial optimization
9. Intuitive understanding: local filters
Image: the filter applies to each local patch. Graph: the filter applies to each 1-hop neighborhood.
11. Outline
• Review of traditional approaches
• Our architecture
• Experiments on RNA and molecules
• Extension to social network and recommendation
• Application in graph combinatorial optimization
13. Experiment on Molecules
Dataset: Harvard Clean Energy Project; size: 2.3 million molecules; avg. # nodes: 28; avg. # edges: 33.
Task: predict the Power Conversion Efficiency (PCE, 0-12%) of organic solar panel materials.
Method | Test MAE | Test RMSE | # parameters
Mean predictor | 1.986 | 2.406 | 1
WL level-3 | 0.143 | 0.204 | 1.6 m
WL level-6 | 0.096 | 0.137 | 1378 m
DE-MF | 0.091 | 0.125 | 0.1 m
DE-LBP | 0.085 | 0.117 | 0.1 m
14. Experiment on Molecules
[Plot: MAE (0.085-0.28) vs. parameter number (0.1M-1000M) for Embedded MF, Embedded BP, Weisfeiler-Lehman level 6, and hashed WL level 6. Embedding reduces model size by 10,000x!]
[Dai, Dai & Song 2016]
16. Outline
• Review of traditional approaches
• Our architecture
• Experiments on RNA and molecules
• Extension to social network and recommendation
• Application in graph combinatorial optimization
19. Experiment on network analysis
[Plots: classification performance vs. fraction of training data (0.1-0.9) on Blogcatalog and Wikipedia, comparing deepwalk, node2vec, gcn, and s2v.]
Dataset | Blogcatalog | Wikipedia
# nodes | 10,312 | 4,777
# edges | 333,983 | 184,812
# classes | 39 | 40
Task | Group membership | POS tag
20. Represent
[Figure: a latent variable model (LVM) over a graph G = (𝒱, ℰ) with observed node features X1-X6 and latent variables H1-H9; for recommendation, nodes carry user/item raw features and edges carry interaction time/context along a timeline t0, t1, t2, t3.]
[Dai, et al. 2016]
Dynamic Graphs for Recommendation
• Unroll the interaction along the timeline: 1. the bipartite interaction graph; 2. the temporal ordering of events.
• Mini-batch training using truncated backpropagation through time (BPTT).
22. Outline
• Review of traditional approaches
• Our architecture
• Experiments on RNA and molecules
• Extension to social network and recommendation
• Application in graph combinatorial optimization
23. Learning graph opt: Motivation
Minimum vertex/set cover
• Branch and bound: exponential time complexity.
• Constructive approximation: results too weak.
Cannot learn from solved instances!
24. Learning graph opt: Motivation
• Data from the same / similar distribution: social networks (Barabási–Albert), road networks (fixed graph with evolving edge weights). (Images taken from Wikipedia.)
• Supervised learning? No such supervision.
• Reinforcement learning! Learning by trial and error.
25. Learning graph opt: RL background
[Mnih et al., Nature 2015]
• State S: current screen
• Action i: move your board left / right
• Reward R(t): score you earned at the current step
• Action value function Q(S, i): your predicted future total rewards
• Policy π(s): how to choose your action
Greedy policy: i* = argmax_i Q(S, i)
26. Learning graph opt: RL on graphs
[Dai, et al. NIPS 2017]
Minimum vertex cover as an integer program:
min Σ_{i ∈ 𝒱} x_i over x_i ∈ {0, 1}, subject to x_i + x_j ≥ 1 for all (i, j) ∈ ℰ
• State S: current partial solution
• Reward: r(t) = −1
• Action value function Q(S, i): expected negative future loss
• Greedy policy π(s): add the best node
27. Learning graph opt: action-value function
• Parameterize Action value function 𝑄(𝑆, 𝑖) with structure2vec
• Train with Deep Q-Learning (DQN)
Bellman optimality equation
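To make slides 26-27 concrete, here is a schematic sketch (not the paper's n-step S2V-DQN implementation, which parameterizes Q with structure2vec) of how a learned Q-function drives greedy node selection and how a 1-step Bellman target is formed for training. The toy Q-function and graph below are placeholders.

```python
# Schematic of Q-learning for greedy node selection (illustrative only; the real
# S2V-DQN uses structure2vec embeddings and an n-step Q-learning target).
import random

def greedy_action(q_fn, state, candidate_nodes, epsilon=0.1):
    """Pick the node with the highest predicted Q-value (epsilon-greedy)."""
    if random.random() < epsilon:
        return random.choice(candidate_nodes)
    return max(candidate_nodes, key=lambda v: q_fn(state, v))

def one_step_target(q_fn, reward, next_state, next_candidates, gamma=1.0):
    """Bellman target y = r + gamma * max_v' Q(s', v'); regress Q(s, v) toward y."""
    if not next_candidates:  # terminal state: all edges covered
        return reward
    return reward + gamma * max(q_fn(next_state, v) for v in next_candidates)

# Toy Q-function: prefer high-degree nodes not yet in the partial solution.
degree = {0: 2, 1: 3, 2: 3, 3: 3, 4: 1}
q_fn = lambda state, v: degree[v] - 0.1 * len(state)

state = set()                                   # current partial vertex cover
candidates = [v for v in degree if v not in state]
v = greedy_action(q_fn, state, candidates, epsilon=0.0)
y = one_step_target(q_fn, reward=-1.0, next_state=state | {v},
                    next_candidates=[u for u in degree if u not in state | {v}])
print("chosen node:", v, "training target:", y)
```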
28. Learning graph opt: quantitative comparison
Approximation ratio ≈ 1.
• A distribution of scale-free networks.
• Optimal approximated by running CPLEX for 1 hour.
29. Learning graph opt: quantitative comparison
[Plot: generalization to large instances; approximation ratio ranging from 1 to 1.007.]
• Train on small graphs with 50-100 nodes.
• Generalize not only to graphs from the same distribution, but also to larger graphs.
• Approximation ratio < 1.007.
30. Learning graph opt: time-solution tradeoff
[Plot: time vs. solution quality for Embedded MF, CPLEX 1st/2nd/3rd/4th feasible solutions, 2-approx, 2-approx +, and RNN. Embedding produces an algorithm with a good tradeoff!]
• Generate 200 Barabasi-Albert networks with 300 nodes.
• Let CPLEX produce the 1st, 2nd, 3rd, and 4th feasible solutions.
31. Learning graph opt: real-world data
http://snap.stanford.edu/netinf/#data
MemeTracker graph: 960 nodes and 5000 edges
Methods | Approximation Ratio
Optimal | 1.00 (473 nodes)
S2V-DQN | 1.002 (474 nodes)
MVCApprox-Greedy | 1.222 (578 nodes)
MVCApprox | 1.408 (666 nodes)
• Learning from sampled subgraphs.
32. Learning graph opt: learned strategy
• Learned a greedy algorithm which is different from known ones
33. Learning graph opt: other problems
• Maximum cut
• Traveling Salesman Problem [figure: optimal vs. found solution]
• Set cover: learned with a bipartite graph (set cover image taken from Wikipedia)
34. Thanks to my collaborators in this project
• Advisor:
Le Song
• Collaborators (alphabetical order):
Bo Dai, Bistra Dilkina, Elias Khalil, Rakshit Trivedi, Yichen Wang, Yuyu Zhang
Now is the time to put them together. We start with zero embeddings and then perform one step of local filtering. For example, to update mu_2, we use its neighbors' embeddings and its input features. Similarly, we obtain updates for all node embeddings. We iterate this process several times; intuitively, each iteration allows every embedding to capture more and more neighborhood information. In the last step, we merge the node embeddings to get a vector representation of the entire structured data. This model can be trained in an end-to-end fashion.
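A small numerical sketch of the update described in this note (simplified; the real structure2vec learns its parameterization end-to-end with the downstream task): start from zero embeddings, repeatedly combine each node's input features with its neighbors' current embeddings, then pool everything into one vector for the whole graph. The weights, graph, and features below are toy placeholders.

```python
# Simplified structure2vec-style embedding sketch: iterate neighborhood updates
# starting from zero embeddings, then pool node embeddings into one graph vector.
import numpy as np

def embed_graph(node_features, adjacency, dim=8, iterations=4, seed=0):
    rng = np.random.default_rng(seed)
    n, feat_dim = node_features.shape
    W1 = rng.normal(scale=0.1, size=(feat_dim, dim))  # maps raw input features
    W2 = rng.normal(scale=0.1, size=(dim, dim))       # maps aggregated neighbor embeddings
    mu = np.zeros((n, dim))                            # start from zero embeddings
    for _ in range(iterations):                        # each round widens the receptive field
        neighbor_sum = adjacency @ mu
        mu = np.maximum(0.0, node_features @ W1 + neighbor_sum @ W2)  # ReLU update
    return mu.sum(axis=0)                              # pool node embeddings -> graph vector

adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
node_features = np.eye(4)  # toy one-hot node features
print(embed_graph(node_features, adjacency))
```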
If we unroll the interactions between users and items, we get a dependency graph that looks like this. Here the dependency comes from the assumption that a user's or item's latent representation changes over time, after each interaction event. We can then abstract it as a directed graph, shown on the right-hand side.
Different from the previous scenario, here we get a single gigantic graph, so we apply stochastic training with truncated backpropagation through time, which is commonly used for recurrent neural networks.