Writing Node.js applications comes easily to JavaScript developers. Understanding what happens under the hood of Node.js, however, can be intimidating, yet it is vital for web developers.
Indeed, when you try to learn Node.js, most tutorials cover the ecosystem, such as Express, Socket.IO, and PassportJS. Tutorials about the Node.js runtime itself are rare.
In this meetup, I want to shine a light on some advanced Node.js topics and help developers answer the questions an experienced Node.js developer is expected to answer. Understanding these topics is essential to making you a much more desirable developer. I will explore several topics, including the famous event loop, Node.js module patterns, and how dependencies actually work in Node.js.
I hope this meetup will help you feel more comfortable reading advanced Node.js code.
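As a taste of the event-loop behaviour a session like this covers, here is a minimal sketch (Node.js-compatible TypeScript; the ordering shown is standard Node.js behaviour, not material taken from the talk itself): synchronous code runs to completion first, then queued microtasks such as promise callbacks, and only then timer callbacks.

```typescript
// Records the order in which the three kinds of work actually run.
const order: string[] = [];

// Timer callback: queued for the timers phase of the event loop.
setTimeout(() => {
  order.push("timer");
  console.log(order.join(" -> ")); // prints: sync -> microtask -> timer
}, 0);

// Promise callback: queued on the microtask queue, which drains
// after the current script but before any timer fires.
Promise.resolve().then(() => order.push("microtask"));

// Synchronous code: runs immediately, before anything queued above.
order.push("sync");
```

Even with a 0 ms delay, the timer callback cannot preempt the synchronous script or the microtask queue, which is exactly the kind of ordering rule that surprises developers who have only used the ecosystem libraries.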
Exploring Session Context using Distributed Representations of Queries and Reformulations (Bhaskar Mitra)
Search logs contain examples of frequently occurring patterns of user reformulations of queries. Intuitively, the reformulation "san francisco" → "san francisco 49ers" is semantically similar to "detroit" → "detroit lions". Likewise, "london" → "things to do in london" and "new york" → "new york tourist attractions" can also be considered similar transitions in intent. The reformulations "movies" → "new movies" and "york" → "new york", however, are clearly different despite the lexical similarities between the two. In this paper, we study the distributed representations of queries learnt by deep neural network models, such as the Convolutional Latent Semantic Model, and show that they can be used to represent query reformulations as vectors. These reformulation vectors exhibit favourable properties, such as mapping semantically and syntactically similar query changes closer in the embedding space. Our work is motivated by the success of continuous space language models in capturing relationships between words and their meanings using offset vectors. We demonstrate a way to extend the same intuition to represent query reformulations.
Furthermore, we show that the distributed representations of queries and reformulations are both useful for modelling session context in query prediction tasks, such as query auto-completion (QAC) ranking. Our empirical study demonstrates that short-term (session) history context features based on these two representations improve the mean reciprocal rank (MRR) for the QAC ranking task by more than 10% over a supervised ranker baseline. Our results also show that using features based on both representations together achieves better performance than either of them individually.
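The offset-vector intuition behind the abstract can be sketched with toy numbers. The 2-d embeddings below are invented purely for illustration; the paper's actual representations come from trained models such as the Convolutional Latent Semantic Model. The idea is that a reformulation q1 → q2 is represented by the offset v(q2) - v(q1), and semantically similar reformulations should yield similar offsets.

```typescript
// Toy 2-d query embeddings (values invented for illustration only).
const embed: Record<string, [number, number]> = {
  "san francisco": [1.0, 0.0],
  "san francisco 49ers": [1.0, 1.0],
  "detroit": [2.0, 0.0],
  "detroit lions": [2.0, 1.0],
  "movies": [0.0, 2.0],
  "new movies": [0.5, 2.5],
};

// A reformulation q1 -> q2 is represented by the offset vector v(q2) - v(q1).
function reformVector(q1: string, q2: string): [number, number] {
  const [a, b] = embed[q1];
  const [c, d] = embed[q2];
  return [c - a, d - b];
}

// Cosine similarity between two offset vectors.
function cosine(u: [number, number], v: [number, number]): number {
  const dot = u[0] * v[0] + u[1] * v[1];
  return dot / (Math.hypot(u[0], u[1]) * Math.hypot(v[0], v[1]));
}

const teamA = reformVector("san francisco", "san francisco 49ers"); // [0, 1]
const teamB = reformVector("detroit", "detroit lions");             // [0, 1]
const other = reformVector("movies", "new movies");                 // [0.5, 0.5]

console.log(cosine(teamA, teamB).toFixed(2)); // 1.00 — same shift in intent
console.log(cosine(teamA, other).toFixed(2)); // 0.71 — a different shift
```

With real learned embeddings the offsets are high-dimensional and noisier, but the same comparison (cosine similarity between reformulation vectors) is what lets similar intent transitions cluster together.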
Paper: http://research.microsoft.com/apps/pubs/default.aspx?id=244728
Invited talk at the SIGNLL Conference on Computational Natural Language Learning (CoNLL 2017), by Chris Dyer (DeepMind / CMU), 3 August 2017, Vancouver, Canada.
Deep Qualia: Philosophy of Statistics, Deep Learning, and Blockchain
Deep learning: What is it, why is it important, and what do I need to know?
The aim of this talk is to discuss deep learning as an advanced computational method and to consider its philosophical implications. Computing is a fundamental model through which we understand more about ourselves and the world. We think that reality is composed of patterns, which can be detected by machine learning methods.
Deep learning is a complexity optimization technique in which algorithms learn from data by modeling high-level abstractions and assigning probabilities to nodes as they characterize the system and make predictions. An important challenge in deep learning is that these methods work well in certain domains (image, speech, and text recognition), but we do not have a good explanation of why, which impedes wider application of these solutions.
Another recent advance in computational methods is blockchain technology, which allows the secure transfer of assets and information, and the automated coordination of operations via a trackable remunerative ledger and smart contracts (automatically executing Internet-based programs).
This talk looks at how deep learning technology, particularly as coupled with blockchain systems, might be used to produce a new kind of global computing platform. The goal is for blockchain deep learning systems to address higher-dimensional computing challenges that require learning and dynamic response in domains such as economics and financial risk, epidemiology, social modeling, public health (cancer, aging), dark matter, atomic reactions, network-modeling (transportation, energy, smart cities), artificial intelligence, and consciousness.
Technological Unemployment and the Robo-Economy (Melanie Swan)
Technological unemployment (jobs outsourced to technology) is coming, and the challenge is to steward an orderly and beneficial transition to more intense human-technology collaboration.
Construisons ensemble le chatbot bancaire de demain ! ("Let's build tomorrow's banking chatbot together!") (LINAGORA)
Here are the slides prepared for our collaborative meetup of Thursday, 9 November 2017: "Construisons ensemble le chatbot bancaire de demain !" ("Let's build tomorrow's banking chatbot together!")
After publishing its study on chatbots in the banking ecosystem, "ChatBots et intelligence artificielle arrivent dans les banques : y êtes-vous préparé(e) ?" ("Chatbots and artificial intelligence are coming to banks: are you prepared?"), LinDA, the digital agency of the LINAGORA group, ran a co-design workshop on the banking chatbot of tomorrow.
This free ideation workshop was an opportunity to imagine, together with several participants from the banking world, the best conversational-agent solution for their bank.
Our facilitators, Christophe Clouzeau (UX Digital Strategist) and Jean-Philippe Mouton (Head of Digital Consulting), applied UX design methods used with our clients and by innovative startups.
Vectorland: Brief Notes from Using Text Embeddings for Search (Bhaskar Mitra)
(Invited talk at Search Solutions 2015)
A lot of recent work in neural models and “Deep Learning” is focused on learning vector representations for text, images, speech, entities, and other nuggets of information. From word analogies to automatically generating human-level descriptions of images, text embeddings have become a key ingredient in many natural language processing (NLP) and information retrieval (IR) tasks.
In this talk, I will present some personal lessons from working on (neural and non-neural) text embeddings for IR, and highlight a few key recent insights from the broader academic community. I will talk about the affinity of certain embeddings for certain kinds of tasks, and how the notion of relatedness in an embedding space depends on how the vector representations are trained. The goal of this talk is to encourage everyone to think about text embeddings as more than just the output of a “black box” machine learning model, and to highlight that the relationships between different embedding spaces are about as interesting as the relationships between items within an embedding space.