
Techniques to personalize conversations for virtual assistants

Technology Executive, Innovator and Entrepreneur
Feb. 12, 2020

  1. Please reach out to info@voicy.ai for any of your needs. AI/ML/DL/DRL consulting (est. 2015, 10 patents). Chatbot: eCommerce, physical retail, banking assistant. Search: search ranking, query understanding. Conversational search: eCommerce mobile app. Vision: fashion outfits, similar dresses. Machine learning: forecasting, fraud detection. Deep learning: personalization, ranking. Deep reinforcement learning: pricing, marketing. Vision QA: robotics. Imitation learning: digital twins on devices.
  2. Personalization is: a process that changes the functionality, interface, information access and content, or distinctiveness of a system to increase its personal relevance to an individual or a category of individuals.
Marketing/e-commerce:
a. "Personalization is the combined use of technology and customer information to tailor electronic commerce interactions between a business and each individual customer."
b. "Personalization is about building customer loyalty by building a meaningful one-to-one relationship; by understanding the needs of each individual and helping satisfy a goal that efficiently and knowledgeably addresses each individual's need in a given context."
c. "Personalization is the capability to provide users, customers, partners, and employees with the most relevant web experience possible."
d. "Personalization is any behavior occurring in the interactions intended to contribute to the individuation of the customer."
e. An enterprise, process, or ideology in which personalized products and services are integrated and implemented throughout the organization, including all points of sale; other points of customer contact; and back-end activities and departments such as inventory, shipping, production, and finance.
Cognitive science:
f. Personalization is "a system that makes explicit assumptions about users' goals, interests, preferences and knowledge based on an observation of his or her behavior or a set of rules relating behavior to cognitive elements."
g. Personalization is the process of providing relevant content based on individual user preferences or behavior.
h. Personalization is the "explicit user model that represents user knowledge, goals, interests, and other features that enable the system to distinguish among different users."
i. Personalization is the understanding of "the user, the user's tasks, and the context in which the user accomplishes tasks and goals."
Social science:
j. Technology that reflects and enhances social relationships and social networks.
k. "Technology that provides experiences that bridge cultures, languages, currencies, and ideologies."
Computer science:
l. "Personalization is a toolbox of technologies and application features used in the design of an end-user experience."
m. "A personalization system is any piece of software that applies business rules to profiles of users and content to provide a variable set of user interfaces."
n. Machine-learning algorithms that are integrated into systems to accommodate individual users' unique patterns of interaction with the system.
o. "Computer networks that provide personalized features, services, and user-interface portability across network boundaries and between terminals."
p. A unifying platform embedded in any type of computing device that supports individualized information inflow and outflow.
q. Presenting customers with services that are relevant to their current locations, activities, and surrounding environments.
  3. Techniques to personalize conversations for virtual assistants
  4. Design Paradigms: Implementation: What, Whom, and Who?
  5. What: Content, User interface, Delivery channel, and Functionality
  6. What:
  7. What: User Interface. Multimodal dialog systems have high commercial value. Challenges: 1) automatically generating the right responses in appropriate medium forms; 2) jointly considering visual cues and side information while selecting product images; and 3) guiding response generation with multi-faceted and heterogeneous knowledge.
  8. What: User Interface
  9. What: User Interface
  10. What: Emotion Detection
  11. What: Personalized Avatars
  12. Whom: individual or a user group. A chatbot needs to present a coherent personality to gain confidence and trust from the user. Some features are:
Agreeableness: cheerful, trusting, amiable, humble, polite, helpful
Extroversion: affectionate, friendly, fun-loving, confident
Conscientiousness: reliable, consistent, perceptive
Openness: insightful, original, clever, daring
Neuroticism: no traits
  13. Whom: User Modeling via Stereotypes
  14. Whom: Personality Match Modelling
  15. Whom: Personalized Adaptation using Transfer Learning
  16. Whom: Face-to-Face Conversation (https://vimeo.com/248025147)
  17. Who: Implicit and Explicit; Data Pipeline
  18. Conclusion: Personalization is increasingly important for ensuring engagement. It has big commercial implications and is a quickly evolving space. There are great research challenges in personalized unconstrained natural language, multimodal interactions, and personalized avatars, and great systems challenges in building a real-time personalization pipeline.
  19. References:
Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators (https://arxiv.org/pdf/1805.08352.pdf)
The Technological Gap Between Virtual Assistants and Recommendation Systems (https://arxiv.org/pdf/1901.00431.pdf)
Conversational Recommender System (https://arxiv.org/pdf/1806.03277.pdf)
Towards Deep Conversational Recommendations (https://arxiv.org/pdf/1812.07617v2.pdf)
Multimodal Dialog System: Generating Responses via Adaptive Decoders (https://liqiangnie.github.io/paper/fp349-nieAemb.pdf)
Recommendations in Dialogue Systems (thesis) (https://escholarship.org/uc/item/4rs1s3ms)
Making Personalized Recommendation through Conversation: Architecture Design and Recommendation Methods (https://www.aaai.org/ocs/index.php/WS/AAAIW18/paper/viewFile/17221/15647)
The Personalization of Conversational Agents in Health Care: Systematic Review (https://www.jmir.org/2019/11/e15360)
Personalizing a Dialogue System With Transfer Reinforcement Learning (https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16104)
Reinforcement Learning for Personalized Dialogue Management (https://arxiv.org/pdf/1908.00286.pdf)
Investigating Deep Reinforcement Learning Techniques in Personalized Dialogue Generation (https://epubs.siam.org/doi/pdf/10.1137/1.9781611975321.71)
How to Personalize Chatbots: 3-Step Personalization Model (https://chatbotslife.com/how-to-personalize-chatbots-3-step-personalization-model-3385c803580)
Chatbot Personalities Matters (https://conversations2018.files.wordpress.com/2018/10/conversations_2018_paper_11_preprint1.pdf)
Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation (https://www.ijcai.org/proceedings/2018/0595.pdf)
Developing a Design Guide for Consistent Manifestation of Conversational Agent Personalities (https://iasdr2019.org/uploads/files/Proceedings/te-f-1175-Kim-H.pdf)
User Modeling via Stereotypes (https://www.cs.utexas.edu/~ear/CogSci.pdf)
Ubiquitous User Modeling (http://www.it.usyd.edu.au/~judy/Homec/Pubs/2012_Ubiquitous_User_Modeling.pdf)
Top AI Research Papers (https://www.topbots.com/most-important-conversational-ai-research/)
Training Millions of Personalized Dialogue Agents (https://arxiv.org/abs/1809.01984)
Animating an Autonomous 3D Talking Avatar (https://arxiv.org/abs/1903.05448)
A Face-to-Face Neural Conversation Model (https://arxiv.org/abs/1812.01525)
Systems and methods for virtual agents to help customers and businesses (https://patents.google.com/patent/US20170148073A1/en)
Advanced techniques to improve content presentation experiences for businesses and users (https://patents.google.com/patent/US20190139092A1)
Personalizing Netflix With Streaming Datasets (https://qconnewyork.com/ny2017/system/files/presentation-slides/qcon_ny_2017-_personalizing_netflix_with_streaming_datasets_1.pdf)
What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems (http://people.sunyit.edu/~krieseg/Scrapbook/data/20111105152908/contentserver.asp)

Editor's Notes

  1. Personalization is: the process of making something suitable for the needs of a particular person [6]. When applied specifically to digital technologies, personalization can be defined as: a process that changes the functionality, interface, information access and content, or distinctiveness of a system to increase its personal relevance to an individual or a category of individuals [7]. A recent interdisciplinary review study proposed a framework to characterize personalization along three dimensions: (1) what is personalized (ie, content, user interface, delivery channel, and functionality); (2) for whom is it personalized (either a specific individual or a user group, eg, elderly women); and (3) how automated is the personalization (how the information needed for user modelling is collected) [7]
  2. http://people.sunyit.edu/~krieseg/Scrapbook/data/20111105152908/contentserver.asp What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems.
The scheme is constructed along three dimensions of implementation implicit in the previous section: (a) the aspect of the information system that is manipulated to provide personalization (what is personalized), (b) the target of personalization (to whom to personalize), and (c) who does the personalization (i.e., the user or the system).
This classification scheme draws on several previous classification systems. Blom [6] distinguished three motivations to personalize: to access information, to accomplish work goals, and to accommodate individual differences. Rossi et al. [26] made a distinction between base information and behavior, what the user perceives and how the user perceives. This framework is largely concerned with system-level elements such as personalization for links, navigation structure, and navigation context. Instone [13] and Wu et al. [27] classified personalization on e-commerce Web sites into a two-by-two grid with implicit versus explicit personalization on one dimension and Web content versus Web interface on the other dimension.
In terms of the first dimension, what is personalized, we can distinguish four aspects of IS that can be personalized: the information itself (content), how the information is presented (user interface), the media through which information is delivered (channel/information access), and what users can do with the system (functionality). These represent the basic elements of IS that can be manipulated in a personalization system to make the system more personally relevant to the user. This dimension focuses on the particular parts of the system that deliver personalization to the user.
The second dimension, the target of personalization, can be either a category of individuals or a specific individual.
One option is to implement personalization for a particular category of user such as women, single-child families, or members of a club. Insofar as an individual user identifies with this category, he or she is likely to perceive that the system is personalized for them. Another option is to design systems to adapt and cater to the needs of a single user. Individuated personalization is targeted to a specific individual, and its goal is to deliver goods, services, or information unique to each individual as an individual.
Research on social identity [28, 29] has shown that people may think of themselves either as members of a social group (a category) or as individuals, depending on the social cues available in a particular context. Furthermore, research has indicated that people react differently when they are focused on their unique identity as an individual (individuated) as opposed to when their focus is on their identity as members of a social group (categorized). When people focus on category membership, their motivation revolves around the values and concerns of the social group; they are more influenced by group norms than by individual considerations; they tend to make judgments based on perceived group standards; and they may stereotype members of outgroups, groups they view as opposed or different from their own. When people are individuated, their motivation is largely driven by their particular individual needs; they are not as strongly influenced by norms but make decisions on an individual basis, and they are more likely to see others as individuals as well and not as members of other social groups. Personalization systems based on categories are likely to give categorical cues (e.g., "This site is specially designed for members of the Blackwell Club") and are likely to elicit quite different user reactions than are individuated systems.
Interestingly, the actual implementation of individuated personalization may be based on categorical analysis. If it is desirable to capture the unique individuality of a person, this can be defined as the unique intersection of a variety of categories representing the individual's important characteristics (e.g., female, Hispanic, professional, living in Idaho, 25 years old, one child, etc.), utilizing enough categories to define the individual uniquely. Although categories are used, this system functions for all intents and purposes as an individuated personalization system. In general, as this example illustrates, individuated personalization takes more system resources than categorical personalization.
The third dimension pertains to the degree to which personalization is automated. Personalization in which the user participates by making choices or providing information to guide how the system adapts is termed explicit personalization. Personalization that is done automatically by the system is termed implicit personalization. As we noted in the previous section, this distinction parallels the differentiation of system-initiated versus user-initiated personalization, adaptive versus adaptable systems, and static versus dynamic personalization. This distinction is an important one not only because it has implications for the techniques used to carry out personalization but also because users are likely to react differently to a system they know they control (explicit personalization) and one that seems to have a life of its own and adapts to them of its own accord (implicit personalization) [18]. Research has suggested that people react to systems that display agency on the same basis as they respond to other human beings, whereas a system that is dependent on human input, and thus clearly responsive rather than proactive, is more likely to be viewed as nonhuman. Hence, implicit personalization would be expected to affect users differently than would explicit personalization.
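The explicit/implicit distinction can be illustrated with a minimal code sketch. Everything below (the profile dictionary, the function names, the click-log format) is a hypothetical illustration, not part of the cited paper:

```python
from collections import Counter

# Explicit personalization: user-initiated; store exactly what the user chose.
def explicit_update(profile: dict, key: str, value: str) -> dict:
    profile[key] = value
    return profile

# Implicit personalization: system-initiated; infer a preference from behavior.
def implicit_update(profile: dict, click_log: list) -> dict:
    if click_log:
        most_common, _ = Counter(click_log).most_common(1)[0]
        profile["inferred_category"] = most_common
    return profile

profile = {}
explicit_update(profile, "preferred_language", "es")      # the user told us
implicit_update(profile, ["shoes", "shoes", "jackets"])   # we observed behavior
print(profile)  # {'preferred_language': 'es', 'inferred_category': 'shoes'}
```

The asymmetry the text predicts shows up even in this toy: the explicit path is transparent to the user, while the implicit path "has a life of its own" and can surprise them.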
  3. https://chatbotslife.com/how-to-personalize-chatbots-3-step-personalization-model-3385c803580
  4. Architectural Instrumental Motive: To fulfill a human being’s needs for expressing himself/herself through the design of the built environment Motive: To fulfill a human being’s needs for efficiency and productivity Goals: To create a functional and delightful Web environment that is compatible with a sense of personal style Goals: To increase efficiency and productivity of using the system Strategy: Individualization Strategy: Utilization Means: Building a delightful Web environment and immersive Web experience Means: Designing, enabling, and utilizing useful, usable, user-friendly tools User model: Cognitive, affective, and socialcultural aspects of the user User model: Situated needs of the user Relational Commercial Motive: To fulfill a human being’s needs for socialization and a sense of belonging Motive: To fulfill a human’s beings needs for material and psychic welfare Goals: To create a common, convenient platform for social interaction that is compatible with the individual’s desired level of privacy Goals: To increase sales and to enhance customer loyalty Strategy: Mediation Strategy: Segmentation Means: Building social interactions and interpersonal relationships Means: Differentiating product, service, and information User model: Social context and relational aspects of the user User models: User preference or demographic profiling; user online behavior and user purchasing history http://people.sunyit.edu/~krieseg/Scrapbook/data/20111105152908/contentserver.asp What Is Personalization? Perspectives on the Design and Implementation of Personalization in Information Systems
  5. Conversational Recommender System (https://arxiv.org/pdf/1806.03277.pdf). Several aspects are important in the process. First, how to understand the user's intention correctly. Second, how to make sequential decisions and take appropriate actions in each turn. Third, how to make personalized recommendations in order to maximize user satisfaction.
Figure 1 presents the overview of our proposed framework. At a time step in the dialogue, the user utters "I want to find a Bar". The framework calls the belief tracker to convert the utterance into a vector representation or "belief"; then the belief is sent to the policy network to make a decision. For example, the policy network may decide to request the city information next. Then the agent may respond with "Which city are you in?", and gets a reward, which is used to train the policy. A different decision is to make a recommendation; then the agent calls the recommender system to get a list of items personalized for the user. We introduce each component and the relationships among them in more detail in the following sections.
For the NLU module, we train a deep belief tracker to analyze a user's current utterance based on context and extract the facet values of the targeted item from the user utterance. Its output is used to update the current user intention, which is represented as a user query that is a set of facet-value pairs about the target. The user query will be used by both the dialogue manager and the recommender system.
For the DM module, we train a deep policy network that decides which machine action to take at each turn given the current user query and long-term user preferences learned by the recommender system. The action could be asking the user for information about a particular facet or recommending a list of products. The deep policy network selects an action that maximizes the expected reward over the entire conversation session.
When the user query collected so far is sufficient to identify the user’s information need, the optimal action usually is recommending a list of items that is personalized for the user. When the user query collected is not sufficient, the optimal action usually is asking for more information. The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user’s goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal FOOD=ITALIAN has been expressed in ‘I’m looking for good pizza’. To perform belief tracking, the NBT model iterates over all candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user. https://arxiv.org/pdf/1606.03777.pdf
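The NBT loop described above (iterate over every candidate slot-value pair from the ontology and decide which ones the user just expressed) can be sketched as follows. A keyword matcher stands in for the neural scorer, and the ontology, cue lists, and utterances are all hypothetical:

```python
# Toy belief tracker: iterate over all candidate slot-value pairs defined by
# the ontology and decide which ones the current utterance expresses.
ONTOLOGY = {
    "food": ["italian", "chinese", "mexican"],
    "price": ["cheap", "moderate", "expensive"],
}

# Stand-in for the neural decision: surface keywords that signal each value,
# so "pizza" can imply FOOD=ITALIAN as in the paper's example.
KEYWORDS = {
    ("food", "italian"): ["italian", "pizza", "pasta"],
    ("food", "chinese"): ["chinese", "dim sum"],
    ("price", "cheap"): ["cheap", "inexpensive"],
}

def track_belief(utterance: str) -> dict:
    """Return the slot-value pairs detected in the utterance."""
    belief = {}
    text = utterance.lower()
    for slot, values in ONTOLOGY.items():
        for value in values:
            cues = KEYWORDS.get((slot, value), [value])
            if any(cue in text for cue in cues):
                belief[slot] = value
    return belief

print(track_belief("I'm looking for good pizza, something cheap"))
# {'food': 'italian', 'price': 'cheap'}
```

The real NBT scores each candidate pair with a neural network conditioned on the preceding system acts; the control flow over the ontology is the part shown here.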
  6. Conversational Recommender System (https://arxiv.org/pdf/1806.03277.pdf). When trying to buy products on an e-commerce website, users often navigate the product space through faceted search [31][6][24]. Motivated by this, and in order to assist users in finding the item they want in conversation, it is crucial that the system understands which values the user has provided for product facets, and represents the user utterances as a semi-structured query. We introduce a Belief Tracker module similar to [5] to extract facet-value pairs from user utterances during the conversation, and maintain the facet-value pairs as the memory state (i.e. user query) of the agent. In this paper, we view the product facet (or attribute, metadata) f along with its specific value v as a facet-value pair (f, v). Each facet-value pair represents a constraint on the items. For example, (color, red) is a facet-value pair which constrains the items to be red in color. The network structure of the belief tracker is shown in the lower part of Figure 2. We train a belief tracker for each facet of the items. The belief tracker takes the current and past user utterances as input, and outputs a probability distribution across all the possible values of a facet at the current time point.
The structure of the recommendation model is shown in the upper-left part of Figure 2. Let U denote the users and I the items. For M users and N items in the dataset, the users and items are represented as the sets {u_1, u_2, ..., u_M} and {i_1, i_2, ..., i_N}. The input feature x is the concatenation of the one-hot encoded user/item vectors, where the only nonzero element in each vector corresponds to the index of the encoded user or item, and the dialogue belief:
x = u_m ⊕ i_n ⊕ s_t  (5)
u_m = (0, 0, ..., 1, ..., 0), with 1 at the m-th element  (6)
i_n = (0, 0, ..., 1, ..., 0), with 1 at the n-th element  (7)
where m and n denote that i_n is rated by u_m.
The output y_{m,n} can be either a rating score for explicit feedback or a 0-1 scalar for implicit feedback. We use a 2-way (K = 2) FM:
y_{m,n} = w_0 + Σ_{α=1}^{N} w_α x_α + Σ_{α=1}^{N} Σ_{β=α+1}^{N} ⟨v_α, v_β⟩ x_α x_β  (8)
⟨v_α, v_β⟩ = Σ_{κ=1}^{K} v_{α,κ} v_{β,κ}  (9)
where w_0, w_α, v_α and v_β are learnable parameters, α and β index the input vector x, and y_{m,n} is u_m's feedback on i_n. For rating prediction, stochastic gradient descent is used to minimize the L2 loss between the predicted rating score and the real rating score. The objective function scales linearly with the size of the data.
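Equation (8) is a standard 2-way factorization machine. A direct, unoptimized sketch of the prediction step follows; the input sizes and parameters are toy values, and a production FM would use the well-known linear-time reformulation of the pairwise term:

```python
import random

def fm_predict(x, w0, w, V):
    """2-way FM prediction (Eq. 8/9): w0 + sum_a w_a x_a
    + sum over pairs a < b of <V[a], V[b]> * x_a * x_b."""
    N = len(x)
    y = w0 + sum(w[a] * x[a] for a in range(N))
    for a in range(N):
        for b in range(a + 1, N):
            dot = sum(V[a][k] * V[b][k] for k in range(len(V[a])))  # Eq. 9
            y += dot * x[a] * x[b]
    return y

# Toy input: concatenated one-hot user, one-hot item, and belief vector,
# i.e. x = u_m ⊕ i_n ⊕ s_t with hypothetical dimensions.
random.seed(0)
N, K = 6, 2
x = [1, 0, 0, 1, 0.5, 0.0]
w0 = 0.1
w = [random.uniform(-1, 1) for _ in range(N)]
V = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(N)]
print(round(fm_predict(x, w0, w, V), 4))
```

The factorized pairwise weights ⟨v_α, v_β⟩ are what let the model generalize to user-item pairs never seen together, which is why FMs suit the sparse one-hot encoding above.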
  7. https://liqiangnie.github.io/paper/fp349-nieAemb.pdf: Multimodal Dialog System: Generating Responses via Adaptive Decoders
  8. https://liqiangnie.github.io/paper/fp349-nieAemb.pdf: Multimodal Dialog System: Generating Responses via Adaptive Decoders A multimodal dialog system between a shopper and a chatbot. The shopper expresses his requirements step by step as the dialog goes on. And the chatbot generates different responses according to the context.
  9. https://liqiangnie.github.io/paper/fp349-nieAemb.pdf: Multimodal Dialog System: Generating Responses via Adaptive Decoders . To be more specific, our proposed MAGIC model first embeds the historical utterances via a multimodal context encoder. It then understands users’ diverse intentions conveyed in the multimodal context by classifying them into 15 categories, such as greeting, giving criteria, and purchasing. According to our statistics over the MMD dataset, responses to these 15 kinds of intentions are in three variants without exception: general responses in texts, knowledge-enriched responses in texts, and the multimodal responses in the form of texts and images. In the light of this, MAGIC automatically judges the response type and its corresponding medium form by looking up our pre-defined tables with triplet entries (Intention Category, Response Types, Medium Forms). Hereafter, MAGIC employs the adaptive decoders to generate the desired response types, whereby the input of the decoders is the embedding of the historical utterances. In particular, 1) a simple recurrent neural network (RNN) is applied to generating general responses; while 2) a knowledge-aware RNN decoder embeds the multiform domain knowledge into a knowledge vector in a high-dimensional space via the Memory Network [32] and the Key-Value Memory Network [26], and then the knowledge vector is incorporated into a unified RNN decoder to produce more knowledge-enriched responses. And 3) the recommender model learns the product representations by jointly considering the textual attributes and the visual images via a neural model optimized by the max-margin loss. Ultimately, the recommender ranks the product candidates based on the similarity between the product representation and the embedding of the historical utterances
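The triplet lookup MAGIC performs, from intention category to response type and medium form, amounts to a dispatch table. A hedged sketch: the category names and table entries below are illustrative placeholders, not the paper's actual pre-defined table:

```python
# Hypothetical (Intention Category -> Response Type, Medium Form) table,
# in the spirit of MAGIC's pre-defined triplet entries.
DISPATCH = {
    "greeting":        ("general",            "text"),
    "giving_criteria": ("knowledge_enriched", "text"),
    "show_options":    ("multimodal",         "text+image"),
    "purchasing":      ("general",            "text"),
}

def select_decoder(intention: str):
    """Pick the response type and medium for a classified user intention,
    falling back to a plain text response for unknown intentions."""
    return DISPATCH.get(intention, ("general", "text"))

print(select_decoder("show_options"))   # ('multimodal', 'text+image')
print(select_decoder("unknown_intent")) # ('general', 'text')
```

In MAGIC the selected response type then routes the context embedding to the matching decoder (plain RNN, knowledge-aware RNN, or the recommender); the table itself is this simple.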
  10. Context-Dependent Sentiment Analysis in User-Generated Videos (local file: file:///C:/Users/nomul/Downloads/Context-Dependent_Sentiment_Analysis_in_User-Gener.pdf)
  11. Animating an Autonomous 3D Talking Avatar (https://arxiv.org/pdf/1903.05448.pdf). We observed from casual dyadic conversations that people are mainly in stances, fidgets, gestures and transitions (between stances). Stances are synonymous with idle (e.g. arm on waist, or body weight on one side), fidgets are ticks and small subtle gestures, while gestures are more functional. The final element is the transitions between the stances, such as shifting the body weight to the other side, or a hand moving from the waist to a shoulder. Creating all combinations of head motions with gestures, fidgets and weight shifts is not feasible, and we therefore break the motion space down into body layers. Specifically, we decompose the motions into three layers: body, arms, and head. Simply composing the layers by masking results in robotic and uncanny motions, because the dynamics of the other body parts is lost.
body → head, spine, legs
arms → head, spine, arms
head → head, spine
  12. A chatbot needs to present a coherent personality to gain confidence and trust from the user, with features as follows:
Agreeableness: cheerful, trusting, amiable, humble, polite, helpful
Extroversion: affectionate, friendly, fun-loving, confident
Conscientiousness: reliable, consistent, perceptive
Openness: insightful, original, clever, daring
Neuroticism: no traits
(https://conversations2018.files.wordpress.com/2018/10/conversations_2018_paper_11_preprint1.pdf)
  13. https://www.cs.utexas.edu/~ear/CogSci.pdf • In a system with a natural language front end, stereotypes could be triggered by the use of arbitrary words, phrases, or grammatical constructions. • In a system with a specific set of commands that the user can issue, stereotypes can be triggered by the use of particular commands. • Stereotypes could be triggered by any other information that the system has about the user. For example, his account number might indicate his status in some way.
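The three trigger mechanisms listed above can be sketched as simple rules over the user's input and account data. Everything below (the trigger phrases, stereotype names, and the account-number convention) is a hypothetical illustration of Rich's scheme, not taken from the paper:

```python
# Toy stereotype triggering: rules fire on words/phrases in natural-language
# input, on specific commands, or on other known facts about the user.
RULES = [
    ("phrase",  "regression test", "software_engineer"),
    ("phrase",  "p-value",         "statistician"),
    ("command", "/sudo",           "power_user"),
]

def trigger_stereotypes(utterance: str, account_number: str) -> set:
    active = set()
    text = utterance.lower()
    for kind, pattern, stereotype in RULES:
        if kind == "phrase" and pattern in text:
            active.add(stereotype)
        elif kind == "command" and text.startswith(pattern):
            active.add(stereotype)
    # Other user information, e.g. an account prefix indicating status.
    if account_number.startswith("PR-"):
        active.add("premium_customer")
    return active

print(trigger_stereotypes("My regression test keeps failing", "PR-1042"))
# a set containing 'software_engineer' and 'premium_customer'
```

Each activated stereotype would then contribute default assumptions to the user model until contradicted by direct evidence, which is the core of the stereotype approach.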
  14. https://www.ijcai.org/Proceedings/2018/0595.pdf Assigning Personality/Profile to a Chatting Machine for Coherent Conversation Generation : In this paper we define personality as a set of profile keys and values 2 and propose a model consisting of three key modules: a profile detector which detects whether a profile key and which key should be addressed, a bidirectional decoder that generates a response backward and forward from a selected profile value, and a position detector which predicts a proper word position at which a profile value can be replaced during the training of the decoder. Our model works as follows (see Figure 1): given a post, the profile detector will predict whether the profile should be used. If not, a general seq2seq decoder will be used to generate the response; otherwise, the profile detector will further select an appropriate profile key and its value. Starting from the selected profile value, a response will be generated forward and backward by the bidirectional decoder. To train the bidirectional decoder on generic dialogue data (see Figure 2), the position detector predicts a word position from which decoding should start given the selected profile value
  15. Personalizing a Dialogue System with Transfer Reinforcement Learning (https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16104). In this paper, we propose a PErsonalized Task-oriented diALogue (PETAL) system, a transfer reinforcement learning framework based on the POMDP for learning a personalized dialogue system. The PETAL system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. To achieve this goal, the PETAL system models personalized policies with a personalized Q-function, defined as the expected cumulative general reward plus the expected cumulative personal reward. The personalized Q-function can model differences between the source and target users and thus avoid the negative transfer problem brought by those differences. Experimental results on a real-world coffee-ordering dataset and simulated data show that the proposed PETAL system can choose optimal actions for different users and thus effectively improve dialogue quality in the personalized setting. As a future direction, we will investigate transferring knowledge from heterogeneous domains such as knowledge graphs and images.
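The reward decomposition described above can be written out. A sketch consistent with the prose; the decomposition into general plus personal reward is the paper's, while the specific symbols are our labels:

```latex
% Personalized Q-function for user u: expected cumulative general reward r^g
% plus expected cumulative personal reward r^p, over belief state b (POMDP),
% action a, and discount factor \gamma.
Q^{p}_{u}(b, a) \;=\;
  \mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}
  \left( r^{g}_{t} + r^{p}_{u,t} \right)
  \,\middle|\, b_{0} = b,\; a_{0} = a \right]
```

Because only the personal term r^p depends on the target user, the general term can be transferred from the source domain, which is how the decomposition limits negative transfer.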
  16. We adopt FACS [8] in this paper. In particular, we use 18 action units, each controlling a face muscle, as well as 3 dimensions to represent the 3D head pose. Compared to the SUE and FL, FACS not only captures subtle detail gestures, but also produces a highly interpretable gesture representation which makes animation simple and straightforward. We detect FACS from images using the off-the-shelf OpenFace software [2]. A Face-to-Face Neural Conversation Model categorizes gestures into six emotions: anger, disgust, fear, happiness, sadness and surprise. This is effective in encoding high-level emotion, but it is overly abstract for describing detailed gestures. Each emotion involves a combination of up to 6 muscle movements, making it difficult for face synthesis and animation. (https://arxiv.org/abs/1812.01525)
  17. https://qconnewyork.com/ny2017/system/files/presentation-slides/qcon_ny_2017-_personalizing_netflix_with_streaming_datasets_1.pdf