Context-aware recommender systems (CARS) adapt their recommendations to users’ specific situations. In many recommender systems, particularly those based on collaborative filtering, the contextual constraints may lead to sparsity: fewer matches between the current user context and previous situations. Our earlier work proposed an approach called differential context relaxation (DCR), in which different subsets of contextual features were applied in different components of a recommendation algorithm. In this paper, we expand on our previous work on DCR, proposing a more general approach — differential context weighting (DCW), in which contextual features are weighted. We compare DCR and DCW on two real-world datasets, and DCW demonstrates improved accuracy over DCR with comparable coverage. We also show that particle swarm optimization (PSO) can be used to efficiently determine the weights for DCW.
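As a rough illustration of the weighting idea in DCW (an illustrative sketch, not the paper's exact formula), a weighted-agreement score between two context tuples might look like this; the dimension names and weight values are assumptions:

```python
# A rough sketch of DCW's core idea: contexts need not match exactly;
# instead, agreement on each context dimension is weighted. The function
# form and weight values are illustrative assumptions, not the paper's formula.

def weighted_context_similarity(c1, c2, weights):
    """Weighted fraction of context dimensions on which c1 and c2 agree."""
    total = sum(weights.values())
    if total == 0:
        return 0.0
    agree = sum(w for dim, w in weights.items() if c1.get(dim) == c2.get(dim))
    return agree / total

current = {"time": "Weekday", "location": "Home", "companion": "Sister"}
past = {"time": "Weekend", "location": "Home", "companion": "Girlfriend"}
weights = {"time": 0.25, "location": 0.5, "companion": 0.25}
print(weighted_context_similarity(current, past, weights))  # 0.5: only location agrees
```

A learning procedure such as PSO would then tune the weight vector to maximize predictive accuracy on held-out data.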
[SOCRS2013] Differential Context Modeling in Collaborative Filtering – YONG ZHENG
Abstract: Context-aware recommender systems (CARS) try to adapt their recommendations to users' specific contextual situations. In many recommender systems, particularly those based on collaborative filtering (CF), the additional contextual constraints may lead to increased sparsity in the user preference data, and thus fewer matches between the current user context and previous situations. Our earlier work proposed two approaches to deal with this problem – differential context relaxation (DCR) and differential context weighting (DCW) – and we successfully examined them using user-based collaborative filtering (UBCF). In this paper, we unify DCR and DCW into one framework called differential context modeling (DCM). As a general framework, DCM can be applied to recommendation algorithms other than UBCF. We expand the application of DCM to two other CF approaches: item-based CF and the slope one recommender. Predictive performance is evaluated on two real-world data sets, and the experimental results demonstrate that applying DCM to those two algorithms improves predictive accuracy compared with our baselines: context-free CF algorithms and contextual pre-filtering algorithms.
[WI 2014] Context Recommendation Using Multi-label Classification – YONG ZHENG
Context-aware recommender systems (CARS) are extensions of traditional recommenders that also take into account the contextual conditions of the user to whom a recommendation is made. The recommendation problem is, however, still focused on recommending a set of items to a target user. In this paper, we consider the problem of recommending to a user the appropriate contexts in which an item should be selected. We believe that context recommenders can be used as another set of tools to assist users' decision making. We formulate the context recommendation problem and discuss the motivation behind and possible applications of the concept. We identify two general classes of algorithms to solve this problem: direct context prediction and indirect context recommendation. Furthermore, we present and evaluate several direct context prediction algorithms based on multi-label classification (MLC). Our experiments demonstrate that the proposed approaches outperform the baseline methods, and also that personalization is required to enhance the effectiveness of context recommenders.
Context-aware recommender systems (CARS) help improve the effectiveness of recommendations by adapting to users' preferences in different contextual situations. One approach to CARS that has been shown to be particularly effective is Context-Aware Matrix Factorization (CAMF). CAMF incorporates contextual dependencies into the standard matrix factorization (MF) process, where users and items are represented as collections of weights over various latent factors. In this paper, we introduce another CARS approach based on an extension of matrix factorization, namely, the Sparse Linear Method (SLIM). We develop a family of deviation-based contextual SLIM (CSLIM) recommendation algorithms by learning rating deviations in different contextual conditions. Our CSLIM approach is better at explaining the underlying reasons behind contextual recommendations, and our experimental evaluations over five context-aware data sets demonstrate that these CSLIM algorithms outperform the state-of-the-art CARS algorithms in the top-$N$ recommendation task. We also discuss the criteria for selecting the appropriate CSLIM algorithm in advance based on the underlying characteristics of the data.
[ECWEB2012] Differential Context Relaxation for Context-Aware Travel Recommend... – YONG ZHENG
Context-aware recommendation (CARS) has been shown to be an effective approach to recommendation in a number of domains. However, the problem of identifying appropriate contextual variables remains: using too many contextual variables risks a drastic increase in dimensionality and a loss of accuracy in recommendation. In this paper, we propose a novel treatment of context – identifying influential contexts for different algorithm components instead of for the whole algorithm. Based on this idea, we take traditional user-based collaborative filtering (CF) as an example, decompose it into three context-sensitive components, and propose a hybrid contextual approach. We then identify appropriate relaxations of contextual constraints for each algorithm component. The effectiveness of context relaxation is demonstrated by comparison of three algorithms using a travel data set: a context-ignorant approach, contextual pre-filtering, and our hybrid contextual algorithm. The experiments show that choosing an appropriate relaxation of the contextual constraints for each component of an algorithm outperforms strict application of the context.
[RecSys 2014] Deviation-Based and Similarity-Based Contextual SLIM Recommenda... – YONG ZHENG
Yong Zheng. "Deviation-Based and Similarity-Based Contextual SLIM Recommendation Algorithms". ACM RecSys Doctoral Symposium, Proceedings of the 8th ACM Conference on Recommender Systems (ACM RecSys 2014), pp. 437-440, Silicon Valley, CA, USA, Oct 2014 [Doctoral Symposium, Acceptance rate: 47%]
Recommender Systems from A to Z – Model Evaluation – Crossing Minds
The third meetup will be about evaluating different models for our recommender system. We will review strategies for checking whether a model is underfitting or overfitting. After that, we will present and analyze the losses that are typically used to train recommendation models. We will compare regression, classification, and rank-based losses, and discuss when it is convenient to use each one. Finally, we will cover the metrics typically used to evaluate the performance of different recommendation systems, and how to test that models are giving good results in production.
Recommender systems analyze patterns of user interest in products to provide personalized recommendations. They seek to predict the rating or preference that a user would give to an item. Some of the most successful realizations of latent factor models are based on matrix factorization...
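The latent factor prediction mentioned above can be sketched minimally: a rating estimate is the dot product of a user's and an item's latent vectors (real systems add bias terms and learn the factors from data; the random factors here are only placeholders):

```python
import numpy as np

# Minimal illustration of a latent factor prediction: a user's rating of an
# item is approximated by the dot product of their latent vectors. Real MF
# systems learn P and Q from observed ratings and add baseline/bias terms;
# the random values here are placeholders for illustration only.

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3
P = rng.normal(size=(n_users, k))   # user latent factors
Q = rng.normal(size=(n_items, k))   # item latent factors

def predict(u, i):
    """Estimated rating of item i by user u."""
    return float(P[u] @ Q[i])

print(predict(0, 2))
```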
(Gaurav Sawant & Dhaval Sawlani) BIA 678 Final Project Report – Gaurav Sawant
PROJECT REPORT
• Performed memory-based collaborative filtering with similarity measures such as cosine similarity and Pearson's r, and model-based matrix factorization using the Alternating Least Squares (ALS) method
• Studied the scalability of these methods on local machines and on Hadoop clusters
From the NYC Machine Learning meetup on Jan 17, 2013: http://www.meetup.com/NYC-Machine-Learning/events/97871782/
Video is available here: http://vimeo.com/57900625
Maximizing the Representation Gap between In-domain & OOD examples – Jay Nandy
Paper presented in ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning
Paper link: http://www.gatsby.ucl.ac.uk/~balaji/udl2020/accepted-papers/UDL2020-paper-134.pdf
Recommendation and Information Retrieval: Two Sides of the Same Coin? – Arjen de Vries
Status update on our current understanding of how collaborative filtering relates far more closely to information retrieval than usually thought. Includes work by Jun Wang and Alejandro Bellogín. This presentation was given at the SIKS PhD student course on computational intelligence, May 24th, 2013.
DetectoRS for Object Detection/Segmentation
On COCO test-dev, DetectoRS achieves state-of-the-art 55.7% box AP for object detection, 48.5% mask AP for instance segmentation, and 50.0% PQ for panoptic segmentation.
(2020.07)
A Multiscale Visualization of Attention in the Transformer Model – taeseon ryu
Hello, this is the Deep Learning Paper Reading Group. Today's uploaded paper review video covers "A Multiscale Visualization of Attention in the Transformer Model," presented at ACL 2019.
This paper presents a tool for visualizing, from multiple perspectives, the computations performed by the Transformer, which is currently riding a wave of popularity.
Along with a brief introduction to the Transformer, BERT, and GPT, Jiyoon Baek (백지윤) from the NLP team gave a detailed review of several use cases showing how this visualization can be applied.
https://telecombcn-dl.github.io/2017-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks, or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Finding Meaning in Points, Areas and Surfaces: Spatial Analysis in R – Revolution Analytics
Everything happens somewhere and spatial analysis attempts to use location as an explanatory variable. Such analysis is made complex by the very many ways we habitually record spatial location, the complexity of spatial data structures, and the wide variety of possible domain-driven questions we might ask. One option is to develop and use software for specific types of spatial data, another is to use a purpose-built geographical information system (GIS), but determined work by R enthusiasts has resulted in a multiplicity of packages in the R environment that can also be used.
A new similarity measurement based on Hellinger distance for collaborating fi... – Prabhu Kumar
This project proposes a similarity measure focused on recommendation performance under the cold start problem (the problem that occurs when recommending newly arrived items or users that have no history in the system) and is also well suited to sparse data sets.
This technique addresses the cold start problem in recommender systems and improves the performance of recommendations to users.
MediaEval 2015 - CERTH at MediaEval 2015 Synchronization of Multi-User Event ... – multimediaeval
This paper describes the results of our participation to the Synchronization of Multi-User Event Media Task at the MediaEval 2015 challenge. Using multiple similarity measures, we identify pairs of similar media from different galleries. We use a graph-based approach to temporally synchronize user galleries; subsequently we use time information, geolocation information and visual concept detection results to cluster all photos into different sub-events. Our method achieves good accuracy on considerably diverse datasets.
http://ceur-ws.org/Vol-1436/
http://www.multimediaeval.org
Slides for the 2016/2017 edition of the Data Mining and Text Mining Course at the Politecnico di Milano. The course is also part of the joint program with the University of Illinois at Chicago.
A Visual Exploration of Distance, Documents, and Distributions – Rebecca Bilbro
Machine learning often requires us to think spatially and make choices about what it means for two instances to be close or far apart. So which is best - Euclidean? Manhattan? Cosine? It all depends! In this talk, we'll explore open source tools and visual diagnostic strategies for picking good distance metrics when doing machine learning on text.
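A toy illustration of why the choice of metric matters, using made-up term-count vectors:

```python
import numpy as np

# Toy comparison of the metrics mentioned above on two made-up term-count
# vectors. Cosine similarity ignores magnitude (e.g., document length),
# while Euclidean and Manhattan distances do not -- one reason the "best"
# metric depends on the task.

a = np.array([3.0, 0.0, 1.0])
b = np.array([6.0, 0.0, 2.0])  # same direction as a, twice the magnitude

euclidean = float(np.linalg.norm(a - b))   # nonzero: the vectors differ in length
manhattan = float(np.abs(a - b).sum())     # also nonzero
cosine_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # ~1.0: same direction

print(euclidean, manhattan, cosine_sim)
```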
[RIIT 2017] Identifying Grey Sheep Users By The Distribution of User Similari... – YONG ZHENG
Yong Zheng, Mayur Agnani, Mili Singh. “Identifying Grey Sheep Users By The Distribution of User Similarities In Collaborative Filtering”. Proceedings of The 6th ACM Conference on Research in Information Technology (RIIT), Rochester, NY, USA, October, 2017
[Decisions2013@RecSys] The Role of Emotions in Context-aware Recommendation – YONG ZHENG
Context-aware recommender systems try to adapt to users' preferences across different contexts and have been proven to provide better predictive performance in a number of domains. Emotion is one of the most popular contextual variables, but few researchers have explored how emotions take effect in recommendations -- especially the usage of the emotional variables other than the effectiveness alone. In this paper, we explore the role of emotions in context-aware recommendation algorithms. More specifically, we evaluate two types of popular context-aware recommendation algorithms -- context-aware splitting approaches and differential context modeling. We examine predictive performance, and also explore the usage of emotions to discover how emotional features interact with those context-aware recommendation algorithms in the recommendation process.
A topic trend can be inferred from the usage of tags – we call this attention. Time-series analysis for tag prediction can reveal the evolution of attention flow. This slide deck takes political analysis as an example, applying time-series techniques to discover interesting patterns.
[HetRec2011@RecSys] Experience Discovery: Hybrid Recommendation of Student Act... – YONG ZHENG
The aim of the Experience Discovery project is to recommend extracurricular activities to high school and middle school students in urban areas. In implementing this system, we have been able to make use of both usage data and data drawn from a social networking site. Using pilot data, we are able to show that very simple aggregation techniques applied to the social network can improve recommendation accuracy.
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Accelerate your Kubernetes clusters with Varnish Caching
[UMAP2013] Recommendation with Differential Context Weighting
1. Recommendation with Differential Context Weighting
Yong Zheng, Robin Burke, Bamshad Mobasher
Center for Web Intelligence, DePaul University, Chicago, IL USA
Conference on UMAP, June 12, 2013
2. Overview
• Introduction (RS and Context-aware RS)
• Sparsity of Contexts and Relevant Solutions
• Differential Context Relaxation & Weighting
• Experimental Results
• Conclusion and Future Work
5. Context-aware RS (CARS)
• Traditional RS: Users × Items Ratings
• Context-aware RS: Users × Items × Contexts Ratings
Companion
Example of Contexts in different domains:
Food: time (lunch, dinner), occasion (business lunch, family dinner)
Movie: time (weekend, weekday), location (home, cinema), etc
Music: time (morning, evening), activity (study, sports, party), etc
Book: a book as a gift for kids or mother, etc
Recommendations cannot be made in isolation from their contexts.
7. Sparsity of Contexts
• Assumption of Context-aware RS: It is better to use
preferences in the same contexts for predictions in
recommender systems.
• Same contexts? How about multiple contexts & sparsity?
An example in the movie domain:
Are there rating profiles in the contexts <Weekday, Home, Sister>?
User Movie Time Location Companion Rating
U1 Titanic Weekend Home Girlfriend 4
U2 Titanic Weekday Home Girlfriend 5
U3 Titanic Weekday Cinema Sister 4
U1 Titanic Weekday Home Sister ?
8. Relevant Solutions
Context Matching The same contexts <Weekday, Home, Sister>?
1. Context Selection: use only the influential dimensions
2. Context Relaxation: use a relaxed set of dimensions, e.g. time
3. Context Weighting: use all dimensions, but measure how similar the contexts are (to be continued later)
Differences between context selection and context relaxation:
Context selection is conducted by surveys or statistics;
Context relaxation is optimized directly on prediction performance;
Finding the optimal context relaxation/weighting is a learning process!
User Movie Time Location Companion Rating
U1 Titanic Weekend Home Girlfriend 4
U2 Titanic Weekday Home Girlfriend 5
U3 Titanic Weekday Cinema Sister 4
U1 Titanic Weekday Home Sister ?
9. DCR and DCW
• Differential Context Relaxation (DCR)
• Differential Context Weighting (DCW)
• Particle Swarm Intelligence as Optimizer
10. Differential Context Relaxation
Differential Context Relaxation (DCR) is our first attempt to alleviate
the sparsity of contexts, and differential context weighting (DCW) is a
finer-grained improvement over DCR.
• There are two notions in DCR
“Differential” Part → Algorithm Decomposition
Separate one algorithm into different functional components;
Apply appropriate context constraints to each component;
Maximize the global contextual effects together;
“Relaxation” Part → Context Relaxation
We use a set of relaxed dimensions instead of all of them.
• References
Y. Zheng, R. Burke, B. Mobasher. "Differential Context Relaxation for Context-aware
Travel Recommendation". In EC-WEB, 2012
Y. Zheng, R. Burke, B. Mobasher. "Optimal Feature Selection for Context-Aware
Recommendation using Differential Relaxation". In RecSys Workshop on CARS, 2012
11. DCR – Algorithm Decomposition
Take User-based Collaborative Filtering (UBCF) for example.
      Pirates of the   Kung Fu   Harry     Harry
      Caribbean 4      Panda 2   Potter 6  Potter 7
U1    4                4         2         2
U2    3                4         2         1
U3    2                2         4         4
U4    4                4         1         ?
Standard Process in UBCF (Top-K UserKNN, K=1 for example):
1). Find neighbors based on user-user similarity
2). Aggregate neighbors’ contribution
3). Make final predictions
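The three-step UBCF process above can be sketched as a toy in-memory implementation. This is an illustrative assumption, not the paper's exact algorithm: cosine similarity over co-rated items is used for brevity, and all names (`cosine`, `ubcf_predict`, the item keys) are hypothetical.

```python
def cosine(a, b):
    """Cosine similarity over the co-rated items of two rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (sum(a[i] ** 2 for i in common) ** 0.5
           * sum(b[i] ** 2 for i in common) ** 0.5)
    return num / den if den else 0.0

def ubcf_predict(ratings, target_user, target_item, k=1):
    """Top-K UserKNN prediction: (1) find neighbors, (2) aggregate
    their contributions, (3) make the final weighted prediction."""
    # 1) candidate neighbors = other users who rated the target item
    candidates = [(cosine(ratings[target_user], ratings[u]), u)
                  for u in ratings
                  if u != target_user and target_item in ratings[u]]
    neighbors = sorted(candidates, reverse=True)[:k]
    if not neighbors:
        return None  # no coverage for this prediction
    # 2) + 3) similarity-weighted average of the neighbors' ratings
    num = sum(sim * ratings[u][target_item] for sim, u in neighbors)
    den = sum(abs(sim) for sim, u in neighbors)
    return num / den if den else None

# Toy matrix from the slide; U4's rating for Harry Potter 7 is unknown
ratings = {
    "U1": {"PotC4": 4, "KFP2": 4, "HP6": 2, "HP7": 2},
    "U2": {"PotC4": 3, "KFP2": 4, "HP6": 2, "HP7": 1},
    "U3": {"PotC4": 2, "KFP2": 2, "HP6": 4, "HP7": 4},
    "U4": {"PotC4": 4, "KFP2": 4, "HP6": 1},
}
ubcf_predict(ratings, "U4", "HP7", k=1)  # nearest neighbor is U1 -> 2.0
```

With K = 1, U4's nearest neighbor is U1 (they agree on the first three movies), so U1's rating for Harry Potter 7 is carried over as the prediction.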
12. DCR – Algorithm Decomposition
Take User-based Collaborative Filtering (UBCF) for example.
1. Neighbor Selection  2. Neighbor Contribution  3. User Baseline  4. User Similarity
All components contribute to the final predictions, and we assume
appropriate contextual constraints can leverage the contextual
effect in each algorithm component,
e.g. use only neighbors who rated in the same contexts.
13. DCR – Context Relaxation
User Movie Time Location Companion Rating
U1 Titanic Weekend Home Girlfriend 4
U2 Titanic Weekday Home Girlfriend 5
U3 Titanic Weekday Cinema Sister 4
U1 Titanic Weekday Home Sister ?
Notion of Context Relaxation:
• Use {Time, Location, Companion} → 0 records matched!
• Use {Time, Location} → 1 record matched!
• Use {Time} → 2 records matched!
In DCR, we choose an appropriate context relaxation for each component,
balancing the number of matched ratings against the best performance
and the least noise.
14. DCR – Context Relaxation
1. Neighbor Selection  2. Neighbor Contribution  3. User Baseline  4. User Similarity
c is the original context, e.g. <Weekday, Home, Sister>
C1, C2, C3, C4 are the relaxed contexts.
The selection is modeled by a binary vector,
e.g. <1, 0, 0> denotes that only the first context dimension is selected.
Take neighbor selection for example:
Originally, neighbors are the users who rated the same item.
DCR further filters those neighbors by the contextual constraint C1,
i.e. C1 = <1,0,0> → Time = Weekday → user u must have rated item i on weekdays.
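The binary-vector filtering above can be sketched in Python, using the toy movie table from the slides (the helper name `matches_relaxed_context` is illustrative, not from the paper):

```python
def matches_relaxed_context(rating_ctx, target_ctx, relaxation):
    """True when rating_ctx agrees with target_ctx on every dimension
    whose bit is 1 in the binary relaxation vector."""
    return all(r == t
               for r, t, keep in zip(rating_ctx, target_ctx, relaxation)
               if keep)

target = ("Weekday", "Home", "Sister")
rows = [  # (user, <Time, Location, Companion>, rating)
    ("U1", ("Weekend", "Home", "Girlfriend"), 4),
    ("U2", ("Weekday", "Home", "Girlfriend"), 5),
    ("U3", ("Weekday", "Cinema", "Sister"), 4),
]
# C1 = <1, 0, 0>: only Time has to match -> U2 and U3 survive
matched = [u for u, ctx, _ in rows
           if matches_relaxed_context(ctx, target, (1, 0, 0))]
```

Relaxing from <1, 1, 1> down to <1, 0, 0> grows the match set from 0 records to 2, exactly as in the context relaxation example.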
15. DCR – Drawbacks
1. Neighbor Selection  2. Neighbor Contribution  3. User Baseline  4. User Similarity
1. Context relaxation is still strict, especially when data is sparse.
2. Components are dependent. For example, neighbor contribution depends
on neighbor selection: if neighbors are selected by C1: Location = Cinema,
there is no guarantee that a neighbor has ratings under contexts
matching C2: Time = Weekend.
A finer-grained solution is required: Differential Context Weighting.
16. Differential Context Weighting
User Movie Time Location Companion Rating
U1 Titanic Weekend Home Girlfriend 4
U2 Titanic Weekday Home Girlfriend 5
U3 Titanic Weekday Cinema Sister 4
U1 Titanic Weekday Home Sister ?
Goal: use all dimensions, but measure the similarity of contexts.
Assumption: the more similar two contexts are, the more useful the
ratings may be for calculating predictions.
c and d are two contexts (the two red regions in the table above).
σ is the weighting vector <w1, w2, w3> for the three dimensions.
Assume equal weights, w1 = w2 = w3 = 1:
J(c, d, σ) = sum of weights of matched dimensions / sum of all weights = 2/3
Similarity of contexts is measured by weighted Jaccard similarity.
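The weighted Jaccard similarity J(c, d, σ) above can be sketched as a small function (the function name is illustrative):

```python
def weighted_jaccard(c, d, sigma):
    """Weighted Jaccard similarity of two contexts: the summed weights
    of the matched dimensions over the summed weights of all dimensions."""
    total = sum(sigma)
    if not total:
        return 0.0
    matched = sum(w for cv, dv, w in zip(c, d, sigma) if cv == dv)
    return matched / total

c = ("Weekday", "Home", "Girlfriend")  # U2's context
d = ("Weekday", "Home", "Sister")      # target context
weighted_jaccard(c, d, (1.0, 1.0, 1.0))  # equal weights -> 2/3
```

With equal weights this reduces to the plain ratio of matched dimensions (2/3 in the slide's example); unequal weights let influential dimensions such as Companion dominate the similarity.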
17. Differential Context Weighting
1. Neighbor Selection  2. Neighbor Contribution  3. User Baseline  4. User Similarity
1. “Differential” part: the components are the same as in DCR.
2. “Context Weighting” part (for each individual component):
σ is the weighting vector;
ϵ is a threshold on the similarity of contexts,
i.e. only records with similar enough (≥ ϵ) contexts are included.
3. In the calculations, the similarities of contexts serve as the weights;
for example, in the neighbor contribution component each neighbor rating
is weighted by the similarity of its context to the target context.
The calculation is similar for the other components.
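The similarity-as-weight idea for one neighbor's contribution can be sketched as follows. This is an illustrative aggregation under assumed names (`dcw_contribution`), not the paper's exact formula:

```python
def dcw_contribution(neighbor_ratings, target_ctx, sigma, eps):
    """Aggregate one neighbor's ratings: each rating is weighted by the
    weighted-Jaccard similarity between its context and the target
    context; records below the threshold eps are dropped entirely."""
    def sim(c, d):
        total = sum(sigma)
        if not total:
            return 0.0
        return sum(w for a, b, w in zip(c, d, sigma) if a == b) / total

    weighted = [(sim(ctx, target_ctx), rating)
                for ctx, rating in neighbor_ratings
                if sim(ctx, target_ctx) >= eps]
    if not weighted:
        return None  # no similar-enough contexts for this neighbor
    return (sum(s * r for s, r in weighted)
            / sum(s for s, _ in weighted))

history = [(("Weekday", "Home", "Girlfriend"), 5),
           (("Weekend", "Cinema", "Alone"), 3)]
# With eps = 0.5 only the first record (similarity 2/3) survives
dcw_contribution(history, ("Weekday", "Home", "Sister"), (1.0, 1.0, 1.0), 0.5)
```

Unlike DCR's hard filter, every record can participate here; the threshold ϵ only trims the least similar contexts, and the rest contribute in proportion to their similarity.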
18. Particle Swarm Optimization (PSO)
The remaining work is to find the optimal context relaxation vectors for
DCR and context weighting vectors for DCW. PSO is derived from swarm
intelligence, which achieves a goal through collaborative behavior,
as seen in fish, birds, and bees.
Why PSO?
1) It is easy to implement as a non-linear optimizer;
2) It has been used in weighted CF before, and was demonstrated to
work better than other non-linear optimizers, e.g. genetic algorithms;
3) Our previous work successfully applied binary PSO (BPSO) for DCR.
19. Particle Swarm Optimization (PSO)
Swarm = a group of birds
Particle = each bird ≈ each run of the algorithm
Vector = a bird’s position in the space ≈ the vector we need
Goal = the location of pizza ≈ lower prediction error
So, how does the swarm find the goal?
1. Looking for the pizza: assume a machine can tell each bird its distance;
2. Each iteration is an attempt or move;
3. Cognitive learning from the particle itself:
am I closer to the pizza compared with my “best” locations in history?
4. Social learning from the swarm:
“Hey, my distance is 1 mile. It is the closest! Follow me!!”
Then the other birds move towards that position.
DCR – feature selection – modeled by binary vectors – binary PSO
DCW – feature weighting – modeled by real-number vectors – PSO
How does it work? Take DCR and binary PSO for example:
Assume there are 4 components and 3 contextual dimensions.
Thus there are 4 binary vectors, one for each component.
We merge them into a single vector of size 3 × 4 = 12;
this single vector is the particle’s position vector in the PSO process.
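The merged-vector scheme above can be sketched as a minimal binary PSO. This is a generic BPSO sketch under assumed settings (function name, seed, and the inertia/learning constants are illustrative, not the paper's configuration):

```python
import math
import random

def binary_pso(fitness, dim, n_particles=3, iters=100, seed=7):
    """Minimal binary PSO: each particle's position is a 0/1 vector
    (e.g. 4 components x 3 dimensions merged into 12 bits), and a
    sigmoid of the velocity gives the probability that a bit is 1."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]   # swarm's best so far
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social factors
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                # cognitive pull towards pbest, social pull towards gbest
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                p_one = 1.0 / (1.0 + math.exp(-vel[i][j]))
                pos[i][j] = 1 if rng.random() < p_one else 0
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy fitness standing in for prediction error: minimize the selected bits
best, fit = binary_pso(lambda v: sum(v), dim=12)
```

In the actual DCR setting, the fitness function would run the contextual recommender with the 12-bit relaxation vector and return its prediction error; DCW replaces the 0/1 bits with real-valued weights in [0, 1] and uses standard (continuous) PSO.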
21. Context-aware Data Sets
                  AIST Food Data               Movie Data
# of Ratings      6360                         1010
# of Users        212                          69
# of Items        20                           176
# of Contexts     Real hunger                  Time (weekend, weekday),
                  (full/normal/hungry),        location (home, cinema),
                  virtual hunger               companions (friends, alone, etc.)
Other Features    User gender,                 User gender,
                  food genre, food style,      year of the movie
                  food stuff
Density           Dense                        Sparse

Context-aware data sets are usually difficult to get…
These two data sets were collected from surveys.
22. Evaluation Protocols
Metrics: root-mean-square error (RMSE) and coverage, which denotes
the percentage of predictions for which neighbors can be found.
Our goal: improve RMSE (i.e. fewer errors) within decent coverage.
We allow a decline in coverage, because applying contextual
constraints usually brings lower coverage (i.e. the sparsity of
contexts!).
Baselines:
• context-free CF, i.e. the original UBCF;
• contextual pre-filtering CF, which applies the contextual
constraints only to the neighbor selection component – no other
components as in DCR and DCW.
Other settings in DCR & DCW:
K = 10 for UserKNN, evaluated with 5-fold cross-validation
T = 100 as the maximal iteration limit in the PSO process
Weights are ranged within [0, 1]
We use the same similarity threshold for each component,
iterated from 0.0 to 1.0 in 0.1 increments in DCW
23. Predictive Performances
Blue bars are RMSE values; red lines are coverage curves.
Findings:
1) DCW works better than DCR and the two baselines;
2) Significance t-tests show DCW is significantly better on the movie
data, while DCR was not significant over the two baselines; DCW can
further alleviate the sparsity of contexts and compensate for DCR;
3) DCW offers better coverage than the baselines!
24. Performances of Optimizer
Running time is in seconds.
Using 3 particles is the best configuration for the two data sets here!
Factors influencing the running time:
More particles → quicker convergence, but probably higher cost per iteration;
# of contextual variables: more contexts → probably slower;
Density of the data set: denser → more calculations in DCW.
Typically DCW costs more than DCR, because it uses all contextual
dimensions and the calculation of context similarity is
time-consuming, especially for dense data like the Food data.
25. Other Results (Optional)
1. The optimal threshold for the similarity of contexts:
for the Food data set, it is 0.6; for the Movie data set, it is 0.1.
2. The optimal weighting vectors (e.g. Movie data)
Note: darker → smaller weights; lighter → larger weights
27. Conclusions
We propose DCW, which is a finer-grained improvement over DCR;
it can further improve predictive accuracy within decent coverage;
PSO is demonstrated to be an efficient optimizer;
we identified the underlying factors influencing the optimizer’s running time.
Stay Tuned
DCR and DCW are general frameworks (together named DCM, i.e.
differential context modeling), and they can be applied to any
recommendation algorithm that can be decomposed into multiple
components. We have successfully extended their application to
item-based collaborative filtering and the slope one recommender.
References
Y. Zheng, R. Burke, B. Mobasher. "Differential Context Modeling in
Collaborative Filtering". In SOCRS-2013, Chicago, IL, USA, 2013.
28. Acknowledgement
Student Travel Support from US NSF (UMAP Platinum Sponsor)
Future Work
Try other measures of context similarity instead of the simple Jaccard one;
Introduce semantics into the similarity of contexts to further alleviate
the sparsity of contexts, e.g., Rome is closer to Florence than to Paris;
Parallel PSO, or PSO on MapReduce, to speed up the optimizer.
See you later…
The 19th ACM SIGKDD Conference on Knowledge Discovery and
Data Mining (KDD), Chicago, IL USA, Aug 11-14, 2013