
Slides from the RecSys 2010 presentation. Context has been recognized as an important factor to consider in personalized Recommender Systems. However, most model-based Collaborative Filtering approaches such as Matrix Factorization do not provide a straightforward way of integrating context information into the model. In this work, we introduce a Collaborative Filtering method based on Tensor Factorization, a generalization of Matrix Factorization that allows for a flexible and generic integration of contextual information by modeling the data as a User-Item-Context N-dimensional tensor instead of the traditional 2D User-Item matrix. In the proposed model, called Multiverse Recommendation, different types of context are considered as additional dimensions in the representation of the data as a tensor.
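
The User-Item-Context tensor view can be made concrete with a small Tucker-style factorization sketch, the family of models Multiverse Recommendation builds on: each dimension gets its own factor matrix, and a central core tensor mediates their interactions. All sizes and values below are invented for illustration, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 users, 5 items, 3 context conditions,
# with latent ranks (dU, dI, dC) = (2, 2, 2).
n_users, n_items, n_ctx = 4, 5, 3
dU, dI, dC = 2, 2, 2

U = rng.normal(size=(n_users, dU))  # user factor matrix
M = rng.normal(size=(n_items, dI))  # item factor matrix
C = rng.normal(size=(n_ctx, dC))    # context factor matrix
S = rng.normal(size=(dU, dI, dC))   # core tensor tying the three modes together

def predict(u, i, c):
    """Predicted rating for user u and item i under context c:
    a trilinear product of the three factor rows with the core tensor."""
    return float(np.einsum('a,b,c,abc->', U[u], M[i], C[c], S))

# The full 3D tensor of predictions, one entry per (user, item, context):
Y_hat = np.einsum('ua,ib,kc,abc->uik', U, M, C, S)
```

With the context mode trivial (n_ctx = 1, dC = 1) this collapses back to ordinary matrix factorization, which is the sense in which the tensor model generalizes the 2D User-Item case.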

[UMAP2013]Tutorial on Context-Aware User Modeling for Recommendation by Bamsh...

by Professor Bamshad Mobasher. "Context-Aware User Modeling for Recommendation". Tutorial at the UMAP conference, June 10, 2013.

Stochastic optimization from mirror descent to recent algorithms

The document discusses stochastic optimization algorithms. It begins with an introduction to stochastic optimization and online optimization settings. Then it covers Mirror Descent and its extension Composite Objective Mirror Descent (COMID). Recent algorithms for deep learning like Momentum, ADADELTA, and ADAM are also discussed. The document provides convergence analysis and empirical studies of these algorithms.

Dynamics of structures with uncertainties

This document discusses dynamics of structures with uncertainties. It begins with an introduction to stochastic single degree of freedom systems and how natural frequency variability can be modeled using probability distributions. It then discusses how to extend this approach to stochastic multi degree of freedom systems using stochastic finite element formulations and modal projections. Key challenges with statistical overlap of eigenvalues are noted. The document provides mathematical models of equivalent damping in stochastic systems and examples of stochastic frequency response functions.

Phonons & Phonopy: Pro Tips (2014)

This document provides an overview of phonons and lattice dynamics as well as tips for using the phonopy software package. It discusses the theory of phonons in crystals and the harmonic and quasi-harmonic approximations. It also outlines the workflow for using phonopy to calculate forces, construct the dynamical matrix, and post-process results to obtain phonon dispersions, densities of states, and thermal properties. Helpful tips are provided for optimizing VASP settings for force calculations and manipulating phonopy settings and output files.

SVD and the Netflix Dataset

Short summary and explanation of LSI (SVD) and how it can be applied to recommendation systems and the Netflix dataset in particular.
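
As a minimal illustration of the idea (toy ratings, with zeros standing in for unobserved entries, which real systems handle more carefully), a rank-k truncated SVD gives a low-rank reconstruction whose entries serve as predicted ratings:

```python
import numpy as np

# Toy user-item rating matrix: two taste groups, 0 = unobserved.
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])

# Rank-k truncated SVD: R ≈ U_k @ diag(s_k) @ Vt_k.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predict the missing (user 0, item 2) rating from the low-rank model.
pred = R_hat[0, 2]
```

Keeping only the top k singular components is what LSI does for term-document matrices; applied to the Netflix matrix, the same truncation denoises observed ratings and fills in missing ones.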

Kinetic bands versus Bollinger Bands

This research paper introduces kinetic bands, based on Romanian mathematician and statistician Octav Onicescu's kinetic energy, also known as "informational energy". Historical data on foreign exchange currencies and indexes is used to predict the trend of a stock or index and whether it will move up or down in the future. We explore the imperfections of Bollinger Bands to derive a more sophisticated triplet of indicators for predicting future price movements in the stock market. An Extreme Gradient Boosting model was trained in Python on a historical data set from Kaggle spanning all 500 currently listed companies, and a feature importance plot was produced. The results show that kinetic bands, derived from kinetic energy (KE), are highly influential features, or technical indicators, of stock market trends. Experiments conducted with this invention provide tangible empirical evidence for it, and the machine-learning code has a low chance of error when the proper procedures are followed. The experiment samples are attached to this study for future reference and scrutiny.

know Machine Learning Basic Concepts.pdf

This document provides an overview of machine learning concepts. It defines machine learning as creating computer programs that improve with experience. Supervised learning uses labeled training data to build models that can classify or predict new examples, while unsupervised learning finds patterns in unlabeled data. Examples of machine learning applications include spam filtering, recommendation systems, and medical diagnosis. The document also discusses important machine learning techniques like k-nearest neighbors, decision trees, regularization, and cross-validation.
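
The k-nearest-neighbors technique mentioned above fits in a few lines; the 2-D points and labels below are invented purely for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote among the k nearest training points
    (squared Euclidean distance on plain tuples)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D data: 'spam' examples cluster near (1, 1), 'ham' near (4, 4).
train = [((1, 1), 'spam'), ((1, 2), 'spam'), ((2, 1), 'spam'),
         ((4, 4), 'ham'), ((4, 5), 'ham'), ((5, 4), 'ham')]
label = knn_predict(train, (1.5, 1.5))
```

The choice of k here is the kind of hyperparameter the document's cross-validation discussion is about: too small overfits noise, too large blurs class boundaries.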

Data-Driven Recommender Systems

The document discusses recommender systems and sequential recommendation problems. It covers several key points:
1) Matrix factorization and collaborative filtering techniques are commonly used to build recommender systems, but they have limitations such as cold-start problems and difficulty incorporating additional constraints.
2) Sequential recommendation problems can be framed as multi-armed bandit problems, where past recommendations influence future recommendations.
3) Various bandit algorithms like UCB, Thompson sampling, and LinUCB can be applied, but extending guarantees to models like matrix factorization is challenging. Offline evaluation on real-world datasets is important.
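
The UCB idea from point 3 can be sketched on a toy Bernoulli bandit (arm reward probabilities invented for illustration): each arm's score is its empirical mean plus an exploration bonus that shrinks as the arm is pulled more often.

```python
import math
import random

random.seed(0)

# Hypothetical Bernoulli reward probabilities for 3 candidate recommendations.
true_p = [0.2, 0.5, 0.8]
n_arms = len(true_p)
counts = [0] * n_arms    # pulls per arm
values = [0.0] * n_arms  # running mean reward per arm

def ucb_select(t):
    """Play each arm once, then pick the arm maximizing mean + UCB1 bonus."""
    for a in range(n_arms):
        if counts[a] == 0:
            return a
    return max(range(n_arms),
               key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 2001):
    a = ucb_select(t)
    reward = 1.0 if random.random() < true_p[a] else 0.0
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # incremental mean update
```

After enough rounds the best arm dominates the pull counts while the others are still sampled occasionally, which is the exploration-exploitation trade-off the document frames sequential recommendation around.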

Stat982(chap13)

This document discusses an analysis of variance (ANOVA) study conducted by Burke Marketing Services to evaluate potential new versions of a children's dry cereal. The experimental design and ANOVA were used to test differences between the cereal versions and make a product recommendation. The document provides an introduction to ANOVA, including how it can test for differences between three or more population means. It also outlines the assumptions of ANOVA, how to calculate test statistics like mean squares, and how to conduct an F-test to determine whether population means are equal or not.

Tutorial: Context In Recommender Systems

This document provides an overview of a tutorial on context-aware recommender systems. The tutorial will cover traditional recommendation techniques, context-aware recommendation which incorporates additional contextual information such as time and location, and context suggestion. It includes an agenda with topics, background information on recommender systems and evaluation metrics, and descriptions of techniques for context-aware recommendation including context filtering and modeling.

Investigation of-combined-use-of-mfcc-and-lpc-features-in-speech-recognition-...

This document discusses a study investigating the combined use of Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) features in automatic speech recognition systems. It begins by outlining the challenges of automatic speech recognition and then describes the MFCC and LPC algorithms for extracting basic speech features. The study suggests combining MFCC and LPC-based recognition subsystems to improve reliability. Neural networks are used for training and recognition, and results show the combined approach improves recognition quality compared to individual methods.

NIPS2017 Few-shot Learning and Graph Convolution

The document discusses meta-learning and prototypical networks for few-shot learning. It introduces prototypical networks, which learn a metric space such that classification can be performed by finding the nearest class prototype to a query example in embedding space. The document summarizes results on few-shot image classification benchmarks like Omniglot and miniImageNet, finding that prototypical networks achieve state-of-the-art performance.
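
The prototype-based classification rule is simple enough to sketch directly; the 2-D "embeddings" below are fabricated stand-ins for what the learned embedding network would produce in a 3-way, 5-shot episode:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated embeddings: for each of 3 classes, 5 support points
# scattered around a class-specific center.
centers = np.array([[0., 0.], [5., 0.], [0., 5.]])
support = np.stack([c + 0.3 * rng.normal(size=(5, 2)) for c in centers])  # (3, 5, 2)

# A class prototype is the mean of its support embeddings.
prototypes = support.mean(axis=1)  # (3, 2)

def classify(query):
    """Assign a query embedding to the nearest prototype (Euclidean)."""
    d = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(d))

label = classify(np.array([4.8, 0.1]))  # a query near the class-1 center
```

All the learning happens in the embedding network; once embeddings are computed, few-shot classification reduces to this nearest-prototype lookup.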

An introduction to digital health surveillance from online user-generated con...

This talk provides a brief introduction on methods for performing health surveillance tasks using online user-generated data. Four case studies are being presented: a) the original Google Flu Trends model, b) a basic model for mapping Twitter data to influenza rates, c) an improved Google Flu Trends model, and d) a method for assessing the impact of a health intervention from Internet data.

EPFL workshop on sparsity

This document discusses near-optimal sensor placement for linear inverse problems. It introduces the concept of using sensors to measure physical fields and describes how inverse problems aim to estimate parameters of interest from sensor measurements. It presents the FrameSense algorithm, which uses a greedy approach to minimize frame potential as a proxy for minimizing mean squared error in sensor placement. FrameSense provides near-optimal sensor placement for linear inverse problems in polynomial time. As an example application, the document describes how FrameSense can be used for optimal placement of temperature sensors on a microprocessor to reconstruct thermal maps from sparse measurements.
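
A rough sketch of the greedy principle (a simplified reading of FrameSense, not the paper's exact algorithm; the candidate sensing matrix and sizes are made up): starting from all candidate sensor rows, repeatedly drop the row whose removal lowers the frame potential the most, until the sensor budget is met.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensing matrix: 8 candidate sensor locations, 3 unknown parameters.
A = rng.normal(size=(8, 3))

def frame_potential(rows):
    """Frame potential of a row subset: sum of squared pairwise inner products."""
    B = A[sorted(rows)]
    G = B @ B.T
    return float(np.sum(G ** 2))

# Greedy worst-out removal down to a budget of L sensors.
L = 4
chosen = set(range(A.shape[0]))
while len(chosen) > L:
    drop = min(chosen, key=lambda r: frame_potential(chosen - {r}))
    chosen.remove(drop)
```

Minimizing frame potential pushes the retained rows toward a tight frame, which is the proxy the document describes for keeping the mean squared error of the inverse problem low.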

Gene's law

Gene's law, Common gate, kernel Principal Component Analysis, ASIC Physical Design Post-Layout Verification, TSMC180nm, 0.13um IBM CMOS technology, Cadence Virtuoso, FPAA, in Spanish, Bruun E,

Error entropy minimization for brain image registration using hilbert huang t...

by eSAT Publishing House

IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.

2019 Fall Series: Postdoc Seminars - Special Guest Lecture, There is a Kernel...
by The Statistical and Applied Mathematical Sciences Institute

In this lecture, I will present a general tour of some of the most commonly used kernel methods in statistical machine learning and data mining. I will touch on elements of artificial neural networks and then highlight their intricate connections to some general-purpose kernel methods like Gaussian process learning machines. I will also resurrect the famous universal approximation theorem and will most likely ignite a [controversial] debate around the theme: could it be that [shallow] networks like radial basis function networks or Gaussian processes are all we need for well-behaved functions? Do we really need many hidden layers, as the hype around Deep Neural Network architectures seems to suggest, or should we heed Ockham's principle of parsimony, namely "Entities should not be multiplied beyond necessity" ("Entia non sunt multiplicanda praeter necessitatem")? I intend to spend the last 15 minutes of this lecture sharing my personal tips and suggestions with our precious postdoctoral fellows on how to make the most of their experience.

Computational Motor Control: State Space Models for Motor Adaptation (JAIST s...

This is the lecture 3 note for the JAIST summer school on computational motor control (Hirokazu Tanaka & Hiroyuki Kambara). Lecture video: https://www.youtube.com/watch?v=dtpgJLRt90M

GDRR Opening Workshop - Modeling Approaches for High-Frequency Financial Time...

by The Statistical and Applied Mathematical Sciences Institute

Analyzing high-frequency time series is increasingly useful with the current explosion in the availability of these data in several application areas, including but not limited to climate, finance, health analytics, and transportation. This talk will give an overview of two statistical frameworks that could be useful for analyzing high-frequency financial time series, leading to quantification of financial risk. These include a distribution-free approach using penalized estimating functions for modeling inter-event durations and an approximate Bayesian approach for modeling counts of events in regular intervals. A few other potentially useful lines of research in this area will also be introduced.

A walk through the intersection between machine learning and mechanistic mode...

Talk at EURECOM, France.
It overviews regression in several of its forms: regularized, constrained, and mixed. It builds the bridge between machine learning and dynamical models.

Optimum Algorithm for Computing the Standardized Moments Using MATLAB 7.10(R2...

A fundamental task in many statistical analyses is to characterize the location and variability of a data set; a further characterization includes skewness and kurtosis. This paper addresses the real-time computational problem for the rth standardized moment in general, and for skewness and kurtosis in particular. It has therefore been important to derive an optimum computational technique for the standardized moments. A new algorithm has been designed for evaluating the standardized moments, and an error analysis is discussed. The new algorithm reduces computational effort by approximately 99.95% compared with previously published algorithms.
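
For reference, the quantity in question is the rth standardized moment, the rth central moment divided by the rth power of the standard deviation. The sketch below is a plain two-pass population-form implementation, not the paper's optimized algorithm:

```python
def standardized_moment(x, r):
    """rth standardized moment: E[(X - mean)^r] / std^r (population form)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n      # population variance
    cr = sum((v - m) ** r for v in x) / n       # rth central moment
    return cr / var ** (r / 2)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
skew = standardized_moment(data, 3)  # r = 3: skewness
kurt = standardized_moment(data, 4)  # r = 4: kurtosis
```

The naive form above makes two passes over the data per moment; single-pass update formulas are exactly where the numerical-error analysis the paper discusses becomes important.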

HOP-Rec_RecSys18

Keynote of HOP-Rec @ RecSys 2018
Presenter: Jheng-Hong Yang
These slides are complementary material for the short paper HOP-Rec @ RecSys18. They explain the intuition and some of the abstract ideas behind the descriptions and mathematical symbols by illustrating plots and figures.

Lecture12 xing

The document discusses learning graphical models from data. It describes two main tasks: inference, which is computing answers to queries about a probability distribution described by a Bayesian network, and learning, which is estimating a model from data. It provides examples of learning for completely observed models, including maximum likelihood estimation for the parameters of a conditional Gaussian model. It also discusses supervised versus unsupervised learning of hidden Markov models, and techniques for dealing with small training sets like adding pseudocounts to estimates.
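
The pseudocount trick mentioned at the end is just additive (Laplace) smoothing of a maximum likelihood estimate; a minimal sketch with an invented three-state example:

```python
from collections import Counter

def smoothed_mle(observations, states, alpha=1.0):
    """Estimate a categorical distribution from data, adding alpha
    pseudocounts per state so small samples never yield zero probabilities."""
    counts = Counter(observations)
    total = len(observations) + alpha * len(states)
    return {s: (counts[s] + alpha) / total for s in states}

# Tiny training set: state 'c' is never observed, yet it still
# receives nonzero probability mass from the pseudocounts.
probs = smoothed_mle(['a', 'a', 'b'], states=['a', 'b', 'c'])
```

With alpha = 0 this reduces to the plain MLE, and unseen states get probability zero, which is exactly the small-training-set failure mode the document warns about for HMM emission and transition estimates.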

Thesis seminar

1. The document discusses implicit shape representations for liver segmentation from CT scans, comparing heat, signed distance, and Poisson transforms.
2. It evaluates these representations using principal component analysis to build a linear shape space model from training data.
3. Results show the Poisson transform provides the most stable and effective implicit representation for segmentation, outperforming other methods in experiments projecting new shapes into the learned shape space.

Steffen Rendle, Research Scientist, Google at MLconf SF

Title: Factorization Machines
Abstract:
Developing accurate recommender systems for a specific problem setting seems to be a complicated and time-consuming task: models have to be defined, learning algorithms derived, and implementations written. In this talk, I present the factorization machine (FM) model, a generic factorization approach that can be adapted to problems by feature engineering. Efficient FM learning algorithms are discussed, among them SGD, ALS/CD, and MCMC inference, including automatic hyperparameter selection. I will show on several tasks, including the Netflix Prize and KDD Cup 2012, that FMs are flexible and achieve highly competitive accuracy. With FMs, these results can be achieved by simple data preprocessing and without any tuning of regularization parameters or learning rates.
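
The degree-2 FM model, y(x) = w0 + Σᵢ wᵢxᵢ + Σᵢ<ⱼ ⟨vᵢ, vⱼ⟩xᵢxⱼ, can be evaluated in O(kn) time using Rendle's sum-of-squares identity for the pairwise term. A small sketch with made-up feature sizes and random parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 6, 3  # hypothetical feature count and factor dimension

w0 = 0.1                              # global bias
w = rng.normal(size=n_features)       # linear weights
V = rng.normal(size=(n_features, k))  # one k-dim factor vector per feature

def fm_predict(x):
    """Degree-2 FM: bias + linear terms + factorized pairwise interactions,
    with the interaction sum computed via
    0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    linear = w @ x
    inter = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return float(w0 + linear + inter)

# e.g. a one-hot user block, a one-hot item block, and a context feature.
x = np.array([1., 0., 0., 1., 0., 0.5])
y = fm_predict(x)
```

Because every pair of features interacts only through the dot product of their factor vectors, the model learns interaction strengths even for feature pairs never observed together, which is what makes plain feature engineering sufficient in the tasks the abstract mentions.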

Computational methods for nanoscale bio sensors

This document describes computational methods for modeling nanoscale biosensors. It discusses using classical beam theory to model 1D carbon nanotube sensors and derive equations relating frequency shift to added mass. The static deformation approximation is used, assuming the nanotube deflects a fixed amount under the attached mass. Analytical expressions are derived and validated against finite element models. Linear and cubic approximations relate frequency shift and mass added.

PMF BPMF and BPTF

Probabilistic Matrix Factorization (PMF)
Bayesian Probabilistic Matrix Factorization (BPMF) using
Markov Chain Monte Carlo (MCMC)
BPMF using MCMC – Overall Model
BPMF using MCMC – Gibbs Sampling
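
In MAP form, the PMF model (Gaussian noise on ratings, Gaussian priors on the factors) reduces to L2-regularized matrix factorization; BPMF then replaces this point estimate with Gibbs sampling over factors and hyperparameters. A tiny SGD sketch of the PMF point estimate, on invented ratings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented (user, item, rating) triples: 3 users, 4 items, rank-2 factors.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (1, 2, 1.0), (2, 3, 5.0)]
U = 0.1 * rng.normal(size=(3, 2))  # user factor matrix
V = 0.1 * rng.normal(size=(4, 2))  # item factor matrix
lam, lr = 0.02, 0.05               # prior strength (L2 weight) and step size

for _ in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]      # prediction error on this observed rating
        u_old = U[u].copy()        # keep the pre-update user row for V's step
        U[u] += lr * (err * V[i] - lam * U[u])
        V[i] += lr * (err * u_old - lam * V[i])

rmse = float(np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings])))
```

The lam term is where the Gaussian priors show up in the MAP view; BPMF's Gibbs sampler effectively integrates over lam and the other hyperparameters instead of fixing them by hand.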

Deep Learning for Recommender Systems RecSys2017 Tutorial

Deep learning techniques are increasingly being used for recommender systems. Neural network models such as word2vec, doc2vec and prod2vec learn embedding representations of items from user interaction data that capture their relationships. These embeddings can then be used to make recommendations by finding similar items. Deep collaborative filtering models apply neural networks to matrix factorization techniques to learn joint representations of users and items from rating data.
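
Once embeddings are learned, the "find similar items" step is typically a cosine-similarity lookup; the item names and vectors below are fabricated to illustrate the mechanics, not outputs of any real prod2vec model:

```python
import numpy as np

# Fabricated 4-dimensional item embeddings, one row per item.
items = ['laptop', 'mouse', 'keyboard', 'blender']
E = np.array([[0.90, 0.10, 0.20, 0.00],
              [0.80, 0.20, 0.30, 0.10],
              [0.85, 0.15, 0.25, 0.05],
              [0.00, 0.90, 0.10, 0.80]])

def recommend(item, k=2):
    """Return the k items most similar to `item` by cosine similarity."""
    q = E[items.index(item)]
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)
    return [items[j] for j in order if items[j] != item][:k]

recs = recommend('laptop')
```

Items that co-occur in similar interaction contexts end up with nearby vectors, so the lookup surfaces related products while unrelated ones (the blender here) score low.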

Deep Learning for Recommender Systems - Budapest RecSys Meetup

1. Deep learning techniques such as convolutional neural networks, recurrent neural networks, and autoencoders can be applied to recommender systems.
2. Convolutional neural networks are commonly used to extract features from images, audio, and video that can then be used for recommendation. Recurrent neural networks can model user sessions as sequences of clicks.
3. Autoencoders learn lower-dimensional representations of items that capture similarities and can be used to make recommendations, especially for cold start problems where little is known about new users or items.


Computational methods for nanoscale bio sensors

This document describes computational methods for modeling nanoscale biosensors. It discusses using classical beam theory to model 1D carbon nanotube sensors and derive equations relating frequency shift to added mass. The static deformation approximation is used, assuming the nanotube deflects a fixed amount under the attached mass. Analytical expressions are derived and validated against finite element models. Linear and cubic approximations relate frequency shift and mass added.

PMF BPMF and BPTF

Probabilistic Matrix Factorization (PMF)
Bayesian Probabilistic Matrix Factorization (BPMF) using
Markov Chain Monte Carlo (MCMC)
BPMF using MCMC – Overall Model
BPMF using MCMC – Gibbs Sampling

Stat982(chap13)

Stat982(chap13)

Tutorial: Context In Recommender Systems

Tutorial: Context In Recommender Systems

Investigation of-combined-use-of-mfcc-and-lpc-features-in-speech-recognition-...

Investigation of-combined-use-of-mfcc-and-lpc-features-in-speech-recognition-...

NIPS2017 Few-shot Learning and Graph Convolution

NIPS2017 Few-shot Learning and Graph Convolution

An introduction to digital health surveillance from online user-generated con...

An introduction to digital health surveillance from online user-generated con...

EPFL workshop on sparsity

EPFL workshop on sparsity

Gene's law

Gene's law

Error entropy minimization for brain image registration using hilbert huang t...

Error entropy minimization for brain image registration using hilbert huang t...

2019 Fall Series: Postdoc Seminars - Special Guest Lecture, There is a Kernel...

2019 Fall Series: Postdoc Seminars - Special Guest Lecture, There is a Kernel...

Computational Motor Control: State Space Models for Motor Adaptation (JAIST s...

Computational Motor Control: State Space Models for Motor Adaptation (JAIST s...

GDRR Opening Workshop - Modeling Approaches for High-Frequency Financial Time...

GDRR Opening Workshop - Modeling Approaches for High-Frequency Financial Time...

A walk through the intersection between machine learning and mechanistic mode...

A walk through the intersection between machine learning and mechanistic mode...

Optimum Algorithm for Computing the Standardized Moments Using MATLAB 7.10(R2...

Optimum Algorithm for Computing the Standardized Moments Using MATLAB 7.10(R2...

HOP-Rec_RecSys18

HOP-Rec_RecSys18

Lecture12 xing

Lecture12 xing

Thesis seminar

Thesis seminar

Steffen Rendle, Research Scientist, Google at MLconf SF

Steffen Rendle, Research Scientist, Google at MLconf SF

Steffen Rendle, Research Scientist, Google at MLconf SF

Steffen Rendle, Research Scientist, Google at MLconf SF

Computational methods for nanoscale bio sensors

Computational methods for nanoscale bio sensors

PMF BPMF and BPTF

PMF BPMF and BPTF

Deep Learning for Recommender Systems RecSys2017 Tutorial

Deep learning techniques are increasingly being used for recommender systems. Neural network models such as word2vec, doc2vec and prod2vec learn embedding representations of items from user interaction data that capture their relationships. These embeddings can then be used to make recommendations by finding similar items. Deep collaborative filtering models apply neural networks to matrix factorization techniques to learn joint representations of users and items from rating data.

Deep Learning for Recommender Systems - Budapest RecSys Meetup

1. Deep learning techniques such as convolutional neural networks, recurrent neural networks, and autoencoders can be applied to recommender systems.
2. Convolutional neural networks are commonly used to extract features from images, audio, and video that can then be used for recommendation. Recurrent neural networks can model user sessions as sequences of clicks.
3. Autoencoders learn lower-dimensional representations of items that capture similarities and can be used to make recommendations, especially for cold start problems where little is known about new users or items.

Machine Learning for Recommender Systems MLSS 2015 Sydney

The slides from the Machine Learning Summers School 2015 in Sydney on Machine Learning for Recommender Systems. Collaborative filtering algorithms, Context-aware methods, Restricted Boltzmann Machines, Recurrent Neural Networks, Tensor Factorization, etc.

Ranking and Diversity in Recommendations - RecSys Stammtisch at SoundCloud, B...

Ranking and Diversity in Recommendations - RecSys Stammtisch at SoundCloud, B...Alexandros Karatzoglou

Slides from my talk at the RecSys Stammtisch at SoundCloud in Berlin. The presentation is split in two part one focusing on ranking and relevance and one on diversity and how to achieve it using genres. We introduce a novel diversity metric called Binomial Diversity.Learning to Rank for Recommender Systems - ACM RecSys 2013 tutorial

The slides from the Learning to Rank for Recommender Systems tutorial given at ACM RecSys 2013 in Hong Kong by Alexandros Karatzoglou, Linas Baltrunas and Yue Shi.

ESSIR 2013 Recommender Systems tutorial

Recommenders Systems tutorial slides from the European Summer School of Information Retrieval (ESSIR).
Covers basic ideas on Collaborative Filtering, Content-based methods, Matrix Factorization, Restricted Boltzmann Machines, Ranking, Diversity.
The slides include material from Xavier Amatriain, Saul Vargas and Linas Baltrunas.

TFMAP: Optimizing MAP for Top-N Context-aware Recommendation

Slides from the presentation of TFMAP at SIGIR 2012.
TFMAP, is a Collaborative Filtering model that directly maximizes Mean Average Precision with the aim of creating an optimally ranked list of items for individual users under a given context. TFMAP uses tensor factorization to model implicit feedback data (e.g., purchases, clicks) along with contextual information

CLiMF: Collaborative Less-is-More Filtering

RecSys presentation slides of CLiMF,
a Collaborative Filtering algorithm based on a novel ranking algorithm

Machine Learning in R

This document provides an overview of machine learning in R. It discusses R's capabilities for statistical analysis and visualization. It describes key R concepts like objects, data structures, plots, and packages. It explains how to import and work with data, perform basic statistics and machine learning algorithms like linear models, naive Bayes, and decision trees. The document serves as an introduction for using R for machine learning tasks.

Deep Learning for Recommender Systems RecSys2017 Tutorial

Deep Learning for Recommender Systems RecSys2017 Tutorial

Deep Learning for Recommender Systems - Budapest RecSys Meetup

Deep Learning for Recommender Systems - Budapest RecSys Meetup

Machine Learning for Recommender Systems MLSS 2015 Sydney

Machine Learning for Recommender Systems MLSS 2015 Sydney

Ranking and Diversity in Recommendations - RecSys Stammtisch at SoundCloud, B...

Ranking and Diversity in Recommendations - RecSys Stammtisch at SoundCloud, B...

Learning to Rank for Recommender Systems - ACM RecSys 2013 tutorial

Learning to Rank for Recommender Systems - ACM RecSys 2013 tutorial

ESSIR 2013 Recommender Systems tutorial

ESSIR 2013 Recommender Systems tutorial

TFMAP: Optimizing MAP for Top-N Context-aware Recommendation

TFMAP: Optimizing MAP for Top-N Context-aware Recommendation

CLiMF: Collaborative Less-is-More Filtering

CLiMF: Collaborative Less-is-More Filtering

Machine Learning in R

Machine Learning in R

Building RAG with self-deployed Milvus vector database and Snowpark Container...

This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.

Communications Mining Series - Zero to Hero - Session 1

This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A

Climate Impact of Software Testing at Nordic Testing Days

My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.

PCI PIN Basics Webinar from the Controlcase Team

PCI PIN Basics

Encryption in Microsoft 365 - ExpertsLive Netherlands 2024

In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.

Monitoring Java Application Security with JDK Tools and JFR Events

Slides for JNation 2024

20240605 QFM017 Machine Intelligence Reading List May 2024

Everything I found interesting about machines behaving intelligently during May 2024

GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...

Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.

TrustArc Webinar - 2024 Global Privacy Survey

How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program

Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack

Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.

UiPath Test Automation using UiPath Test Suite series, part 5

Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.

Large Language Model (LLM) and it’s Geospatial Applications

Large Language Model (LLM) and it’s Geospatial Applications.

Pushing the limits of ePRTC: 100ns holdover for 100 days

At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.

Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...

Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.

How to Get CNIC Information System with Paksim Ga.pptx

Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.

GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024

Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.

Microsoft - Power Platform_G.Aspiotis.pdf

Revolutionizing Application Development
with AI-powered low-code, presentation by George Aspiotis, Sr. Partner Development Manager, Microsoft

Artificial Intelligence for XMLDevelopment

In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.

Essentials of Automations: The Art of Triggers and Actions in FME

In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!

RESUME BUILDER APPLICATION Project for students

A mini project idea for students

Building RAG with self-deployed Milvus vector database and Snowpark Container...

Building RAG with self-deployed Milvus vector database and Snowpark Container...

Communications Mining Series - Zero to Hero - Session 1

Communications Mining Series - Zero to Hero - Session 1

Climate Impact of Software Testing at Nordic Testing Days

Climate Impact of Software Testing at Nordic Testing Days

PCI PIN Basics Webinar from the Controlcase Team

PCI PIN Basics Webinar from the Controlcase Team

Encryption in Microsoft 365 - ExpertsLive Netherlands 2024

Encryption in Microsoft 365 - ExpertsLive Netherlands 2024

Monitoring Java Application Security with JDK Tools and JFR Events

Monitoring Java Application Security with JDK Tools and JFR Events

20240605 QFM017 Machine Intelligence Reading List May 2024

20240605 QFM017 Machine Intelligence Reading List May 2024

GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...

GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...

TrustArc Webinar - 2024 Global Privacy Survey

TrustArc Webinar - 2024 Global Privacy Survey

Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack

Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack

UiPath Test Automation using UiPath Test Suite series, part 5

UiPath Test Automation using UiPath Test Suite series, part 5

Large Language Model (LLM) and it’s Geospatial Applications

Large Language Model (LLM) and it’s Geospatial Applications

Pushing the limits of ePRTC: 100ns holdover for 100 days

Pushing the limits of ePRTC: 100ns holdover for 100 days

Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...

Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...

How to Get CNIC Information System with Paksim Ga.pptx

How to Get CNIC Information System with Paksim Ga.pptx

GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024

GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024

Microsoft - Power Platform_G.Aspiotis.pdf

Microsoft - Power Platform_G.Aspiotis.pdf

Artificial Intelligence for XMLDevelopment

Artificial Intelligence for XMLDevelopment

Essentials of Automations: The Art of Triggers and Actions in FME

Essentials of Automations: The Art of Triggers and Actions in FME

RESUME BUILDER APPLICATION Project for students

RESUME BUILDER APPLICATION Project for students

- 1. Multiverse Recommendation: N-dimensional Tensor Factorization for Context-aware Collaborative Filtering. Alexandros Karatzoglou¹, Xavier Amatriain¹, Linas Baltrunas², Nuria Oliver². ¹Telefonica Research, Barcelona, Spain. ²Free University of Bolzano, Bolzano, Italy. November 4, 2010
- 2. Context in Recommender Systems
- 3. Context in Recommender Systems. Context is an important factor to consider in personalized Recommendation.
- 11. Current State of the Art in Context-aware Recommendation: Pre-Filtering Techniques, Post-Filtering Techniques, Contextual Modeling. The approach presented here fits in the Contextual Modeling category.
- 12. Collaborative Filtering problem setting. Typical data sizes, e.g. the Netflix data: $n = 5 \times 10^5$ users, $m = 17 \times 10^3$ items.
- 13. Standard Matrix Factorization. Find $U \in \mathbb{R}^{n \times d}$ and $M \in \mathbb{R}^{d \times m}$ so that $F = UM$, by solving $\min_{U,M} L(F, Y) + \lambda \Omega(U, M)$. Figure: the Users × Movies rating matrix factorized into $U$ and $M$.
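The objective on this slide can be sketched in a few lines of NumPy. This is a toy illustration, not the talk's implementation: the sizes, the random initialization, and the fully observed stand-in matrix `Y` are all assumptions for the sake of a runnable example.

```python
import numpy as np

# Toy sketch of the low-rank model F = U M (sizes are illustrative;
# the Netflix-scale problem has n ≈ 5e5 users and m ≈ 1.7e4 movies).
rng = np.random.default_rng(0)
n, m, d = 40, 30, 5
U = rng.normal(scale=0.1, size=(n, d))   # user factors
M = rng.normal(scale=0.1, size=(d, m))   # movie factors
F = U @ M                                # predicted rating matrix

# Regularized squared-error objective L(F, Y) + lambda * Omega(U, M),
# with Y a fully observed stand-in for the rating data.
Y = rng.normal(size=(n, m))
lam = 0.1
objective = 0.5 * ((F - Y) ** 2).sum() + lam * ((U ** 2).sum() + (M ** 2).sum())
```

In practice the sum in $L(F, Y)$ runs only over observed ratings; the toy uses a dense `Y` purely to keep the sketch short.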
- 14. Multiverse Recommendation: Tensors for Context Aware Collaborative Filtering. Figure: the Users × Movies matrix extended with a context dimension into a tensor.
- 15. Tensors for Context Aware Collaborative Filtering. Figure: the User-Item-Context tensor factorized into $U$, $M$, $C$ and a core tensor $S$.
- 16. Tensors for Context Aware Collaborative Filtering.
$F_{ijk} = S \times_U U_{i*} \times_M M_{j*} \times_C C_{k*}$
$R[U, M, C, S] := L(F, Y) + \Omega[U, M, C] + \Omega[S]$
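The mode products in this model can be written compactly with `np.einsum`. The following is a sketch under assumed toy sizes (the factor shapes and latent dimension `d` are ours); it reconstructs the full tensor at once and checks that a single entry matches the per-entry formula $F_{ijk} = S \times_U U_{i*} \times_M M_{j*} \times_C C_{k*}$.

```python
import numpy as np

# Tucker-style reconstruction of F from the factor matrices and core tensor.
rng = np.random.default_rng(1)
n, m, c, d = 6, 5, 3, 2                  # toy sizes; d is the latent dimension
U = rng.normal(size=(n, d))              # user factors
M = rng.normal(size=(m, d))              # item factors
C = rng.normal(size=(c, d))              # context factors
S = rng.normal(size=(d, d, d))           # core tensor

# All entries at once: F_ijk = sum_{a,b,c} S_abc U_ia M_jb C_kc
F = np.einsum('abc,ia,jb,kc->ijk', S, U, M, C)

# One entry is the core contracted with one row of each factor matrix.
i, j, k = 2, 1, 0
f_single = np.einsum('abc,a,b,c->', S, U[i], M[j], C[k])
assert np.isclose(F[i, j, k], f_single)
```

The single-entry contraction is the form used during stochastic training, where only one observed $(i, j, k)$ triple is touched per step.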
- 17. Regularization.
$\Omega[U, M, C] = \lambda_M \|M\|_F^2 + \lambda_U \|U\|_F^2 + \lambda_C \|C\|_F^2$
$\Omega[S] := \lambda_S \|S\|_F^2$
- 18. Squared Error Loss Function. Many implementations of MF use a simple squared-error regression loss $l(f, y) = \frac{1}{2}(f - y)^2$, so the loss over all users and items is $L(F, Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} l(f_{ij}, y_{ij})$. Note that this loss provides an estimate of the conditional mean.
- 19. Absolute Error Loss Function. Alternatively one can use the absolute-error loss $l(f, y) = |f - y|$, so the loss over all users and items is $L(F, Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} l(f_{ij}, y_{ij})$, which provides an estimate of the conditional median.
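The mean-vs-median distinction between the two losses can be checked numerically. The skewed three-point sample and the grid search below are our own toy construction, not data from the talk: a constant prediction minimizing squared loss lands near the mean, while one minimizing absolute loss lands on the median.

```python
import numpy as np

# Minimize each loss over a constant prediction f on a small skewed sample.
y = np.array([1.0, 1.0, 5.0])
grid = np.linspace(0.0, 6.0, 601)                    # candidate predictions, step 0.01
squared = ((grid[:, None] - y) ** 2).sum(axis=1)     # sum of squared errors per f
absolute = np.abs(grid[:, None] - y).sum(axis=1)     # sum of absolute errors per f

best_sq = grid[squared.argmin()]    # close to y.mean() = 2.33...
best_ab = grid[absolute.argmin()]   # close to np.median(y) = 1.0
```

On rating data, this is why the choice of loss matters: squared error is pulled toward outlier ratings, absolute error is not.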
- 20. Optimization - Stochastic Gradient Descent for TF. The partial gradients with respect to $U$, $M$, $C$ and $S$ can then be written as:
$\partial_{U_{i*}} l(F_{ijk}, Y_{ijk}) = \partial_{F_{ijk}} l(F_{ijk}, Y_{ijk}) \, S \times_M M_{j*} \times_C C_{k*}$
$\partial_{M_{j*}} l(F_{ijk}, Y_{ijk}) = \partial_{F_{ijk}} l(F_{ijk}, Y_{ijk}) \, S \times_U U_{i*} \times_C C_{k*}$
$\partial_{C_{k*}} l(F_{ijk}, Y_{ijk}) = \partial_{F_{ijk}} l(F_{ijk}, Y_{ijk}) \, S \times_U U_{i*} \times_M M_{j*}$
$\partial_{S} l(F_{ijk}, Y_{ijk}) = \partial_{F_{ijk}} l(F_{ijk}, Y_{ijk}) \, U_{i*} \otimes M_{j*} \otimes C_{k*}$
- 21. We then iteratively update the parameter matrices and tensors using the following update rules:
$U^{t+1}_{i*} = U^{t}_{i*} - \eta \partial_U L - \eta \lambda_U U_{i*}$
$M^{t+1}_{j*} = M^{t}_{j*} - \eta \partial_M L - \eta \lambda_M M_{j*}$
$C^{t+1}_{k*} = C^{t}_{k*} - \eta \partial_C L - \eta \lambda_C C_{k*}$
$S^{t+1} = S^{t} - \eta \partial_S l(F_{ijk}, Y_{ijk}) - \eta \lambda_S S$
where $\eta$ is the learning rate.
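One SGD step on a single observed entry can be sketched directly from these gradients. This is our own minimal implementation for the squared loss (the function name, toy sizes, and hyperparameter values are assumptions, not the talk's code), updating one row of each factor matrix plus the core tensor in place.

```python
import numpy as np

def sgd_step(S, U, M, C, i, j, k, y, eta=0.05, lam=0.01):
    """One SGD update on observation (i, j, k, y) under squared loss, in place."""
    f = np.einsum('abc,a,b,c->', S, U[i], M[j], C[k])   # prediction F_ijk
    g = f - y                                           # dl/dF for l = (f - y)^2 / 2
    grad_U = g * np.einsum('abc,b,c->a', S, M[j], C[k])
    grad_M = g * np.einsum('abc,a,c->b', S, U[i], C[k])
    grad_C = g * np.einsum('abc,a,b->c', S, U[i], M[j])
    grad_S = g * np.einsum('a,b,c->abc', U[i], M[j], C[k])
    U[i] -= eta * (grad_U + lam * U[i])                 # gradient + L2 shrinkage
    M[j] -= eta * (grad_M + lam * M[j])
    C[k] -= eta * (grad_C + lam * C[k])
    S -= eta * (grad_S + lam * S)
    return f

# Repeatedly fitting a single rating y = 4.0 drives the prediction toward it.
rng = np.random.default_rng(2)
d = 2
U = rng.normal(scale=0.5, size=(4, d)); M = rng.normal(scale=0.5, size=(3, d))
C = rng.normal(scale=0.5, size=(2, d)); S = rng.normal(scale=0.5, size=(d, d, d))
preds = [sgd_step(S, U, M, C, 0, 0, 0, y=4.0) for _ in range(200)]
```

Each step touches only one row of $U$, $M$ and $C$, which is what makes the method cheap per observation even for large factor matrices.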
- 22. Optimization - Stochastic Gradient Descent for TF. Figure: illustration of the SGD updates on $U$, $M$, $C$ and $S$.
- 27. Experimental evaluation. We evaluate our model on contextual rating data by computing the Mean Absolute Error (MAE) using 5-fold cross validation, defined as follows: $\mathrm{MAE} = \frac{1}{K} \sum_{ijk} D_{ijk} \, |Y_{ijk} - F_{ijk}|$, where $D_{ijk}$ indicates the $K$ held-out test entries.
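The MAE above is straightforward to compute with a boolean mask playing the role of the indicator $D$. The helper below and its toy data are ours, purely to make the formula concrete:

```python
import numpy as np

def mae(Y, F, observed):
    """Mean Absolute Error over the K entries where the boolean mask is True."""
    return np.abs(Y[observed] - F[observed]).mean()

# Toy check: predictions off by exactly 0.5 on every held-out entry.
Y = np.zeros((4, 3, 2))
F = np.full((4, 3, 2), 0.5)
observed = np.zeros((4, 3, 2), dtype=bool)
observed[0, 0, 0] = observed[1, 2, 1] = True
```

Here `mae(Y, F, observed)` returns 0.5; in the evaluation, the mask would mark the test fold of the 5-fold split.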
- 28. Data. Table: Data set statistics.

  Data set | Users | Movies | Context Dim. | Ratings | Scale
  Yahoo!   | 7642  | 11915  | 2            | 221K    | 1-5
  Adom.    | 84    | 192    | 5            | 1464    | 1-13
  Food     | 212   | 20     | 2            | 6360    | 1-5
- 29. Context Aware Methods. Pre-filtering based approach (G. Adomavicius et al.): computes recommendations using only the ratings made in the same context as the target one. Item splitting method (L. Baltrunas, F. Ricci): identifies items which have significant differences in their rating under different context situations.
- 30. Results: Context vs. No Context. Figure: Comparison of matrix (no context) and tensor (context) factorization on the Adom and Food data (MAE, panels (a) and (b)).
- 31. Yahoo! Artificial Data. Figure: Comparison of context-aware methods (No Context, Reduction, Item-Split, Tensor Factorization) on the Yahoo! artificial data for α = 0.1, 0.5, 0.9 (MAE on the y-axis).
- 32. Yahoo! Artificial Data. Figure: MAE of No Context, Reduction, Item-Split and Tensor Factorization as a function of the probability of contextual influence.
- 33. Tensor Factorization. Figure: Comparison of context-aware methods (Reduction, Item-Split, Tensor Factorization) on the Adom data (MAE on the y-axis).
- 34. Tensor Factorization. Figure: Comparison of context-aware methods (No context, Reduction, Tensor Factorization) on the Food data (MAE on the y-axis).
- 35. Conclusions. Tensor Factorization methods seem to be promising for CARS. Many different TF methods exist. The tensor representation of context data seems promising. Future work: extend to implicit taste data.
- 36. Thank You!