Context-aware similarities within the factorization framework (CaRR 2013 presentation)

Balázs Hidasi, Head of Data Mining and Research at Gravity R&D
CONTEXT-AWARE SIMILARITIES WITHIN THE FACTORIZATION FRAMEWORK

Balázs Hidasi
Domonkos Tikk

CARR WORKSHOP, 5TH FEBRUARY 2013, ROME
OUTLINE
 Background & scope
 Levels of CA similarity

 Experiments

 Future work
BACKGROUND

[Diagram: this work lies at the intersection of implicit feedback, context, factorization, and item-to-item (I2I) recommendation.]
IMPLICIT FEEDBACK
 User preference not coded explicitly in the data
 E.g. purchase history
     Presence → preference assumed
     Absence → ???

 Binary "preference" matrix
     Zeroes should be considered
 Optimize for
     Weighted RMSE (see the sketch below)
     Partial ordering (ranking)
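A minimal numpy sketch (not part of the original slides) of the binary preference matrix and the weighted RMSE objective mentioned above; the weight w0 placed on the unobserved zeros is an assumed placeholder value.

```python
import numpy as np

def weighted_rmse(R, W, P, Q):
    """Weighted RMSE between the binary preference matrix R and the
    low-rank reconstruction P @ Q.T, with per-cell weights W."""
    E = R - P @ Q.T                      # error on every user-item cell
    return np.sqrt((W * E ** 2).sum() / W.sum())

# Toy example: 3 users x 4 items, 1 = observed event (preference assumed)
R = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 1]], dtype=float)
w0 = 0.1                                 # assumed weight for the zeros
W = np.where(R > 0, 1.0, w0)             # observed cells get full weight

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(3, 2))   # user feature vectors
Q = rng.normal(scale=0.1, size=(4, 2))   # item feature vectors
print(weighted_rmse(R, W, P, Q))
```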
CONTEXT
 Can mean anything
 Here: event context
     User U has an event on Item I while the context is C

 E.g.: time, weather, mood of U, freshness of I, etc.
 In the experiments:
       Seasonality (time of the day or time of the week)
           Time period: week / day
FACTORIZATION I
   Preference data can be organized into a matrix
     Size of dimensions is high
     Data is sparse

   Approximate this matrix by the product of two low rank matrices
     Each item and user has a feature vector
     Predicted preference is the scalar product of the appropriate vectors:
       $r_{u,i} = U_u^T I_i$
   Here we optimize for wRMSE (implicit case)
       Learning features with ALS
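A minimal sketch of the ALS learning referred to above, assuming dense toy data and the same kind of down-weighted zeros; the sizes, regularization and weights are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def als_step(R, W, X, Y, reg=0.1):
    """One ALS half-step: re-solve the rows of X (e.g. the user features)
    with the other factor Y fixed, minimizing the weighted squared error
    plus an L2 penalty."""
    k = Y.shape[1]
    for u in range(R.shape[0]):
        Wu = np.diag(W[u])                       # per-cell weights of row u
        A = Y.T @ Wu @ Y + reg * np.eye(k)
        b = Y.T @ Wu @ R[u]
        X[u] = np.linalg.solve(A, b)
    return X

rng = np.random.default_rng(0)
R = (rng.random((5, 8)) < 0.3).astype(float)     # toy binary implicit feedback
W = np.where(R > 0, 1.0, 0.1)                    # small weight on the zeros
U = rng.normal(scale=0.1, size=(5, 3))           # user features
I = rng.normal(scale=0.1, size=(8, 3))           # item features

for _ in range(10):                              # alternate until convergence
    U = als_step(R, W, U, I)                     # fix items, solve users
    I = als_step(R.T, W.T, I, U)                 # fix users, solve items

print("predicted preference for user 0, item 2:", U[0] @ I[2])
```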
FACTORIZATION II.
 Context introduced → additional context dimension
 Matrix → tensor (table of records)

 Models for preference prediction:
     Elementwise product model
       Weighted scalar product: $r_{u,i,c} = \mathbf{1}^T (U_u \circ I_i \circ C_c)$
     Pairwise model
       Context-dependent user/item bias: $r_{u,i,c} = U_u^T I_i + I_i^T C_c + U_u^T C_c$
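The two prediction models above are cheap to evaluate once the feature vectors are learned; an illustrative numpy sketch with randomly drawn toy vectors (the feature count of 4 is an assumption):

```python
import numpy as np

def predict_elementwise(U_u, I_i, C_c):
    """Elementwise product model: r = 1^T (U_u o I_i o C_c),
    i.e. a scalar product reweighted feature-by-feature by the context."""
    return np.sum(U_u * I_i * C_c)

def predict_pairwise(U_u, I_i, C_c):
    """Pairwise model: r = U_u^T I_i + I_i^T C_c + U_u^T C_c,
    the context acting as a context-dependent user/item bias."""
    return U_u @ I_i + I_i @ C_c + U_u @ C_c

rng = np.random.default_rng(1)
U_u, I_i, C_c = rng.normal(size=(3, 4))   # toy user, item, context features
print(predict_elementwise(U_u, I_i, C_c))
print(predict_pairwise(U_u, I_i, C_c))
```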
ITEM-TO-ITEM RECOMMENDATIONS
 Items similar to the current item
 E.g.: user cold start, related items, etc.

 Approaches: association rules, similarity between item consumption vectors, etc.
 In the factorization framework:
     Similarity between the feature vectors
       Scalar product: $s_{i,j} = I_i^T I_j$
       Cosine similarity: $s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2}$
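A small sketch of item-to-item recommendation from the learned item feature matrix, using both similarity definitions above; the catalogue size and feature count are toy assumptions.

```python
import numpy as np

def scalar_similarity(I):
    """All pairwise scalar products s_ij = I_i^T I_j (rows of I are items)."""
    return I @ I.T

def cosine_similarity(I, eps=1e-12):
    """Cosine similarity s_ij = I_i^T I_j / (||I_i||_2 ||I_j||_2)."""
    norms = np.linalg.norm(I, axis=1, keepdims=True)
    return (I @ I.T) / (norms * norms.T + eps)

def similar_items(I, item_id, n=20):
    """Top-n items most similar to a seed item (the seed itself excluded)."""
    sims = cosine_similarity(I)[item_id]
    sims[item_id] = -np.inf
    return np.argsort(-sims)[:n]

rng = np.random.default_rng(2)
I = rng.normal(size=(100, 8))          # toy: 100 items, 8 latent features
print(similar_items(I, item_id=0, n=5))
```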
SCOPE OF THIS PROJECT
   Examination of whether item similarities can be improved
     using context-aware learning or prediction
     compared to the basic feature-based solution

   Motivation:
       If factorization models are used anyway, it would be
        good to use them for I2I recommendations as well
   Out of scope:
       Comparison with other approaches (e.g. association
        rules)
CONTEXT-AWARE SIMILARITIES: LEVEL1
   The form of computing similarity remains:
     $s_{i,j} = I_i^T I_j$   or   $s_{i,j} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2}$



   Similarity is NOT CA
   Context-aware learning is used
     Assumption: item models will be more accurate
      Reason: during learning, the context is modelled
       separately
   Elementwise product model
       Context related effects coded in the context feature vector
   Pairwise model
       Context related biases removed
CONTEXT-AWARE SIMILARITIES: LEVEL2
 Incorporating the context features
 Elementwise product model
     Similarities reweighted by the context feature
     Assumption: will be sensitive to the quality of the context
       $s_{i,j,c} = \mathbf{1}^T (I_i \circ C_c \circ I_j)$
       $s_{i,j,c} = \frac{\mathbf{1}^T (I_i \circ C_c \circ I_j)}{\|I_i\|_2 \|I_j\|_2}$

 Pairwise model
     Context dependent promotions/demotions for the participating items
     Assumption: minor improvements to the basic similarity
       $s_{i,j,c} = I_i^T I_j + I_i^T C_c + I_j^T C_c$
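Both Level-2 similarities can be computed directly from the learned feature vectors; a minimal sketch with assumed toy vectors (not the authors' implementation):

```python
import numpy as np

def sim_l2_elementwise(I_i, I_j, C_c, cosine=False):
    """Level-2 elementwise similarity s = 1^T (I_i o C_c o I_j),
    optionally divided by ||I_i||_2 ||I_j||_2 (cosine variant)."""
    s = np.sum(I_i * C_c * I_j)
    if cosine:
        s /= np.linalg.norm(I_i) * np.linalg.norm(I_j)
    return s

def sim_l2_pairwise(I_i, I_j, C_c):
    """Level-2 pairwise similarity s = I_i^T I_j + I_i^T C_c + I_j^T C_c;
    the context terms promote/demote the two participating items."""
    return I_i @ I_j + I_i @ C_c + I_j @ C_c

rng = np.random.default_rng(3)
I_i, I_j, C_c = rng.normal(size=(3, 4))      # toy feature vectors
print(sim_l2_elementwise(I_i, I_j, C_c, cosine=True))
print(sim_l2_pairwise(I_i, I_j, C_c))
```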
CONTEXT-AWARE SIMILARITIES: LEVEL2
NORMALIZATION

 Normalization of context vector
 Only for cosine similarity

 Elementwise product model:
     Makes no difference in the ordering
     Recommendation in a given context to a given user

   Pairwise model
     Might affect results
     Controls the importance of item
      promotions/demotions
          Without context normalization:
            $s_{i,j,c} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2} + \frac{I_i^T C_c}{\|I_i\|_2} + \frac{I_j^T C_c}{\|I_j\|_2}$
          With context normalization:
            $s_{i,j,c} = \frac{I_i^T I_j}{\|I_i\|_2 \|I_j\|_2} + \frac{I_i^T C_c}{\|I_i\|_2 \|C_c\|_2} + \frac{I_j^T C_c}{\|I_j\|_2 \|C_c\|_2}$
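The effect of normalizing the context vector can be seen in the following sketch of the pairwise cosine similarity (an illustrative implementation, not the original code):

```python
import numpy as np

def sim_l2_pairwise_cosine(I_i, I_j, C_c, normalize_context=False):
    """Cosine form of the Level-2 pairwise similarity.
    Without context normalization the context terms are divided only by
    the item norms; with it they are also divided by ||C_c||_2, which
    damps the strength of the item promotions/demotions."""
    ni, nj, nc = (np.linalg.norm(v) for v in (I_i, I_j, C_c))
    base = (I_i @ I_j) / (ni * nj)
    if normalize_context:
        return base + (I_i @ C_c) / (ni * nc) + (I_j @ C_c) / (nj * nc)
    return base + (I_i @ C_c) / ni + (I_j @ C_c) / nj

rng = np.random.default_rng(4)
I_i, I_j, C_c = rng.normal(size=(3, 4))      # toy feature vectors
print(sim_l2_pairwise_cosine(I_i, I_j, C_c, normalize_context=False))
print(sim_l2_pairwise_cosine(I_i, I_j, C_c, normalize_context=True))
```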
EXPERIMENTS - SETUP
   Four implicit datasets
     LastFM 1K – music
     TV1, TV2 – IPTV
     Grocery – online grocery shopping

   Context: seasonality
       Both with manually and automatically determined
        time bands
   Evaluation: recommend similar items to the
    users’ previous item
     Recall@20
     MAP@20
     Coverage@20
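A rough sketch of how Recall@20 and Coverage@20 can be computed from per-event top-N lists (MAP@20 omitted for brevity); the data below is a toy assumption:

```python
def recall_at_n(recommended, relevant, n=20):
    """Share of test events whose actual next item appears in the top-n list."""
    hits = sum(1 for recs, rel in zip(recommended, relevant) if rel in recs[:n])
    return hits / len(relevant)

def coverage_at_n(recommended, n_items, n=20):
    """Share of the catalogue that appears in at least one top-n list."""
    shown = {item for recs in recommended for item in recs[:n]}
    return len(shown) / n_items

recommended = [[3, 7, 1], [2, 5, 9], [7, 3, 0]]   # toy ranked similar-item lists
relevant = [7, 9, 4]                              # the users' actual next items
print(recall_at_n(recommended, relevant))         # 2/3
print(coverage_at_n(recommended, n_items=10))     # 7/10
```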
EXPERIMENTS - RESULTS
 [Bar charts not reproduced. The four panels show the relative improvement in Recall@20 and Coverage@20 over the context-unaware baseline:
    Improvement for Grocery: all reported values positive, up to 29.40%
    Improvement for LastFM: up to 183.91%, down to -21.96%
    Improvement for TV1: up to 797.51%, down to -3.53%
    Improvement for TV2: mostly negative, up to 0.91%, down to -82.32%
  From left to right: L1 elementwise, L1 pairwise, L2 elementwise, L2 pairwise, L2 pairwise (norm)]
EXPERIMENTS - CONCLUSION
 Context awareness generally helps
 Improvement greatly depends on the method and
  on the quality of the context
 All but the elementwise level2 method:
     Minor improvements
     Tolerant of context quality

   Elementwise product level2:
     Possibly huge improvements
      Or a huge decrease in recommendation quality
     Depends on the context/problem
(POSSIBLE) FUTURE WORK
 Experimentation with different contexts
 Different similarity between feature vectors

 Predetermination of whether the context is useful for
     User bias
     Item bias
     Reweighting

 Predetermination of context quality
 Different evaluation methods
       E.g. recommend to session
THANKS FOR THE ATTENTION!




For more of my recommender systems related research visit my website:
http://www.hidasi.eu

Similar to Context-aware similarities within the factorization framework (CaRR 2013 presentation)

Context-aware similarities within the factorization framework - presented at ... by Domonkos Tikk
RecSys 2008: Social Ranking by UCL-CS MobiSys
Recommendation systems in the scope of opinion formation: a model by Marcel Blattner, PhD
Statistical Models for Massive Web Data by Deepak Agarwal
CFAR-m Presentation English by businessangeleu
Leveraging collaborativetaggingforwebitemdesign ajithajjarani by Ajith Ajjarani
Partial Object Detection in Inclined Weather Conditions by IRJET Journal
Transferring Semantic Categories with Vertex Kernels: Recommendations with Se... by Matthew Rowe

More from Balázs Hidasi

Egyedi termék kreatívok tömeges gyártása generatív AI segítségével
The Effect of Third Party Implementations on Reproducibility
GRU4Rec v2 - Recurrent Neural Networks with Top-k Gains for Session-based Rec...
Deep Learning in Recommender Systems - RecSys Summer School 2017
Parallel Recurrent Neural Network Architectures for Feature-rich Session-base...
Context aware factorization methods for implicit feedback based recommendatio...
Deep learning to the rescue - solving long standing problems of recommender ...
Deep learning: the future of recommendations
Context-aware preference modeling with factorization
Approximate modeling of continuous context in factorization algorithms (CaRR1...
Utilizing additional information in factorization methods (research overview,...
Az implicit ajánlási probléma és néhány megoldása (BME TMIT szeminárium előad...
iTALS: implicit tensor factorization for context-aware recommendations (ECML/...
Initialization of matrix factorization (CaRR 2012 presentation)
ShiftTree: model alapú idősor-osztályozó (VK 2009 előadás)
ShiftTree: model alapú idősor-osztályozó (ML@BP előadás, 2012)
ShiftTree: model based time series classifier (ECML/PKDD 2011 presentation)
