Most existing approaches in Context-Aware Recommender Systems (CRS) focus on recommending relevant items to users by taking into account contextual information, such as time, location, or social aspects. However, none of them has considered the problem of the dynamicity of the user's content. This problem has been studied in the reinforcement learning community, but without paying much attention to the contextual aspect of the recommendation. In this paper, we introduce an algorithm that tackles the dynamicity of the user's content by modeling the CRS as a contextual bandit problem. It is based on dynamic exploration/exploitation and includes a metric for deciding which user situations are most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework, we conduct extensive evaluations with real online event log data. The experimental results and detailed analysis demonstrate that our algorithm outperforms the surveyed algorithms.
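To make the idea of situation-dependent exploration/exploitation concrete, the following is a minimal sketch of a contextual epsilon-greedy bandit whose exploration rate varies with the user's situation. The class name, the situation-to-epsilon heuristic, and all parameters are illustrative assumptions, not the paper's exact algorithm or metric.

```python
import random
from collections import defaultdict

class ContextualEpsilonGreedy:
    """Epsilon-greedy bandit with a per-situation exploration rate (sketch)."""

    def __init__(self, arms, base_epsilon=0.1):
        self.arms = arms
        self.base_epsilon = base_epsilon
        # Per-(situation, arm) reward statistics.
        self.counts = defaultdict(int)
        self.values = defaultdict(float)

    def epsilon(self, situation):
        # Dynamic exploration (assumed heuristic): explore more in
        # situations observed rarely, exploit in well-observed ones.
        n = sum(self.counts[(situation, a)] for a in self.arms)
        return min(1.0, self.base_epsilon * (1.0 + 10.0 / (1.0 + n)))

    def select(self, situation):
        if random.random() < self.epsilon(situation):
            return random.choice(self.arms)  # explore
        # Exploit: arm with the highest estimated reward in this situation.
        return max(self.arms, key=lambda a: self.values[(situation, a)])

    def update(self, situation, arm, reward):
        key = (situation, arm)
        self.counts[key] += 1
        # Incremental mean update of the reward estimate.
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

As the bandit accumulates observations for a given situation, its exploration rate for that situation decays toward the base rate, so recommendations shift from exploring new items to exploiting the best-known ones.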