Feature-based Similarity Models for Top-n Recommendation of New Items

Date of Submission: July 24, 2014
Report Number: 14-016
Abstract: 
Recommending new items to suitable users is an important yet challenging problem due to the lack of preference history for the new items. Non-collaborative user modeling techniques that rely on item features can be used to recommend new items; however, they use only the past preferences of each user to generate recommendations for that user. They do not utilize the past preferences of other users, thereby potentially ignoring useful information. More recent factor models transfer knowledge across users by using their preference information to provide more accurate recommendations. These methods learn a low-rank approximation of the preference matrix, which can lead to loss of information. Moreover, they may fail to learn useful patterns from very sparse datasets. In this work we present FSM, a method for top-n recommendation of new items given binary user preferences. FSM learns Feature-based item-Similarity Models, and its strength lies in combining two ideas: (i) exploiting preference information across all users to learn multiple global item similarity functions, and (ii) learning user-specific weights that determine the contribution of each global similarity function in generating recommendations for each user. FSM can be viewed as a sparse high-dimensional factor model in which the previous preferences of each user are incorporated within that user's latent representation. In this way, FSM combines the merits of item similarity models, which capture local relations among items, with those of factor models, which learn global preference patterns. A comprehensive set of experiments was conducted to compare FSM against state-of-the-art collaborative factor models and non-collaborative user modeling techniques. Results show that FSM outperforms the other techniques in terms of recommendation quality and yields better recommendations even with very sparse datasets. Results also show that FSM can efficiently handle both high-dimensional and low-dimensional item feature spaces.
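
Illustrative sketch: the following Python snippet is not the report's implementation; it only illustrates how the two ideas in the abstract could combine. The function and variable names, and the bilinear parameterization of each global similarity function (x_i^T W x_j over item features), are assumptions made for this example.

    import numpy as np

    def score_new_item(x_new, user_hist_feats, W_list, user_weights):
        """Illustrative FSM-style score of a new item for one user.

        x_new           : (d,) feature vector of the new item
        user_hist_feats : (n_u, d) feature vectors of items the user previously preferred
        W_list          : list of L (d, d) matrices, one per global similarity function
                          (bilinear form is an assumed parameterization)
        user_weights    : (L,) user-specific weights over the similarity functions
        """
        score = 0.0
        for m_ul, W in zip(user_weights, W_list):
            # similarity of the new item to each item in the user's history
            sims = user_hist_feats @ W @ x_new
            # aggregate over the history, weighted by the user-specific weight
            score += m_ul * sims.sum()
        return score

    # toy usage: 3 historical items, 5-dimensional features, 2 global similarity functions
    rng = np.random.default_rng(0)
    hist = rng.random((3, 5))
    x_new = rng.random(5)
    W_list = [rng.random((5, 5)) for _ in range(2)]
    m_u = np.array([0.7, 0.3])
    print(score_new_item(x_new, hist, W_list, m_u))

In this sketch, the W matrices play the role of the global item similarity functions shared across users, while m_u plays the role of the user-specific weights that mix them for a particular user.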