Evaluating collaborative filtering recommender systems
Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl.
ACM Trans. Inf. Syst. (January 2004)

Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain, where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
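To make the abstract's notion of accuracy metrics and metric "equivalence classes" concrete, here is a minimal illustrative sketch (not the paper's actual experimental code): it computes two common accuracy metrics, MAE and RMSE, on predicted versus actual ratings, and then shows how one could check whether two metrics track each other by correlating made-up per-algorithm scores. All rating values and scores below are invented for illustration.

```python
import numpy as np

# Hypothetical predicted vs. actual ratings for a handful of user-item pairs.
actual = np.array([4.0, 3.0, 5.0, 2.0, 4.0])
predicted = np.array([3.5, 3.0, 4.5, 2.5, 3.0])

mae = np.mean(np.abs(predicted - actual))           # mean absolute error
rmse = np.sqrt(np.mean((predicted - actual) ** 2))  # root mean squared error
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")

# To probe whether two metrics fall into the same "equivalence class", one could
# score several candidate algorithms with each metric and measure how strongly
# the resulting score vectors correlate. The scores below are made up.
mae_scores = np.array([0.71, 0.68, 0.75, 0.80])   # per-algorithm MAE (hypothetical)
rmse_scores = np.array([0.93, 0.90, 0.99, 1.05])  # per-algorithm RMSE (hypothetical)
corr = np.corrcoef(mae_scores, rmse_scores)[0, 1]
print(f"Pearson correlation between metric scores: {corr:.2f}")
```

A high correlation between two metrics' scores across algorithms would suggest they behave as one equivalence class, in the sense the abstract describes.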
  • @hotho
  • @folke
  • @jaeschke
  • @stephandoerfel