Conference articles
Stephan Doerfel and Robert Jäschke. An analysis of tag-recommender evaluation procedures. In: Proceedings of the 7th ACM Conference on Recommender Systems, RecSys '13, pages 343-346. ACM, New York, NY, USA, 2013.
Since the rise of collaborative tagging systems on the web, the tag recommendation task -- suggesting suitable tags to users of such systems while they add resources to their collection -- has been tackled. However, the (offline) evaluation of tag recommendation algorithms usually suffers from difficulties like the sparseness of the data or the cold-start problem for new resources or users. Previous studies therefore often used so-called post-cores (specific subsets of the original datasets) for their experiments. In this paper, we conduct a large-scale experiment in which we analyze different tag recommendation algorithms on different cores of three real-world datasets. We show that a recommender's performance depends on the particular core, and we explore correlations between performances on different cores.
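The post-cores mentioned in this abstract are typically obtained by iterative filtering of the tagging data. The following is a minimal Python sketch, not taken from the paper, assuming the common p-core definition over (user, resource, tag) assignments: assignments are repeatedly dropped while any of their entities occurs fewer than p times, until a fixed point is reached. The function name and the triple representation are illustrative assumptions.

from collections import Counter

def p_core(assignments, p):
    # assignments: iterable of (user, resource, tag) triples; p: minimum number of
    # occurrences required for every remaining user, resource, and tag.
    core = list(assignments)
    while True:
        users = Counter(u for u, r, t in core)
        resources = Counter(r for u, r, t in core)
        tags = Counter(t for u, r, t in core)
        kept = [(u, r, t) for u, r, t in core
                if users[u] >= p and resources[r] >= p and tags[t] >= p]
        if len(kept) == len(core):  # fixed point reached: nothing more to drop
            return kept
        core = kept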
I. Blohm, F. Ott, U. Bretschneider, M. Huber, M. Rieger, F. Glatz, M. Koch, J. M. Leimeister and H. Krcmar. Extending Open Innovation Platforms into the real world - Using Large Displays in Public Spaces. In: 10th European Academy of Management Conference (EURAM) 2010, Rome, Italy, 2010. 197 (45-10).
Denis Parra and Peter Brusilovsky. Evaluation of Collaborative Filtering Algorithms for Recommending Articles on CiteULike. In: Proceedings of the Workshop on Web 3.0: Merging Semantic Web and Social Web, volume 467 of CEUR Workshop Proceedings, 2009.
Motivated by the potential use of collaborative tagging systems to develop new recommender systems, we have implemented and compared three variants of user-based collaborative filtering algorithms to provide recommendations of articles on CiteULike. In our first approach, Classic Collaborative Filtering (CCF), we use Pearson correlation to calculate similarity between users and a classic adjusted-ratings formula to rank the recommendations. Our second approach, Neighbor-weighted Collaborative Filtering (NwCF), incorporates the number of raters in the ranking formula of the recommendations. A modified version of the Okapi BM25 IR model over users' tags is implemented in our third approach to form the user neighborhood. Our results suggest that incorporating the number of raters into the algorithms leads to an improvement of precision, and they also suggest that tags can be considered as an alternative to Pearson correlation for calculating the similarity between users and their neighbors in a collaborative tagging system.
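As a rough illustration of the first variant described above (a sketch, not the authors' code), the Python snippet below implements user-based collaborative filtering with Pearson correlation and a classic mean-adjusted prediction formula. The data layout (a dict mapping each user to an item-to-rating dict) and the function names are assumptions made for the example.

from math import sqrt

def pearson(ra, rb):
    # Pearson correlation over the items both users have rated.
    common = set(ra) & set(rb)
    if len(common) < 2:
        return 0.0
    ma = sum(ra[i] for i in common) / len(common)
    mb = sum(rb[i] for i in common) / len(common)
    num = sum((ra[i] - ma) * (rb[i] - mb) for i in common)
    den = sqrt(sum((ra[i] - ma) ** 2 for i in common)) * \
          sqrt(sum((rb[i] - mb) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item, ratings):
    # Mean-adjusted prediction: the user's mean rating plus the similarity-weighted
    # deviation of each neighbor's rating for the item from that neighbor's mean.
    ru = ratings[user]
    mu = sum(ru.values()) / len(ru)
    num = den = 0.0
    for other, rv in ratings.items():
        if other == user or item not in rv:
            continue
        w = pearson(ru, rv)
        mv = sum(rv.values()) / len(rv)
        num += w * (rv[item] - mv)
        den += abs(w)
    return mu + num / den if den else mu

Following the abstract, the neighbor-weighted variant would additionally factor the number of raters of an item into the ranking, and the third variant would form the neighborhood from a BM25-style similarity over users' tags instead of pearson(); neither of those formulas is reproduced here.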
Journal articles
Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen and John T. Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1):5-53, 2004.
Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
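As a toy illustration of the kind of metric analysis this abstract describes (assumed for illustration, not taken from the article), the Python sketch below computes two accuracy metrics per user and their correlation across users; strongly correlated metrics would fall into the same equivalence class. The article itself covers a much broader set of metrics.

from math import sqrt

def mae(pred, truth):
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def rmse(pred, truth):
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs)) * sqrt(sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def metric_correlation(per_user_results):
    # per_user_results: list of (predictions, true_ratings) pairs, one per user.
    maes = [mae(p, t) for p, t in per_user_results]
    rmses = [rmse(p, t) for p, t in per_user_results]
    return pearson(maes, rmses)  # high correlation suggests the metrics behave alike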