TY  - CONF
AU  - Blohm, I.
AU  - Ott, F.
AU  - Bretschneider, U.
AU  - Huber, M.
AU  - Rieger, M.
AU  - Glatz, F.
AU  - Koch, M.
AU  - Leimeister, J. M.
AU  - Krcmar, H.
T1  - Extending Open Innovation Platforms into the real world - Using Large Displays in Public Spaces
T2  - 10. European Academy of Management Conference (EURAM) 2010
C1  - Rome, Italy
PY  - 2010/
IS  - 10
UR  - http://www.uni-kassel.de/fb7/ibwl/leimeister/pub/JML_197.pdf
UR  - http://pubs.wi-kassel.de/wp-content/uploads/2013/03/JML_248.pdf
KW  - IdeaMirror
KW  - collaborative
KW  - community
KW  - computing
KW  - dempub
KW  - evaluation
KW  - filtering
KW  - idea
KW  - innovation
KW  - itegpub
KW  - myown
KW  - open
KW  - pub_jml
KW  - pub_ubr
KW  - support
KW  - ubiquitous
ER  - 

TY  - CONF
AU  - Parra, Denis
AU  - Brusilovsky, Peter
T1  - Evaluation of Collaborative Filtering Algorithms for Recommending Articles on CiteULike
T2  - Proceedings of the Workshop on Web 3.0: Merging Semantic Web and Social Web
PY  - 2009/06
VL  - 467
UR  - http://ceur-ws.org/Vol-467/paper5.pdf
KW  - algorithms
KW  - citedBy:doerfel2012leveraging
KW  - collaborative
KW  - evaluation
KW  - filtering
AB  - Motivated by the potential use of collaborative tagging systems to develop new recommender systems, we have implemented and compared three variants of user-based collaborative filtering algorithms to provide recommendations of articles on CiteULike. In our first approach, Classic Collaborative Filtering (CCF), we use Pearson correlation to calculate similarity between users and a classic adjusted-ratings formula to rank the recommendations. Our second approach, Neighbor-weighted Collaborative Filtering (NwCF), incorporates the number of raters into the ranking formula of the recommendations. Our third approach forms the user neighborhood using a modified version of the Okapi BM25 IR model over users' tags. Our results suggest that incorporating the number of raters into the algorithms improves precision, and they also support that tags can be considered an alternative to Pearson correlation for calculating the similarity between users and their neighbors in a collaborative tagging system.
ER  - 

TY  - JOUR
AU  - Herlocker, Jonathan L.
AU  - Konstan, Joseph A.
AU  - Terveen, Loren G.
AU  - Riedl, John T.
T1  - Evaluating collaborative filtering recommender systems
JO  - ACM Transactions on Information Systems
PY  - 2004/01
VL  - 22
IS  - 1
SP  - 5
EP  - 53
UR  - http://doi.acm.org/10.1145/963770.963772
DO  - 10.1145/963770.963772
KW  - collaborative
KW  - evaluation
KW  - filtering
KW  - recommendation
KW  - recommender
KW  - systems
AB  - Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
ER  - 