TY  - GEN
AU  - Weston, Jason
AU  - Wang, Chong
AU  - Weiss, Ron
AU  - Berenzweig, Adam
T1  - Latent Collaborative Retrieval
PY  - 2012/
UR  - http://arxiv.org/abs/1206.4603
KW  - recommender
KW  - tensor
KW  - toread
AB  - Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query x user x item tensor for training instead of the more traditional user x item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines.
ER  -

TY  - CONF
AU  - Chang, Jonathan
AU  - Boyd-Graber, Jordan L.
AU  - Gerrish, Sean
AU  - Wang, Chong
AU  - Blei, David M.
A2  - Bengio, Yoshua
A2  - Schuurmans, Dale
A2  - Lafferty, John D.
A2  - Williams, Christopher K. I.
A2  - Culotta, Aron
T1  - Reading Tea Leaves: How Humans Interpret Topic Models
T2  - NIPS
PB  - Curran Associates, Inc.
PY  - 2009/
SP  - 288
EP  - 296
UR  - http://books.nips.cc/papers/files/nips22/NIPS2009_0125.pdf
SN  - 9781615679119
KW  - model
KW  - topic
KW  - intelligence
KW  - cirg
KW  - collective
AB  - Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.
ER  -