%0 Generic
%A Weston, Jason
%A Wang, Chong
%A Weiss, Ron
%A Berenzweig, Adam
%D 2012
%T Latent Collaborative Retrieval
%3 misc
%F weston2012latent
%K recommender, tensor, toread
%X Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model users' preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query x user x item tensor for training instead of the more traditional user x item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases), and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines.
%Z arXiv:1206.4603. Comment: ICML 2012
%U http://arxiv.org/abs/1206.4603

%0 Conference Proceedings
%A Chang, Jonathan
%A Boyd-Graber, Jordan L.
%A Gerrish, Sean
%A Wang, Chong
%A Blei, David M.
%D 2009
%T Reading Tea Leaves: How Humans Interpret Topic Models
%E Bengio, Yoshua
%E Schuurmans, Dale
%E Lafferty, John D.
%E Williams, Christopher K. I.
%E Culotta, Aron
%B NIPS
%I Curran Associates, Inc.
%P 288--296
%@ 9781615679119
%3 inproceedings
%F chang2009reading
%K model, topic, intelligence, cirg, collective
%X Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.
%U http://books.nips.cc/papers/files/nips22/NIPS2009_0125.pdf