TY - CONF
AU - Kumar, Ravi
AU - Tomkins, Andrew
AU - Vassilvitskii, Sergei
AU - Vee, Erik
T1 - Inverting a Steady-State
T2 - Proceedings of the Eighth ACM International Conference on Web Search and Data Mining
PB - ACM
CY - New York, NY, USA
PY - 2015/
SP - 359
EP - 368
UR - http://doi.acm.org/10.1145/2684822.2685310
M3 - 10.1145/2684822.2685310
KW - markov
KW - state
KW - steady
KW - toread
SN - 978-1-4503-3317-7
AB - We consider the problem of inferring choices made by users based only on aggregate data containing the relative popularity of each item. We propose a framework that models the problem as that of inferring a Markov chain given a stationary distribution. Formally, we are given a graph and a target steady-state distribution on its nodes. We are also given a mapping from per-node scores to a transition matrix, drawn from a broad family of such mappings. The goal is to set the score of each node such that the resulting transition matrix induces the desired steady state. We prove sufficient conditions under which this problem is feasible and, for the feasible instances, obtain a simple algorithm for a generic version of the problem. This iterative algorithm provably finds the unique solution to this problem and has a polynomial rate of convergence; in practice we find that the algorithm converges after fewer than ten iterations. We then apply this framework to choice problems in online settings and show that our algorithm is able to explain the observed data and predict user choices much better than competing baselines across a variety of diverse datasets.
ER -