Inverting a Steady-State

R. Kumar, A. Tomkins, S. Vassilvitskii, and E. Vee. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining (WSDM '15), pages 359--368, New York, NY, USA. ACM, 2015.

Keywords: markov, state, steady, toread

Abstract: We consider the problem of inferring choices made by users based only on aggregate data containing the relative popularity of each item.
We propose a framework that models the problem as that of inferring a Markov chain given a stationary distribution. Formally, we are given a graph and a target steady-state distribution on its nodes. We are also given a mapping from per-node scores to a transition matrix, drawn from a broad family of such mappings. The goal is to set the scores of each node such that the resulting transition matrix induces the desired steady state. We prove sufficient conditions under which this problem is feasible and, for the feasible instances, obtain a simple algorithm for a generic version of the problem. This iterative algorithm provably finds the unique solution to this problem and has a polynomial rate of convergence; in practice we find that the algorithm converges after fewer than ten iterations. We then apply this framework to choice problems in online settings and show that our algorithm is able to explain the observed data and predict the user choices much better than other competing baselines across a variety of diverse datasets.
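The abstract's setup (a graph, a target stationary distribution, and a score-to-transition mapping) can be illustrated with a minimal sketch. The paper's exact mapping family and update rule are not reproduced here; this sketch assumes a hypothetical "random-surfer" mapping in which a walker at node i moves to neighbor j with probability proportional to j's score, and uses a simple multiplicative correction as the iterative step. All function names and the update heuristic are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def stationary(P, iters=1000):
    """Approximate the stationary distribution of a row-stochastic
    matrix P by power iteration (assumes the chain is ergodic)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
    return pi

def scores_to_transition(A, s):
    """One assumed mapping from per-node scores s to a transition matrix:
    from node i, move to neighbor j with probability proportional to s[j].
    Assumes every node in adjacency matrix A has at least one out-edge."""
    W = A * s  # weight each edge (i, j) by the score of its destination j
    return W / W.sum(axis=1, keepdims=True)

def invert_steady_state(A, target, iters=50, tol=1e-9):
    """Illustrative iteration: adjust scores so the induced chain's
    stationary distribution approaches `target` (heuristic update,
    not the paper's provably convergent algorithm)."""
    n = A.shape[0]
    s = np.ones(n)
    for _ in range(iters):
        P = scores_to_transition(A, s)
        pi = stationary(P)
        if np.max(np.abs(pi - target)) < tol:
            break
        s *= target / pi  # raise scores of under-visited nodes
        s /= s.sum()      # renormalize for numerical stability
    return s
```

On a complete graph with self-loops, for example, the induced stationary distribution is simply the normalized score vector, so `invert_steady_state(np.ones((3, 3)), np.array([0.5, 0.3, 0.2]))` recovers scores matching the target in a couple of iterations.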