B. Navarro Bullock, R. Jäschke, and A. Hotho. Tagging data as implicit feedback for learning-to-rank. In Proceedings of the ACM WebSci Conference, pages 1--4, New York, NY, USA, June 2011. ACM.

Tags: 2011, bookmarking, folksonomy, letor, myown, ranking, social, tagging

Abstract: Learning-to-rank methods automatically generate ranking functions which can be used to order unknown resources according to their relevance for a specific search query.
The training data used to construct such a model consists of features describing a document-query pair together with relevance scores indicating how important the document is for the query. In general, these relevance scores are derived either by asking experts to manually assess search results or by exploiting user search behaviour such as click data. Human evaluation of ranking results yields explicit relevance scores, but is expensive to obtain. Click data can be logged from user interaction with a search engine, but this feedback is noisy. In this paper, we explore a novel source of implicit feedback for web search: tagging data. Creating relevance feedback from tagging data provides a further source of implicit relevance feedback, which helps improve the reliability of automatically generated relevance scores and thereby the quality of learning-to-rank models.
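To illustrate the idea of tagging data as implicit feedback, the following minimal sketch (not taken from the paper; the triple format and the counting heuristic are assumptions for illustration) derives a graded relevance score for each resource by counting how many distinct users tagged it with a query term in a folksonomy of (user, resource, tag) triples:

```python
def implicit_relevance(query_terms, taggings):
    """Derive graded relevance scores from tagging data.

    `taggings` is a hypothetical folksonomy sample given as
    (user, resource, tag) triples. The score of a resource is the
    number of distinct users who assigned one of the query terms
    as a tag to it -- a simple proxy for relevance feedback.
    """
    users_per_resource = {}
    for user, resource, tag in taggings:
        if tag in query_terms:
            users_per_resource.setdefault(resource, set()).add(user)
    return {res: len(users) for res, users in users_per_resource.items()}

# Toy folksonomy: three users tagging two resources.
taggings = [
    ("u1", "d1", "python"), ("u2", "d1", "python"),
    ("u1", "d2", "java"), ("u3", "d1", "code"),
    ("u3", "d2", "python"),
]
scores = implicit_relevance({"python"}, taggings)
# d1 was tagged "python" by two users, d2 by one,
# so d1 receives the higher implicit relevance score.
```

Such graded scores could then serve as training labels for a learning-to-rank model in place of (or alongside) expert judgements and click data.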