PUMA publications for author Benjamin Stewart
https://puma.uni-kassel.de/author/Benjamin%20Stewart

Crowdsourcing for relevance evaluation
O. Alonso, D. E. Rose, and B. Stewart. SIGIR Forum 42(2): 9--15, November 2008. New York, NY, USA.
Tags: ir, crowdsourcing, relevance, evaluation
https://puma.uni-kassel.de/bibtex/24a47833e85558b740788607cb79ba795/jaeschke

Abstract: Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.