%0 Journal Article
%A Alonso, Omar
%A Rose, Daniel E.
%A Stewart, Benjamin
%D 2008
%T Crowdsourcing for relevance evaluation
%B SIGIR Forum
%I ACM
%V 42
%N 2
%P 9-15
%8 November
%@ 0163-5840
%3 article
%F alonso2008crowdsourcing
%K ir, crowdsourcing, relevance, evaluation
%X Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.
%U http://doi.acm.org/10.1145/1480506.1480508