%0 Journal Article
%1 alonso2008crowdsourcing
%A Alonso, Omar
%A Rose, Daniel E.
%A Stewart, Benjamin
%C New York, NY, USA
%D 2008
%I ACM
%J SIGIR Forum
%K crowdsourcing evaluation ir relevance
%N 2
%P 9--15
%T Crowdsourcing for relevance evaluation
%U http://doi.acm.org/10.1145/1480506.1480508
%V 42
%X Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.