TY - JOUR
AU - Alonso, Omar
AU - Rose, Daniel E.
AU - Stewart, Benjamin
T1 - Crowdsourcing for relevance evaluation
JO - SIGIR Forum
PY - 2008/11/
VL - 42
IS - 2
SP - 9
EP - 15
UR - http://doi.acm.org/10.1145/1480506.1480508
M3 - 10.1145/1480506.1480508
KW - ir
KW - crowdsourcing
KW - relevance
KW - evaluation
AB - Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task.
ER -