Meeting User Information Needs in Recommender Systems
University of Minnesota, Minneapolis, MN, USA (2006)

In order to build relevant, useful, and effective recommender systems, researchers need to understand why users come to these systems and how users judge recommendation lists. Today, researchers use accuracy-based metrics to judge goodness, yet these metrics cannot capture users' criteria for judging recommendation usefulness. We need to rethink recommenders from a user's perspective: they help users find new information. Thus, we need to know not only about the user, but also what the user is looking for. In this dissertation, we explore how to tailor recommendation lists not just to a user, but to the user's current information-seeking task. We argue that each recommender algorithm has specific strengths and weaknesses that differ from those of other algorithms; thus, different recommender algorithms are better suited to specific users and their information-seeking tasks. A recommender system should, then, select and tune the appropriate recommender algorithm (or algorithms) for a given user/information-seeking-task combination. To support this, we present results in three areas. First, we apply recommender systems to the domain of peer-reviewed computer science research papers, a domain where users have external criteria for selecting items to consume. The effectiveness of our approach is validated through several sets of experiments. Second, we argue that current recommender systems research is focused not on user needs, but rather on algorithm design and performance. To bring users back into focus, we reflect on how users perceive recommenders and the recommendation process, and present Human-Recommender Interaction theory, a framework and language for describing recommenders and the recommendation lists they generate. Third, we look at different ways of evaluating recommender system algorithms. To this end, we propose a new set of recommender metrics, run experiments on several recommender algorithms using these metrics, and categorize the differences we discovered. Through Human-Recommender Interaction and these new metrics, we can bridge users and their needs with recommender algorithms to generate more useful recommendation lists.