Crowdsourcing in Article Evaluation
ACM WebSci'11, pages 1--4 (June 2011). WebSci Conference 2011.

Qualitative journal evaluation makes use of cumulated content descriptions of single articles. These can be represented by author-generated keywords, professionally indexed subject headings, automatically extracted terms, or reader-generated tags as used in social bookmarking systems. It is assumed that particularly the users' view on article content differs significantly from the authors' or indexers' perspectives. To verify this assumption, title and abstract terms, author keywords, Inspec subject headings, KeyWords Plus™, and tags are compared by calculating the overlap between the respective datasets. Our approach includes extensive term preprocessing (i.e. stemming, spelling unification) to obtain a homogeneous term collection. When term overlap is calculated for every single document of the dataset, similarity values are low. Thus, the presented study confirms the assumption that the different types of keywords each reflect a different perspective on the articles' contents and that tags (cumulated across articles) can be used in journal evaluation to represent a reader-specific view on published content.
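
The abstract does not spell out which overlap measure or preprocessing pipeline is used. The following Python sketch illustrates one plausible reading: it assumes Jaccard similarity as the overlap measure (chosen here because it is symmetric and normalized by set size), NLTK's Porter stemmer for stemming, and a small hypothetical SPELLING_MAP for spelling unification. The keyword sets in the example document are invented for illustration and are not taken from the study's data.

    # Hedged sketch of per-document keyword-overlap comparison.
    # Jaccard similarity, the Porter stemmer, and SPELLING_MAP are
    # assumptions for illustration; the paper may use other choices.
    from itertools import combinations
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()

    # Hypothetical spelling unifications (e.g. British/American variants).
    SPELLING_MAP = {"analyse": "analyze", "colour": "color"}

    def preprocess(terms):
        """Lowercase, unify spelling variants, and stem each term."""
        normalized = set()
        for term in terms:
            words = [SPELLING_MAP.get(w, w) for w in term.lower().split()]
            normalized.add(" ".join(stemmer.stem(w) for w in words))
        return normalized

    def jaccard(a, b):
        """Overlap of two term sets; 0.0 when both are empty."""
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    # Invented example document with the five keyword sources
    # named in the abstract.
    document = {
        "title_abstract": {"crowdsourcing", "article evaluation", "tags"},
        "author_keywords": {"social bookmarking", "tagging"},
        "inspec": {"information retrieval", "indexing"},
        "keywords_plus": {"folksonomy"},
        "tags": {"tagging", "crowdsourcing", "evaluation"},
    }

    # Pairwise overlap between every pair of keyword sources.
    for (name_a, terms_a), (name_b, terms_b) in combinations(document.items(), 2):
        sim = jaccard(preprocess(terms_a), preprocess(terms_b))
        print(f"{name_a} vs {name_b}: {sim:.2f}")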