Abstract

Qualitative journal evaluation makes use of cumulated content descriptions of single articles. These can be represented by author-generated keywords, professionally indexed subject headings, automatically extracted terms, or reader-generated tags as used in social bookmarking systems. It is assumed that the users' view on article content in particular differs significantly from the authors' or indexers' perspectives. To verify this assumption, title and abstract terms, author keywords, Inspec subject headings, KeyWords Plus™, and tags are compared by calculating the overlap between the respective datasets. Our approach includes extensive term preprocessing (i.e. stemming, spelling unification) to obtain a homogeneous term collection. When term overlap is calculated for every single document of the dataset, similarity values are low. Thus, the presented study confirms the assumption that the different types of keywords each reflect a different perspective on the articles' contents and that tags (cumulated across articles) can be used in journal evaluation to represent a reader-specific view on published content.
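The overlap computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it normalizes two term sets for one article (e.g. author keywords vs. reader tags) with lowercasing and a crude suffix-stripping stem as a stand-in for the paper's full stemming and spelling unification, then compares them via Jaccard similarity. All example terms are invented.

```python
def normalize(term: str) -> str:
    """Lowercase and apply a crude suffix-stripping stem
    (placeholder for the paper's full preprocessing)."""
    t = term.lower().strip()
    for suffix in ("ing", "ed", "es", "s"):
        if t.endswith(suffix) and len(t) > len(suffix) + 2:
            return t[: -len(suffix)]
    return t

def jaccard_overlap(terms_a, terms_b) -> float:
    """Jaccard similarity of two normalized term sets."""
    a = {normalize(t) for t in terms_a}
    b = {normalize(t) for t in terms_b}
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Invented example: keyword sets for a single hypothetical article.
author_keywords = ["social bookmarking", "tagging", "journal evaluation"]
reader_tags = ["tags", "folksonomy", "journal evaluation"]

print(jaccard_overlap(author_keywords, reader_tags))  # one shared stem out of five
```

A low value here mirrors the paper's per-document finding: each keyword type covers the article's content from a different angle, so pairwise term overlap stays small.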

Description

Crowdsourcing in Article Evaluation

Links and Resources

URL:
BibTeX key:
peters2011crowdsourcing

