Analysis of Music Tagging and Listening Patterns: Do Tags Really Function as Retrieval Aids?
In:
N. Agarwal, K. Xu and N. Osgood (eds.):
Social Computing, Behavioral-Cultural Modeling, and Prediction, pp. 141-152.
Springer International Publishing, 2015.
Jared Lorince, Kenneth Joseph and Peter M. Todd.
[doi]
[Abstract]
[BibTeX]
In collaborative tagging systems, it is generally assumed that users assign tags to facilitate retrieval of content at a later time. There is, however, little behavioral evidence that tags actually serve this purpose. Using a large-scale dataset from the social music website Last.fm, we explore how patterns of music tagging and subsequent listening interact to determine if there exist measurable signals of tags functioning as retrieval aids. Specifically, we describe our methods for testing if the assignment of a tag tends to lead to an increase in listening behavior. Results suggest that tagging, on average, leads to only very small increases in listening rates, and overall the data do …
Attribute Exploration on the Web.
In: P. Cellier, F. Distel and B. Ganter
(eds.):
Contributions to the 11th International Conference on Formal Concept Analysis, pp. 19-34.
2013.
Robert Jäschke and Sebastian Rudolph.
[doi]
[Abstract]
[BibTeX]
We propose an approach for supporting attribute exploration by web information retrieval, in particular by posing appropriate queries to search engines, crowdsourcing systems, and the linked open data cloud. We discuss the underlying general assumptions for this to work and the degree to which they can be taken for granted.
Creating a searchable web archive.
Foundation for National Scientific Computing, 2012.
Daniel Gomes, David Cruz, João Miranda, Miguel Costa and Simão Fontes.
[doi]
[Abstract]
[BibTeX]
The web has become a mass means of publication that is replacing printed media. However, its information is extremely ephemeral. Currently, most of the information available on the web is less than one year old. There are several initiatives worldwide that struggle to archive information from the web before it vanishes. However, search mechanisms to access this information are still limited and do not satisfy users, who demand performance similar to live-web search engines. This paper presents some of the work developed to create an efficient and effective searchable web archive service, from data acquisition to user interface design. The results of this research were applied in practice to create the Portuguese Web Archive, which has been publicly available since January 2010. It supports full-text search over 1 billion contents archived from 1996 to 2010. The developed software is available as an open source project.
Text Mining Scientific Papers: a Survey on FCA-based Information Retrieval Research.
In: P. Perner
(ed.):
Industrial Conference on Data Mining - Poster and Industry Proceedings, pp. 82-96.
IBaI Publishing, 2011.
Jonas Poelmans, Paul Elzinga, Stijn Viaene, Guido Dedene and Sergei O. Kuznetsov.
[doi]
[Abstract]
[BibTeX]
Formal Concept Analysis (FCA) is an unsupervised clustering technique, and many scientific papers are devoted to applying FCA in Information Retrieval (IR) research. We collected 103 papers published between 2003 and 2009 which mention FCA and information retrieval in the abstract, title or keywords. Using a prototype of our FCA-based toolset CORDIET, we converted the PDF files containing the papers to plain text, indexed them with Lucene using a thesaurus containing terms related to FCA research and then created the concept lattice shown in this paper. We visualized, analyzed and explored the literature with concept lattices and discovered multiple interesting research streams in IR, of which we give an extensive overview. The core contributions of this paper are the innovative application of FCA to the text mining of scientific papers and the survey of the FCA-based IR research.
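The core FCA operation such a lattice rests on can be illustrated with a toy formal context. The sketch below uses hypothetical papers and index terms and a naive concept enumeration, not CORDIET's actual algorithm:

```python
from itertools import combinations

# Toy formal context (hypothetical): papers x index terms.
context = {
    "p1": {"fca", "ir"},
    "p2": {"fca", "lattice"},
    "p3": {"fca", "ir", "lattice"},
}
all_attrs = set().union(*context.values())

def intent(objs):
    """Attributes shared by every object in objs (all attributes for the empty set)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# Enumerate formal concepts naively: close each object subset to an
# intent, then back to its extent; the closed pairs are the concepts.
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        b = intent(set(combo))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(a), sorted(b))
```

Ordering these four concepts by extent inclusion yields exactly the kind of concept lattice the survey visualizes, only at miniature scale.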
Search engines: information retrieval in practice.
2010.
W. Bruce Croft, Donald Metzler and Trevor Strohman.
[doi]
[BibTeX]
An Overview of Learning to Rank for Information Retrieval.
In: M. Burgin, M. H. Chowdhury, C. H. Ham, S. A. Ludwig, W. Su and S. Yenduri
(eds.):
CSIE (3), pp. 600-606.
IEEE Computer Society, 2009.
Xishuang Dong, Xiaodong Chen, Yi Guan, Zhiming Yu and Sheng Li.
[doi]
[BibTeX]
Citation context analysis for information retrieval.
University of Cambridge, 2009. Number 744.
Anna Ritchie.
[doi]
[Abstract]
[BibTeX]
This thesis investigates taking words from around citations to scientific papers in order to create an enhanced document representation for improved information retrieval. This method parallels how anchor text is commonly used in Web retrieval. In previous work, words from citing documents have been used as an alternative representation of the cited document but no previous experiment has combined them with a full-text document representation and measured effectiveness in a large scale evaluation. The contributions of this thesis are twofold: firstly, we present a novel document representation, along with experiments to measure its effect on retrieval effectiveness, and, secondly, we document the construction of a new, realistic test collection of scientific research papers, with references (in the bibliography) and their associated citations (in the running text of the paper) automatically annotated. Our experiments show that the citation-enhanced document representation increases retrieval effectiveness across a range of standard retrieval models and evaluation measures. In Chapter 2, we give the background to our work, discussing the various areas from which we draw together ideas: information retrieval, particularly link structure analysis and anchor text indexing, and bibliometrics, in particular citation analysis. We show that there is a close relatedness of ideas between these areas but that these ideas have not been fully explored experimentally. Chapter 3 discusses the test collection paradigm for evaluation of information retrieval systems and describes how and why we built our test collection. In Chapter 4, we introduce the ACL Anthology, the archive of computational linguistics papers that our test collection is centred around. The archive contains the most prominent publications since the beginning of the field in the early 1960s, consisting of one journal plus conferences and workshops, resulting in over 10,000 papers. 
Chapter 5 describes how the PDF papers are prepared for our experiments, including identification of references and citations in the papers, once converted to plain text, and extraction of citation information to an XML database. Chapter 6 presents our experiments: we show that adding citation terms to the full text of the papers improves retrieval effectiveness by up to 7.4%, that weighting citation terms higher relative to paper terms increases the improvement, and that varying the context from which citation terms are taken has a significant effect on retrieval effectiveness. Our main hypothesis that citation terms enhance a full-text representation of scientific papers is thus proven. There are some limitations to these experiments. The relevance judgements in our test collection are incomplete, but we have experimentally verified that the test collection is, nevertheless, a useful evaluation tool. Using the Lemur toolkit constrained the method that we used to weight citation terms; we would like to experiment with a more realistic implementation of term weighting. Our experiments with different citation contexts did not identify an optimal citation context; we would like to extend the scope of our investigation. Now that our test collection exists, we can address these issues in our experiments and leave the door open for more extensive experimentation.
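The enhanced representation the thesis evaluates can be sketched as a term multiset: a paper's own full-text terms plus up-weighted terms from windows around incoming citations. The sample texts, the weight of 2, and the whitespace tokenization below are illustrative assumptions, not the thesis's actual implementation:

```python
from collections import Counter

# Index-time representation of one cited paper: its own full text
# plus terms drawn from the context around each incoming citation.
full_text = "information retrieval evaluation test collection"
citation_contexts = [            # hypothetical citing sentences
    "seminal test collection for citation context experiments",
    "a realistic test collection of scientific papers",
]
CITATION_WEIGHT = 2              # assumed relative weight for citation terms

representation = Counter(full_text.split())
for ctx in citation_contexts:
    for term in ctx.split():
        representation[term] += CITATION_WEIGHT

# "test" and "collection" now dominate: 1 (full text) + 2 * 2 (citations) = 5
print(representation["test"], representation["collection"])
```

This mirrors how anchor text is folded into web document representations: terms chosen by citing authors reinforce, and are weighted against, the document's own vocabulary.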
Logsonomy - Social Information Retrieval with Logdata.
In:
HT '08: Proceedings of the nineteenth ACM conference on Hypertext and hypermedia, pp. 157-166.
ACM, New York, NY, USA, 2008.
Beate Krause, Robert Jäschke, Andreas Hotho and Gerd Stumme.
[doi]
[Abstract]
[BibTeX]
Social bookmarking systems constitute an established part of the Web 2.0. In such systems users describe bookmarks by keywords called tags. The structure behind these social systems, called folksonomies, can be viewed as a tripartite hypergraph of user, tag and resource nodes. This underlying network shows specific structural properties that explain its growth and the possibility of serendipitous exploration. Today's search engines represent the gateway to retrieving information from the World Wide Web. Short queries, typically consisting of two to three words, describe a user's information need. In response to the displayed results of the search engine, users click on the links of the result page that they expect to be relevant. This click data can be represented as a folksonomy in which queries are descriptions of clicked URLs. The resulting network structure, which we term a logsonomy, is very similar to that of folksonomies. To find out about its properties, we analyze the topological characteristics of the tripartite hypergraph of queries, users and bookmarks on a large snapshot of del.icio.us and on query logs of two large search engines. All three datasets show small-world properties. The tagging behavior of users, which is explained by preferential attachment of tags in social bookmarking systems, is reflected in the distribution of single query words in search engines. We conclude that the clicking behavior of search engine users, based on the displayed search results, and the tagging behavior of social bookmarking users are driven by similar dynamics.
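The tripartite structure described in the abstract — users, tags (or query words), resources — can be sketched as a set of triples. The data below is made up; tag degree distributions of this kind are among the structural properties compared across folksonomies and logsonomies:

```python
from collections import defaultdict

# A folksonomy as (user, tag, resource) triples -- hypothetical toy
# data standing in for a del.icio.us snapshot; in a logsonomy the
# tags would be query words and the resources clicked URLs.
triples = [
    ("alice", "ir", "doc1"),
    ("alice", "web", "doc1"),
    ("bob",   "ir", "doc2"),
    ("bob",   "web", "doc1"),
    ("carol", "ir", "doc1"),
]

# Tag "degree": in how many triples does each tag occur? Heavy-tailed
# degree distributions are one signature of preferential attachment.
tag_count = defaultdict(int)
for user, tag, resource in triples:
    tag_count[tag] += 1

print(sorted(tag_count.items(), key=lambda kv: -kv[1]))
# [('ir', 3), ('web', 2)]
```

The same projection applied to user or resource nodes gives the other degree distributions analyzed for small-world properties.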
Introduction to Information Retrieval.
2008.
Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze.
[doi]
[Abstract]
[BibTeX]
"Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures." -- Publisher's description.
Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions.
International Journal of Mathematical Models and Methods in Applied Sciences, 1(4):300-307, 2007.
Sung-Hyuk Cha.
[doi]
[Abstract]
[BibTeX]
Distance or similarity measures are essential to solving many pattern recognition problems such as classification, clustering, and retrieval. Various distance/similarity measures that are applicable to comparing two probability density functions (pdfs for short) are reviewed and categorized by both syntactic and semantic relationships. A correlation coefficient and a hierarchical clustering technique are adopted to reveal similarities among the numerous distance/similarity measures.
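A few of the surveyed measure families can be sketched for two discrete pdfs. The histograms below are made up; the formulas are the standard Euclidean, Kullback-Leibler, and Bhattacharyya definitions:

```python
import math

# Two discrete pdfs over the same bins (hypothetical histograms,
# each summing to 1).
p = [0.1, 0.4, 0.5]
q = [0.2, 0.3, 0.5]

# Euclidean distance (a Minkowski-family measure).
euclidean = math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Kullback-Leibler divergence (a Shannon-entropy-family measure);
# assumes q_i > 0 wherever p_i > 0, and is not symmetric.
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Bhattacharyya distance (a fidelity-family measure), derived from
# the Bhattacharyya coefficient sum(sqrt(p_i * q_i)).
bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
bhattacharyya = -math.log(bc)

print(round(euclidean, 4), round(kl, 4), round(bhattacharyya, 4))
```

The survey's point is visible even here: the three numbers disagree in scale and behavior, which is why choosing a measure family matters for classification, clustering, and retrieval.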
Information Retrieval in Folksonomies: Search and Ranking.
In: Y. Sure and J. Domingue
(eds.):
The Semantic Web: Research and Applications, volume 4011, series LNAI, pp. 411-426.
Springer, Heidelberg, 2006.
Andreas Hotho, Robert Jäschke, Christoph Schmitz and Gerd Stumme.
[BibTeX]
FooCA: web information retrieval with formal concept analysis.
2006.
Bjoern Koester.
[doi]
[Abstract]
[BibTeX]
This book deals with Formal Concept Analysis (FCA) and its application to Web Information Retrieval. It explains how web search results retrieved by major search engines such as Google or Yahoo can be conceptualized, yielding a human-oriented form of representation. A generalization of web search results is conducted, leading to the FCA-based introduction of FooCA. FooCA is an application in the field of Conceptual Knowledge Processing and supports the idea of a holistic representation of Web Information Retrieval.
A taxonomy of web search.
SIGIR Forum, 36(2):3-10, 2002.
Andrei Broder.
[doi]
[Abstract]
[BibTeX]
Classic IR (information retrieval) is inherently predicated on users searching for information, the so-called "information need". But the need behind a web search is often not informational -- it might be navigational (give me the url of the site I want to reach) or transactional (show me sites where I can perform a certain transaction, e.g. shop, download a file, or find a map). We explore this taxonomy of web searches and discuss how global search engines evolved to deal with web-specific needs.
Optimizing search engines using clickthrough data.
In:
Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 133-142.
ACM, New York, NY, USA, 2002.
Thorsten Joachims.
[doi]
[Abstract]
[BibTeX]
This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples.
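The paper's training signal — relative preferences extracted from clicks rather than absolute expert judgments — can be sketched with the "clicked beats skipped-above" heuristic; the ranking and clicks below are hypothetical:

```python
# If a user clicks a result but skipped results ranked above it, the
# clicked document is inferred to be preferred over each skipped one.
def preference_pairs(ranking, clicked):
    """Return (preferred, over) pairs: each clicked doc beats
    every unclicked doc ranked above it."""
    pairs = []
    for i, doc in enumerate(ranking):
        if doc in clicked:
            for skipped in ranking[:i]:
                if skipped not in clicked:
                    pairs.append((doc, skipped))
    return pairs

ranking = ["d1", "d2", "d3", "d4"]   # result list shown to the user
clicked = {"d3"}                     # the user clicked only rank 3
print(preference_pairs(ranking, clicked))
# [('d3', 'd1'), ('d3', 'd2')]
```

In the paper, pairs of this kind become ordering constraints for the ranking SVM; the sketch covers only the preference-extraction step, not the SVM training itself.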
Indexing and retrieval of scientific literature.
In:
Proceedings of the eighth international conference on Information and knowledge management, pp. 139-146.
ACM, New York, NY, USA, 1999.
Steve Lawrence, Kurt Bollacker and C. Lee Giles.
[doi]
[Abstract]
[BibTeX]
The web has greatly improved access to scientific literature. However, scientific articles on the web are largely disorganized, with research articles being spread across archive sites, institution sites, journal sites, and researcher homepages. No index covers all of the available literature, and the major web search engines typically do not index the content of Postscript/PDF documents at all. This paper discusses the creation of digital libraries of scientific literature on the web, including the efficient location of articles, full-text indexing of the articles, autonomous citation indexing, information extraction, display of query-sensitive summaries and citation context, hubs and authorities computation, similar document detection, user profiling, distributed error correction, graph analysis, and detection of overlapping documents. The software for the system is available at no cost for non-commercial use.
Computers and Social History: Building a Database from Mediaeval Tax Registers for improved Information Retrieval in Göttingen.
In:
G. Lock and J. Moffet (eds.):
CAA 91, Computer Applications and Quantitative Methods in Archaeology (BAR International Series S 577), pp. 29-38.
London, 1992.
Helge Steenweg.
[BibTeX]