TY - JOUR
AU - Börner, Katy
AU - Klavans, Richard
AU - Patek, Michael
AU - Zoss, Angela M.
AU - Biberstine, Joseph R.
AU - Light, Robert P.
AU - Larivière, Vincent
AU - Boyack, Kevin W.
T1 - Design and Update of a Classification System: The UCSD Map of Science
JO - PLoS ONE
PY - 2012/07
VL - 7
IS - 7
UR - http://dx.doi.org/10.1371%2Fjournal.pone.0039464
M3 - 10.1371/journal.pone.0039464
KW - classification
KW - gaw
KW - map
KW - science
KW - scientometrics
KW - sota
KW - tool
AB - Global maps of science can be used as a reference system to chart career trajectories, the location of emerging research frontiers, or the expertise profiles of institutes or nations. This paper details the data preparation, analysis, and layout performed when designing and subsequently updating the UCSD map of science and classification system. The original classification and map use 7.2 million papers and their references from Elsevier’s Scopus (about 15,000 source titles, 2001–2005) and Thomson Reuters’ Web of Science (WoS) Science, Social Science, and Arts & Humanities Citation Indexes (about 9,000 source titles, 2001–2004), about 16,000 unique source titles in all. The updated map and classification add six years (2005–2010) of WoS data and three years (2006–2008) of Scopus data to the existing category structure, increasing the number of source titles to about 25,000. To our knowledge, this is the first time that a widely used map of science has been updated. A comparison of the original 5-year and the new 10-year maps and classification systems shows (i) an increase of 9,409 in the total number of journals that can be mapped (social sciences grew by 80%, humanities by 119%, medical sciences by 32%, and natural sciences by 74%), (ii) a simplification of the map by assigning all but five highly interdisciplinary journals to exactly one discipline, (iii) a more even distribution of journals over the 554 subdisciplines and 13 disciplines, as measured by the coefficient of variation, and (iv) a better reflection of journal clusters when compared with paper-level citation data. When evaluated against a listing of desirable features for maps of science, the updated map is shown to have higher mapping accuracy, easier understandability (as fewer journals are multiply classified), and higher usability for the generation of data overlays, among other improvements.
ER -
TY - CHAP
AU - Rula, Anisa
AU - Palmonari, Matteo
AU - Harth, Andreas
AU - Stadtmüller, Steffen
AU - Maurino, Andrea
A2 - Cudré-Mauroux, Philippe
A2 - Heflin, Jeff
A2 - Sirin, Evren
A2 - Tudorache, Tania
A2 - Euzenat, Jérôme
A2 - Hauswirth, Manfred
A2 - Parreira, Josiane Xavier
A2 - Hendler, Jim
A2 - Schreiber, Guus
A2 - Bernstein, Abraham
A2 - Blomqvist, Eva
T1 - On the Diversity and Availability of Temporal Information in Linked Open Data
T2 - The Semantic Web – ISWC 2012
PB - Springer
CY - Berlin/Heidelberg
PY - 2012/
VL - 7649
SP - 492
EP - 507
UR - http://dx.doi.org/10.1007/978-3-642-35176-1_31
M3 - 10.1007/978-3-642-35176-1_31
KW - data
KW - diversity
KW - gaw
KW - linked
KW - lod
KW - open
KW - temporal
KW - time
SN - 978-3-642-35175-4
AB - An increasing amount of data is published and consumed on the Web according to the Linked Data paradigm. In consideration of both publishers and consumers, the temporal dimension of data is important.
In this paper we investigate the characterisation and availability of temporal information in Linked Data at large scale. Based on an abstract definition of temporal information, we conduct experiments to evaluate the availability of such information using the data from the 2011 Billion Triple Challenge (BTC) dataset. Focusing in particular on the representation of temporal meta-information, i.e., temporal information associated with RDF statements and graphs, we investigate the approaches proposed in the literature, performing both a quantitative and a qualitative analysis and proposing guidelines for data consumers and publishers. Our experiments show that the amount of temporal information available in the LOD cloud is still very small; several different models have been used on different datasets, with a prevalence of approaches based on the annotation of RDF documents.
ER -
TY - CONF
AU - Krafft, Dean B.
AU - Cappadona, Nicholas A.
AU - Caruso, Brian
AU - Corson-Rikert, Jon
AU - Devare, Medha
AU - Lowe, Brian J.
AU - VIVO Collaboration
T1 - VIVO: Enabling National Networking of Scientists
T2 - WebSci10: Extending the Frontiers of Society On-Line
PY - 2010/
UR - http://journal.webscience.org/316/
KW - gaw
KW - network
KW - research
KW - science
KW - university
KW - vivo
AB - The VIVO project is creating an open, Semantic Web-based network of institutional ontology-driven databases to enable national discovery, networking, and collaboration via information sharing about researchers and their activities. The project has been funded by NIH to implement VIVO at the University of Florida, Cornell University, and Indiana University Bloomington together with four other partner institutions. Working with the Semantic Web/Linked Open Data community, the project will pilot the development of common ontologies, integration with institutional information sources and authentication, and national discovery and exploration of networks of researchers. Building on technology developed over the last five years at Cornell University, VIVO supports the flexible description and interrelation of people, organizations, activities, projects, publications, affiliations, and other entities and properties. VIVO itself is an open-source Java application built on W3C Semantic Web standards, including RDF, OWL, and SPARQL. To create researcher profiles, VIVO draws on authoritative information from institutional databases, external data sources such as PubMed, and information provided directly by researchers themselves. While the NIH-funded project focuses on biomedical research, the current Cornell implementation of VIVO supports the full range of disciplines across the university, from music to mechanical engineering to management. There are many ways a person's expertise may be discovered: through grants, presentations, courses, and news releases, as well as through research statements or publications listed on their profile, resulting in the creation of implicit groups or networks of people based on a number of pre-identified, shared characteristics. In addition to formal authoritative information and relationships, VIVO can also support the creation of personal work groups and associated properties to represent the informal relationships evolving around collaboration.
ER -
TY - CONF
AU - Van de Sompel, Herbert
AU - Sanderson, Robert
AU - Nelson, Michael L.
AU - Balakireva, Lyudmila L.
AU - Shankar, Harihar
AU - Ainsworth, Scott
T1 - An HTTP-Based Versioning Mechanism for Linked Data
T2 - Proceedings of Linked Data on the Web (LDOW2010)
PB - arXiv
PY - 2010/
IS - 1003.3661
UR - http://arxiv.org/abs/1003.3661
KW - data
KW - gaw
KW - http
KW - linked
KW - lod
KW - open
KW - temporal
KW - time
KW - version
AB - Dereferencing a URI returns a representation of the current state of the resource identified by that URI. But on the Web, representations of prior states of a resource are also available, for example, as resource versions in Content Management Systems or archival resources in Web Archives such as the Internet Archive. This paper introduces a resource versioning mechanism that is fully based on HTTP and uses datetime as a global version indicator. The approach allows "follow your nose" style navigation both from the current time-generic resource to associated time-specific version resources as well as among version resources. The proposed versioning mechanism is congruent with the Architecture of the World Wide Web, and is based on the Memento framework that extends HTTP with transparent content negotiation in the datetime dimension. The paper shows how the versioning approach applies to Linked Data, and by means of a demonstrator built for DBpedia, it also illustrates how it can be used to conduct a time-series analysis across versions of Linked Data descriptions.
ER -
TY - JOUR
AU - Aguillo, Isidro
T1 - Measuring the institution's footprint in the web
JO - Library Hi Tech
PY - 2009/
VL - 27
IS - 4
SP - 540
EP - 556
UR - http://www.emeraldinsight.com/journals.htm?articleid=1812469&show=abstract
M3 - 10.1108/073788309
KW - gaw
KW - institution
KW - science
KW - scientometrics
KW - university
KW - web
AB - Purpose – The purpose of this paper is to provide an alternative, although complementary, system for the evaluation of the scholarly activities of academic organizations, scholars, and researchers, based on web indicators, in order to speed up the change of paradigm in scholarly communication towards a new, fully electronic twenty-first-century model. Design/methodology/approach – In order to achieve these goals, a new set of web indicators has been introduced, obtained mainly from data gathered from search engines, the new mediators of scholarly communication. Findings – It was found that three large groups of indicators are feasible to obtain and relevant for evaluation purposes: activity (web publication), impact (visibility), and usage (visits and visitors). As a proof of concept, a Ranking Web of Universities has been built with Webometrics data. There are two relevant findings: ranking results are similar to those obtained by other bibliometric-based rankings, and there is a concerning digital divide between North American and European universities, with the latter appearing in lower positions than their USA and Canada counterparts. Research limitations/implications – Cybermetrics is still an emerging discipline, so new developments should be expected when more empirical data become available. Practical implications – The proposed approach suggests the publication of truly electronic journals, rather than digital versions of printed articles. Additional materials, such as raw data and multimedia files, should be included along with other relevant information arising from more informal activities. These repositories should be Open Access, available as part of the public web and indexed by the main commercial search engines. It is expected that these actions could generate larger web-based audiences, reduce the costs of publication and access, and allow third parties to take advantage of the knowledge generated, without sacrificing peer review, which should be extended (pre- and post-) and expanded (closed and open). Originality/value – A full taxonomy of web indicators is introduced for describing and evaluating research activities, academic organizations, and individual scholars and scientists. Previous attempts at building such a classification were incomplete and did not take feasibility and efficiency into account.
ER -
TY - JOUR
AU - La Rowe, Gavin
AU - Ambre, Sumeet
AU - Burgoon, John
AU - Ke, Weimao
AU - Börner, Katy
T1 - The Scholarly Database and its utility for scientometrics research
JO - Scientometrics
PY - 2009/
VL - 79
IS - 2
SP - 219
EP - 234
UR - http://dx.doi.org/10.1007/s11192-009-0414-2
M3 - 10.1007/s11192-009-0414-2
KW - analysis
KW - database
KW - dataset
KW - gaw
KW - science
KW - scientometrics
KW - sdb
KW - sota
AB - The Scholarly Database aims to serve researchers and practitioners interested in the analysis, modelling, and visualization of large-scale data sets. A specific focus of this database is to support macro-evolutionary studies of science and to communicate findings via knowledge-domain visualizations. Currently, the database provides access to about 18 million publications, patents, and grants. About 90% of the publications are available in full text. Except for some datasets with restricted access conditions, the data can be retrieved in raw or pre-processed formats using either a web-based or a relational database client. This paper motivates the need for the database from the perspective of bibliometric/scientometric research.
It explains the database design and setup, and reports the temporal, geographical, and topical coverage of the data sets currently served via the database. Planned work and the potential for this database to become a global testbed for information science research are discussed at the end of the paper.
ER -
TY - CHAP
AU - Pieper, Dirk
AU - Wolf, Sebastian
A2 - Lewandowski, Dirk
T1 - Wissenschaftliche Dokumente in Suchmaschinen
T2 - Handbuch Internet-Suchmaschinen: Nutzerorientierung in Wissenschaft und Praxis
PB - Akademische Verlagsgesellschaft AKA
PY - 2009/
SP - 356
EP - 374
UR - http://eprints.rclis.org/12746/
KW - engine
KW - gaw
KW - publication
KW - research
KW - science
KW - search
AB - This chapter examines the extent to which documents from the document servers of academic institutions are indexed by the general search engines Google and Yahoo, and to what extent academic search engines are better suited to finding such documents. To this end, the five search engines BASE, Google Scholar, OAIster, Scientific Commons, and Scirus are briefly described and compared with one another. The main focus is on their differing contents, search functions, and output options; a retrieval test specifically examines the search engines' performance in finding documents whose full texts are directly accessible without restrictions in the Open Access sense.
ER -