All Events Related to: Timothy Cole


E-Research Roundtable: Workset Creation for Scholarly Analysis (WCSA) Project Update
2015-03-11
The HathiTrust Research Center (HTRC) enables computational access to the HathiTrust corpus, a digital library of millions of books and other materials digitized by the Google Books project and other mass-digitization efforts. “Workset Creation for Scholarly Analysis: Prototyping Project” (WCSA) is a joint effort between CIRSS and HTRC that seeks to address three sets of tightly intertwined research questions regarding 1) enriching the metadata in the HathiTrust corpus, 2) augmenting string-based metadata with URIs to leverage discovery and sharing through external services, and 3) formalizing the notion of collections and worksets in the context of the HathiTrust Research Center to help scholars select and gather appropriate materials for analysis from within and beyond the 14.2 million volumes of the HathiTrust corpus. This ERRT will discuss lessons learned from four external projects that were awarded 9-month prototyping grants as part of the WCSA project as well as how experiences with the WCSA prototyping projects, user studies, and metadata analysis have informed ongoing development of a formal workset model for HTRC.
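As a rough illustration of the second research question, here is a minimal Python sketch (using rdflib; not taken from the WCSA codebase) of what augmenting a string-based creator field with a URI can look like. The volume identifier, the name string, and the chosen VIAF URI are all placeholders.

    # Illustrative sketch only: augmenting string-based creator metadata with a URI.
    # The volume URI, name string, and VIAF URI below are placeholders; WCSA's actual
    # enrichment pipeline is not reproduced here.
    from rdflib import Graph, Namespace, URIRef, Literal

    DCTERMS = Namespace("http://purl.org/dc/terms/")

    g = Graph()
    volume = URIRef("http://example.org/htrc/volume/0001")  # placeholder volume identifier

    # String-based metadata as it might appear in the bibliographic record.
    g.add((volume, DCTERMS.creator, Literal("Dickens, Charles, 1812-1870")))

    # Augmentation: attach a URI for the same agent (a VIAF URI chosen by hand here),
    # so external services can link on the identifier rather than on the string.
    g.add((volume, DCTERMS.creator, URIRef("http://viaf.org/viaf/88666393")))

    print(g.serialize(format="turtle"))

Serialized as Turtle, the record then carries the original string and the linkable identifier side by side.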




E-Research Roundtable: W3C activity update: Web Annotation Working Group
2014-11-05
In August of this year the World Wide Web Consortium (W3C) chartered the Web Annotation Working Group (http://www.w3.org/annotation/). Tim Cole and Jacob Jett are members of the Working Group, representing UIUC. Building on the prior work of the Open Annotation Community Group, the new Web Annotation Working Group has been tasked with developing W3C Recommendations describing: an Abstract Annotation Data Model, the Data Model Vocabulary, Serializations of the Data Model, an HTTP API for annotation services, a client-side API for implementers, and (in collaboration with the Web Apps Working Group) a technical approach to more robust target anchoring. The Working Group held its first face-to-face working meeting on October 28th in conjunction with the W3C Technical Plenary and Advisory Committee meeting in Santa Clara, CA. This ERRT session will open with a presentation on the progress towards the first public working draft of the data model, vocabulary, and serialization specification, which is now expected to be released before the end of this calendar year. I'll review use cases informing the work of the WG, talk about resolved and pending issues (https://www.w3.org/annotation/track/issues, https://github.com/w3c/web-annotation/issues), and discuss W3C process. Ample time will be provided for questions and follow-on discussion. This is only one of several W3C Working Groups in which UIUC is now involved. As illustrated by our work as members of the Web Annotation Working Group, the University's membership in the W3C (which we joined in April of this year) provides opportunities for greater input into and involvement in the W3C standards-making process.
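For a concrete sense of what the data model and its serializations describe, here is a minimal Python sketch (with rdflib) of an annotation built from the core Open Annotation vocabulary. The annotation, body, and target URIs are placeholders, and nothing here is taken from the Working Group's draft documents.

    # Minimal sketch: an Open Annotation / Web Annotation style description in rdflib.
    # The annotation, body, and target URIs are placeholders.
    from rdflib import Graph, Namespace, URIRef, RDF

    OA = Namespace("http://www.w3.org/ns/oa#")

    g = Graph()
    g.bind("oa", OA)

    anno = URIRef("http://example.org/anno/1")        # placeholder annotation URI
    body = URIRef("http://example.org/comments/42")   # placeholder body resource
    target = URIRef("http://example.org/page1.html")  # placeholder annotated resource

    g.add((anno, RDF.type, OA.Annotation))
    g.add((anno, OA.hasBody, body))
    g.add((anno, OA.hasTarget, target))

    # Turtle is just one way to write these triples out; the Working Group's
    # deliverables also cover serializations of the model.
    print(g.serialize(format="turtle"))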
 
Bio: Tim Cole is the Mathematics and Digital Content Access Librarian and a CIRSS-affiliated faculty member. His research and publications focus on metadata, digital library interoperability, and linked open data. He was PI of the Open Annotation Collaboration project (2009-2013) and PI/co-PI of the IMLS Digital Collections and Content project (2002-2012). He is currently co-PI for the HathiTrust Research Center Workset Creation for Scholarly Analysis project and serves as the W3C Advisory Committee Representative for UIUC.




E-Research Roundtable: Validation of Open Annotation RDF
2014-02-12
The W3C Open Annotation (OA) Community Group data model and ontology build on RDF and other W3C Semantic Web standards. As new OA-based applications come online, and in anticipation of the OA specification moving from a Community Draft onto the formal W3C Recommendation track, being able to validate conformance to the OA data model and ontology is becoming increasingly important to its successful uptake. Annotation tools and services need a way to validate that the annotation descriptions being exchanged meet the OA data model requirements. The LoreStore OA validation service, developed by the University of Queensland (an Open Annotation Collaboration partner), has become an infrastructure cornerstone within the OA community. LoreStore makes use of SPARQL to validate conformance to the OA specs, linking any warnings or errors found directly to the relevant section of the spec. The LoreStore OA Validator was included in last fall's W3C RDF Validation Workshop as an exemplar of the current state of the art for validation of conformance to RDF-based data models. Join us for a presentation and demonstration of the LoreStore OA validation service, followed by a discussion of the W3C's current thinking regarding RDF validation more broadly.
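To give a flavor of SPARQL-based validation, the sketch below (Python with rdflib; an illustration only, not one of LoreStore's actual queries) flags annotations that lack the oa:hasTarget the OA model requires.

    # Illustrative sketch only; LoreStore's own validation queries are not reproduced here.
    # The idea: express an OA data-model constraint as a SPARQL query and report violations.
    from rdflib import Graph

    data = """
    @prefix oa: <http://www.w3.org/ns/oa#> .
    <http://example.org/anno/1> a oa:Annotation ;
        oa:hasBody <http://example.org/body/1> .
    """

    g = Graph()
    g.parse(data=data, format="turtle")

    # Constraint from the OA model: every oa:Annotation must have at least one oa:hasTarget.
    check = """
    PREFIX oa: <http://www.w3.org/ns/oa#>
    SELECT ?anno WHERE {
        ?anno a oa:Annotation .
        FILTER NOT EXISTS { ?anno oa:hasTarget ?t }
    }
    """

    for row in g.query(check):
        print(f"ERROR: {row.anno} has no oa:hasTarget")

A full validator such as LoreStore runs many checks of this kind and maps each failure back to the section of the spec that states the requirement.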

Links:

Demo instance of LoreStore OA Validator:
http://austese.net/lorestore/validate.html

W3C RDF Validation Workshop:
http://www.w3.org/2012/12/rdf-val/

IBM's OSLC Resource Shape (a potential starting point for a W3C RDF Validation WG): http://www.ibm.com/developerworks/rational/library/linked-data-oslc-resource-shapes/




E-Research Roundtable: The Prototype Open Emblem Book Portal: Leveraging the Emblem Community's Spine Metadata Schema
2011-10-05
In 2003 Stephen Rawles (Glasgow University Centre for Emblem Studies) outlined an approach for creating metadata records for digitized emblem books in a Web-published paper entitled "A Spine of Information Headings for Emblem-Related Electronic Resources" [1]. Compared to many other classes of retrospectively digitized texts, digitized emblem books pose added challenges for description. A genre of European literature popular between 1530 and 1750, emblems unite three elements: a motto, a picture, and poetry. These three components create puzzles that carry metaphors and messages for readers. Individual books may contain only a handful of emblems or more than 1,000. To support scholarship, emblems (as well as emblem books) need to be discoverable, retrievable, and citable individually, further complicating issues of descriptive granularity. Rawles’s paper became the foundation for the Spine metadata XML schema [2] created by Thomas Stäcker of the Herzog August Bibliothek (Wolfenbüttel, Germany), with subsequent modifications and additions by Tim Cole and Myung-Ja Han. As part of an NEH/DFG-funded grant project [3] (Mara Wade, U.S. PI), Cole, Han, and Jordan Vannoy have created a functioning prototype of a new Open Emblem Book Portal [4]. The new design leverages unique features of the Spine schema and is intended to be responsive to the evolving needs of the Emblem Studies community. Scholars expect more of digital libraries today than in years past. For digitized special collections materials such as our digitized emblem books collection, this has required the UIUC Library to reexamine our digital content processing workflows and rethink how we provide access to such content. For this project, our workflows have become more in keeping with standard Semantic Web and Linked Data principles and now make use of globally scoped, persistent, and precise identifiers for our digitized emblem resources. This roundtable will start with a reprise of a 20-minute presentation given by Cole and Han at the recent triennial meeting of the Society for Emblem Studies in Glasgow. Cole, Han, and Vannoy will then lead an in-depth discussion of the Spine schema and the design choices implemented in the Open Emblem Book Portal to date.
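To make the granularity point concrete: a portal that indexes emblems individually has to walk emblem-level descriptions out of each book-level Spine record. The Python sketch below illustrates that idea only; the namespace URI, element names, and attribute names are assumptions for illustration, not taken from the published Spine schema (see the emblem-1-2.xsd link in the abstract notes for the authoritative definitions).

    # Hedged sketch only: the namespace URI and element/attribute names below are
    # assumptions for illustration, not taken from the published Spine schema.
    import xml.etree.ElementTree as ET

    NS = {"emb": "http://diglib.hab.de/rules/schema/emblem"}  # assumed namespace

    tree = ET.parse("spine_record.xml")  # a local Spine metadata record (placeholder path)
    root = tree.getroot()

    # Walk the emblem-level descriptions so each emblem can be indexed and cited
    # individually, independent of the book-level record.
    for emblem in root.findall(".//emb:emblem", NS):
        emblem_id = emblem.get("citeNo") or emblem.get("id")  # hypothetical attributes
        motto = emblem.findtext("emb:transcription", default="", namespaces=NS)  # hypothetical element
        print(emblem_id, motto)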

Abstract Notes:

[1] http://www.ces.arts.gla.ac.uk/html/spine.htm
[2] http://diglib.hab.de/rules/schema/emblem/emblem-1-2.xsd
[3] http://emblematica.grainger.illinois.edu
[4] http://emblematica.grainger.illinois.edu/OEBP/UI/SearchForm

Resources:

Digital Collections and Management of Knowledge: Renaissance Emblem Literature as a Case Study for the Digitization of Rare Texts and Images. 2004. Mara R. Wade (ed.), DigiCULT. Available online: http://www.digicult.info/downloads/dc_emblemsbook_lowres.pdf

Iconclass, a multilingual classification system for cultural content: http://www.iconclass.org/ and http://www.iconclass.org/rkd/9/

Sample metadata record in Spine and METS:

Spine metadata record

METS metadata record




E-Research Roundtable: Using Pliny to Annotate Digital Resources
2009-07-01
For our first roundtable related to the new Mellon-funded Open Annotation Collaboration grant project, we will examine John Bradley's Pliny annotation tool. In particular, we will discuss how and to what extent Pliny can be used to perform some of the scholarly functions described in Renear, Allen H.; DeRose, Steve J.; Mylonas, Elli; van Dam, Andries (1999), "An Outline for a Functional Taxonomy of Annotation." Yan Wang and Tim Cole will lead the discussion, which will include demonstrations of Pliny.

Resources: Reading 1; Reading 2; Reading 3; Reading 4; Reading 5