In part one of this two-part series, we saw how the open-source Snowman static web site generator can build websites with data from a SPARQL endpoint. I showed how I created a sample website project with its snowman new command and then reconfigured the project to retrieve a list of artists from the endpoint of the Rhizome ArtBase, a repository of data about digital artworks going back to 1999. Here in part two I will build on that to add lists of artists’ works with links to Rhizome pages about…
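To give a flavor of the SPARQL side, the artist-list query looks something like the sketch below. ArtBase is a Wikibase, but I'm not reproducing the project's actual query here, so the wdt: namespace and the property number are placeholders rather than the identifiers the real endpoint uses:

```sparql
# A sketch of an artist-list query for a Wikibase-style endpoint such as
# ArtBase. The wdt: namespace and P1 property below are placeholders, not
# the identifiers that the real ArtBase endpoint uses.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt:  <https://example.org/prop/direct/>

SELECT DISTINCT ?artist ?artistLabel WHERE {
  ?artwork wdt:P1 ?artist .            # placeholder "artist/creator" property
  ?artist rdfs:label ?artistLabel .
}
ORDER BY ?artistLabel
```

Roughly speaking, in a Snowman project a query like this lives in its own file, and a page template pulls in its results to render the list.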
Snowman is an open-source project that generates static web sites from data served up by SPARQL endpoints. The history of the web is full of sites generated from relational database back ends, so it’s nice to see this significant step toward doing it with RDF data.
I recently worked on a project where we had a huge amount of RDF and no clue what was in there apart from what we saw by looking at random triples. I developed a few SPARQL queries to give us a better idea of the dataset’s content and structure, and these queries are generic enough that I thought they could be useful to other people.
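As a typical instance of that genre of query (shown here as a sketch, not necessarily verbatim from that project), this lists every class used in a dataset along with how many instances each one has, which is a quick way to see the overall shape of unfamiliar RDF:

```sparql
# What classes does the dataset use, and how heavily? Ordering classes by
# instance count surfaces the dataset's main entity types first.
SELECT ?class (COUNT(?instance) AS ?instanceCount)
WHERE { ?instance a ?class }
GROUP BY ?class
ORDER BY DESC(?instanceCount)
```

Swapping the triple pattern for `?s ?property ?o` and grouping by `?property` gives the equivalent census of the predicates in use.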
In my last posting I described Carnegie Mellon University’s Index of Digital Humanities Conferences project, which makes over 60 years of Digital Humanities research abstracts and relevant metadata available both on the project’s website and as a frequently updated file of zipped CSV. I also described how I developed scripts to convert all that CSV to some pretty nice RDF and made the scripts available on GitHub. I finished with a promise to follow up by showing some of the…
I think that RDF has been very helpful in the field of Digital Humanities for two reasons: first, because so much of that work involves gaining insight from adding new data sources to a given collection, and second, because a large part of this data is metadata about manuscripts and other artifacts. RDF’s flexibility supports both of these very well, and several standard schemas and ontologies have matured in the Digital Humanities community to help coordinate the different data sets.
Much of the original point of the web was not just linking from one page to another but also saving and managing links, ideally with some metadata. Because of this, all browsers give you some way to save a link to a web page as a bookmark, and they typically let you sort these into a hierarchical arrangement of folders.
When I saw “Add support for scripting languages other than JavaScript” in the Jena 4.0.0 release notes, my first reaction was “What? I can run the arq command-line SPARQL processor and call my own functions that I wrote in JavaScript?”
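As a minimal sketch of what that looks like, assuming the jsFunction namespace and the arq:js-library context setting described in Jena’s documentation (the function name and file names here are my own invention for illustration), you write an ordinary JavaScript function:

```javascript
// camelcase.js: a plain JavaScript function to call from a SPARQL query.
// (Hypothetical example; the function and file names are made up.)
function camelcase(str) {
  return str.split(" ").map(function (word) {
    return word.charAt(0).toUpperCase() + word.slice(1);
  }).join("");
}
```

Then a query references it through Jena’s jsFunction namespace:

```sparql
# query.rq: bind a new value by calling the JavaScript function above.
PREFIX js:   <http://jena.apache.org/ARQ/jsFunction#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?label ?camelcased WHERE {
  ?s rdfs:label ?label .
  BIND(js:camelcase(STR(?label)) AS ?camelcased)
}
```

Pointing arq at the script ties the two together: arq --data data.ttl --query query.rq --set arq:js-library=camelcase.js.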