When I wrote about my first deep dive into Knowledge Graphs, I mentioned that although the term was around well before 2012, the idea of a Knowledge Graph was blessed as an official Google thing that year when one of their engineering SVPs published the article “Introducing the Knowledge Graph: things, not strings”. This blessing gave some focus to many members of the graph database community because they could say that what they had been doing was similar to, if not the same as, what Google was…
Lately I’ve been thinking about some aspects of RDF technology that I have taken for granted as basic building blocks of dataset design but that Knowledge Graph fans who are new to RDF may not be fully aware of—especially when they compare RDF to alternative ways to build knowledge graphs. A key building block is the ability to link independently created knowledge graphs.
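To make that concrete, here's a minimal sketch in SPARQL. The graph names, predicates, and data below are hypothetical; the one real identifier is the DBpedia IRI for Miles Davis. Because two independently published datasets both use that shared IRI, a single query can join facts across them with no mapping layer in between:

```sparql
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX ex:  <http://example.org/ns/>

# Two hypothetical named graphs, each loaded from a different publisher.
SELECT ?album ?venue WHERE {
  GRAPH <http://example.org/discography> {   # publisher A's graph
    dbr:Miles_Davis ex:recorded ?album .
  }
  GRAPH <http://example.org/concerts> {      # publisher B's graph
    dbr:Miles_Davis ex:playedAt ?venue .
  }
}
```

The join happens simply because both graphs make statements about the same IRI; no schema negotiation or ETL step is needed before you can query across them.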
For several years I thought of “knowledge graphs” as the buzzphrase that had partially replaced “Linked Data”, which was the buzzphrase that had partially replaced “Semantic Web”. In a 2012 blog entry I explained how Hadoop and the new-at-the-time NoSQL databases had convinced me that even if a technology has a funny name, selling it based on the problems it solves makes more sense and ages better than selling a buzzphrase vision and then, if that goes well,…
Something that happens to me now and then: I’ll hear that an organization with a lot of interesting data (science, music, whatever) makes the data available on a SPARQL endpoint. I send my browser to the URL listed as the SPARQL endpoint and I see a web form. I enter a simple query on the web form to retrieve a few random triples, click the form’s button, and the results of my query appear. Then I enter fancier queries to explore the endpoint’s data.
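That first “few random triples” query is usually some variation on the classic pattern below, which asks the endpoint for any ten triples it happens to have; the LIMIT keeps a big endpoint from sending back millions of rows:

```sparql
# Ask the endpoint for an arbitrary handful of triples to see what's there.
SELECT ?s ?p ?o
WHERE {
  ?s ?p ?o .
}
LIMIT 10
```

A common next step is something like `SELECT DISTINCT ?p WHERE { ?s ?p ?o } LIMIT 100`, because the predicates in use tell you which vocabularies the dataset is built on and suggest what those fancier queries should ask about.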