I've been experimenting with SPARQL for some time and was lucky enough to have had some training on it at work, but on several occasions while reading Bob DuCharme's Learning SPARQL I discovered capabilities of this very powerful language that were new to me. The book provides a detailed overview of the language and takes the reader right from their first steps in constructing a query through to using SPARQL as a data source for programs. Both SPARQL 1.0 and 1.1 are covered, with warnings where features only work in 1.1. If you are looking to take your first steps in learning SPARQL, or you can already write queries and would like to extend your skills into topics such as creating, updating and validating data, then you may well find this book very useful.
I am very pleased to announce that for the first time my name will be appearing as a co-author of an academic paper! Consuming Linked Data within a Large Educational Organization was written by Fouad Zablith, Mathieu d'Aquin, Stuart Brown and myself, and is a full research paper accepted for the Second International Workshop on Consuming Linked Data (COLD 2011), which will be held in Bonn, Germany on October 23rd. The paper discusses the findings of the Lucero project, which investigated the uses of Linked Data in educational institutions. My contribution was mainly the use case seen in an earlier post on this site, An HTML5 Leanback TV webapp that brings SPARQL to your living room, but the paper is much wider in scope than my blog post, discussing not only use cases but also the role of Linked Data principles in avoiding the data "silos" often found in large organisations. The paper can be obtained free of charge.
The Scripting Layer for Android (SL4A) and the new SL4A Tablet Remix have a lot of powerful features and, interestingly, can be used to consume data from a variety of sources both online and offline. The ability to work with some data sources, such as simple relational databases, is built in, but thanks to the ability to add extra code libraries to this environment we also get the opportunity to work with non-relational databases and even Linked Data. In this article I will quickly show you how to work with three different types of data source using Python in SL4A: a relational database in the form of a SQLite file, a non-relational database in the form of CouchDB, and Linked Data generated from Wikipedia, which we will interrogate using the SPARQL language.
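The SQLite case needs nothing beyond Python's standard library, so it can be sketched without any SL4A-specific APIs. This is only a minimal illustration: the in-memory database, table name and sample rows below are all made up for the example (on a device you would pass the path to a real .db file instead of ":memory:").

```python
import sqlite3

# An in-memory SQLite database standing in for a .db file on the device
# (in SL4A you would pass a real file path instead of ":memory:").
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A made-up podcasts table, just so there is something to query.
cur.execute("CREATE TABLE podcasts (title TEXT, subject TEXT)")
cur.executemany(
    "INSERT INTO podcasts VALUES (?, ?)",
    [("Exploring the Moon", "Astronomy"), ("Roman Britain", "History")],
)
conn.commit()

# A parameterised SQL query, exactly as it would run in SL4A's Python.
cur.execute("SELECT title FROM podcasts WHERE subject = ?", ("History",))
rows = cur.fetchall()
print(rows)  # [('Roman Britain',)]
conn.close()
```

The same pattern (connect, execute, fetch) carries over unchanged once the connection points at a file on the SD card.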
When you are sat on the sofa at the end of the day, relaxing, watching TV and maybe eating, you are not in the mood to keep making decisions about what to watch, and you might not think you are in a situation where Linked Data and SPARQL queries could be useful. Yet the flexibility of the data that can be obtained from sources supporting these technologies makes them ideal candidates to power a Leanback TV experience. With the right query it is possible to curate a collection of video podcasts that play one after another to keep the viewer happy. They still have control, and can still jump to any podcast in the collection, but they are not faced with a decision every ten minutes about what to watch, leaving them free to relax and discover new content.
In a recent poll on this site I asked "Do you have, or are you planning to learn, any skills related to Linked Data?". Interestingly, 60% of respondents (there were 101 votes) said yes, so I thought I should finally get round to writing up a demonstration app that uses Linked Data to provide the information and jQuery Mobile to provide the looks (and more) for a mobile explorer of podcasts by subject. The site is written in PHP and was developed quite quickly. Again I will be using the Open University's Linked Data store, but the site could easily be adapted to use other stores, maybe even more than one. Thanks to jQuery Mobile it would even be possible to embed the site in a thin wrapper app on the phone to make it look a bit like a native app. Of course the site is a bit rough and ready and I am sure there are thousands of ways to improve it, so experiment and let me know how you get on in the comments.
A little while ago I started reading up on NoTube, an EU-funded project that aims to explore how technology such as Linked Data can be used with televisions to (amongst other aims) produce personalised content. Inspired by this idea, I started thinking about a small example that would build upon my earlier blog post How to use Linked Data on the Samsung Internet@TV platform to produce a personalised view of Open University podcasts. For the example to be useful it would need to use data for the personalisation that was easy for the user to supply using just a remote control. I've got as far as producing a simple prototype that hopefully shows some of the potential of this technology.
A real advantage of Internet-powered TV is the opportunity for personalisation and customisation to make viewing a more compelling and meaningful experience, but to support this it helps to have a flexible way to query the data about what is on offer. Linked Data could be that flexible solution, as it makes it possible to send a quite complex query, possibly generated on the fly, to a data store. With this in mind I have been experimenting with consuming Linked Data on a cheap and cheerful Blu-ray player that supports the Samsung Internet@TV platform. With a web developer's skill set it is possible to build a web application, running on the device, that can pass a query directly to a SPARQL endpoint and parse the results.
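The "parse the results" step is the same whatever device the app runs on: SPARQL endpoints commonly return the standard SPARQL Query Results JSON format. As a minimal sketch (in Python rather than the TV platform's JavaScript, and using a hand-made sample in place of a live endpoint response), flattening that format into plain dictionaries looks like this:

```python
import json

# A hand-made sample in the SPARQL Query Results JSON format, standing
# in for the body a real endpoint would return. The variable names and
# values here are invented for illustration.
sample = json.loads("""
{
  "head": {"vars": ["title", "url"]},
  "results": {"bindings": [
    {"title": {"type": "literal", "value": "Exploring the Moon"},
     "url": {"type": "uri", "value": "http://example.org/pod/1"}}
  ]}
}
""")

def flatten(results):
    """Turn SPARQL JSON bindings into a list of plain dicts."""
    out = []
    for binding in results["results"]["bindings"]:
        out.append({var: cell["value"] for var, cell in binding.items()})
    return out

rows = flatten(sample)
print(rows[0]["title"])  # Exploring the Moon
```

Each binding carries its type ("uri", "literal", and so on) alongside the value, which the sketch discards; a real app might want to keep it.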
Many organisations now offer rich Linked Data stores that you can interrogate with the SPARQL language. This data might be interesting for the mobile app developer to work with, so it would be great to be able to experiment with it in Google App Inventor for Android applications. At the moment you cannot do this directly, as App Inventor offers only quite limited functionality for interacting with the web. However, with the help of a server-side "bridging script" we can close that divide, sending a SPARQL query from inside the application and dealing with the results we get back.
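The heart of any such bridging script is turning the query the app sends into a properly encoded request to the endpoint. A rough Python sketch of that step, where the endpoint URL and the parameter names are assumptions (real endpoints vary in what they expect, and the fetch itself is left commented out):

```python
from urllib.parse import urlencode
# from urllib.request import urlopen  # the bridge would use this to fetch

# Hypothetical SPARQL endpoint; substitute the store you are targeting.
ENDPOINT = "http://data.example.org/sparql"

def endpoint_url(query, fmt="json"):
    """Build the GET URL the bridging script forwards to the endpoint.

    The app passes its query to the bridge; the bridge URL-encodes it
    and adds an output-format parameter before fetching the result on
    the app's behalf (parameter names differ between endpoints).
    """
    return ENDPOINT + "?" + urlencode({"query": query, "output": fmt})

url = endpoint_url("SELECT * WHERE { ?s ?p ?o } LIMIT 5")
print(url)
```

Wrapping this in a small CGI or web-framework handler gives the app a single plain URL to call, which fits nicely within App Inventor's limited web functionality.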
Extracting data from the web to use in our computer programs has always been a challenge. Many developers will be familiar with techniques such as web scraping, trying to parse a human-readable web page to extract data, and might dream of more reliable ways to query different sources in a standardised way. Linked Data is a proposed answer to this problem that seems to be gaining momentum, with data being exposed in this format by organisations such as the British Government and my own employer, The Open University. So how do we query these resources and get the data into our PHP scripts?