JQbus: social graph query with XMPP/SPARQL

Righto, it’s about time I wrote this one up. One of my last deeds at W3C before leaving at the end of 2005 was to begin the specification of an XMPP binding of the SPARQL querying protocol. For the acronym-averse, a quick recap. XMPP is the name the IETF gives to the Jabber messaging technology. And SPARQL is W3C’s RDF-based approach to querying mixed-up Web data. SPARQL defines a textual query language, an XML result-set format, and a JSON version for good measure. There is also a protocol for interacting with SPARQL databases; this defines an abstract interface and a binding to HTTP. There is as yet no official binding to XMPP/Jabber, and existing explorations are flawed. But, I’ll argue here, the work is well worth completing.

jqbus diagram

So what do we have so far? Back in 2005, I was working in Java, Chris Schmidt in Python, and Steve Harris in Perl. Chris has a nice writeup of one of the original variants I’d proposed, which came out of my discussions with Peter Saint-Andre. Chris also beat me in the race to have a working implementation, though I’ll attribute this to Python’s advantages over Java ;)

I won’t get bogged down in the protocol details here, except to note that Peter advised us to use IQ stanzas. The existing work explores a few slight variants on the idea of sending a SPARQL query in one IQ packet and returning all the results within another, and this isn’t quite deployable as-is. When the result set is too big, we can run into practical (rather than spec-mandated) limits at the server-to-server layers. For example, Peter mentioned that jabber.org had a 65k packet limit. At SGFoo last week, someone suggested sending the results as an attachment instead; apparently this is one of the uncountably many extension specs produced by the energetic Jabber community. The 2005 work was also somewhat partial, and didn’t work out the detail of having a full binding (eg. dealing with default graphs, named graphs etc).
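The IQ-based exchange can be sketched in a few lines. Note that the `<query/>` element name and its namespace URI below are invented for illustration, since no official binding was ever agreed:

```python
# Sketch: wrap a SPARQL query in an XMPP IQ "get" stanza.
# The <query/> child element and its namespace URI are illustrative
# assumptions, not part of any agreed XMPP extension.
import xml.etree.ElementTree as ET

def sparql_iq(query, to_jid, stanza_id):
    iq = ET.Element("iq", {"type": "get", "to": to_jid, "id": stanza_id})
    q = ET.SubElement(iq, "query", {"xmlns": "http://example.org/sparql-xmpp"})
    q.text = query
    return ET.tostring(iq, encoding="unicode")

stanza = sparql_iq("SELECT ?s WHERE { ?s ?p ?o } LIMIT 5",
                   "danbri@example.org/desktop", "sparql-1")
# In a real deployment the response stanza would need to stay under
# practical server limits (eg. jabber.org's 65k packets).
print(stanza)
```

A response would presumably carry the SPARQL XML result set back in a matching IQ `result` stanza, which is exactly where the packet-size problem above bites.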

That said, I think we’re onto something good. I’ll talk through the Java stuff I worked on, since I know it best. The code uses Ignite Realtime’s Smack API. I have published rough Java code that can communicate with instances of itself across Jabber. This was last updated July 2007, when I fixed it up to use more recent versions of Smack and Jena. I forget if the code to parse out query results from the responses was completed, but it does at least send SPARQL XML results back through the XMPP network.

sparql jabber interaction

So why is this interesting?

  • SPARQLing over XMPP can cut through firewalls/NAT and talk to data on the desktop
  • SPARQLing over XMPP happens in a social environment; queries are sent from and to Jabber addresses, while roster information is available which could be used for access control at various levels of granularity
  • XMPP is well suited for async interaction; stored queries could return results days or weeks later (eg. job search)
  • The DISO project is integrating some PHP XMPP code with WordPress; SparqlPress is doing the same with SPARQL

Both SPARQL and XMPP have mechanisms for batched results; it isn’t clear which, if either, to use here.

XMPP also has some service discovery mechanisms; I hope we’ll wire up a way to inspect each party on a buddylist roster, to see who has SPARQL data available. I made a diagram of this last summer, but no code to go with it yet. There is also much work yet to do on access control systems for SPARQL data, eg. using oauth. It is far from clear how to integrate SPARQL-wide ideas on that with the specific possibilities offered within an XMPP binding. One idea is for SPARQL-defined FOAF groups to be used to manage buddylist rosters, “friend groups”.

Where are we with code? I have a longer page in FOAF SVN for the Jqbus Java stuff, and a variant on this writeup (includes better alt text for the images and more detail). The Java code is available for download. Chris’s Python code is still up on his site. I doubt any of these can currently talk to each other properly, but they show how to deal with XMPP in different languages, which is useful in itself. For Perl people, I’ve uploaded a copy of Steve’s code.

The Java stuff has a nice GUI for debugging, thanks to Smack. I just tried a copy. Basically I can run a client and a server instance from the same filetree, passing them my LiveJournal and Google Talk jabber account details. The screenshot here shows the client on the left displaying the XML-encoded SPARQL results, and the server on the right displaying the query that arrived. That’s about it really. Nothing else ought to be that different from normal SPARQL querying, except that it is being done in an infrastructure that is more socially-grounded and decentralised than the classic HTTP Web service model.

JQbus debug

Graph URIs in SPARQL: Using UUIDs as named views

I’ve been using the SPARQL query language to access a very ad-hoc collection of personal and social graph data, and thanks to Bengee’s ARC system this can sit inside my otherwise ordinary WordPress installation. At the moment, everything in there is public, but lately I’ve been discussing oauth with a few folk as a way of mediating access to selected subsets of that data. Which means the data store will need some way of categorising the dozens of misc data source URIs. There are a few ways to do this; here I try a slightly non-obvious approach.

Every SPARQL store can have many graphs inside, named by URI, plus optionally a default graph. The way I manage my store is a kind of structured chaos, with files crawled from links in my own data and my friends’. One idea for indicating the structure of this chaos is to keep “table of contents” metadata in the default graph. For example, I might load up <http://danbri.org/foaf.rdf> into a SPARQL graph named with that URI. And I might load up <http://danbri.org/evilfoaf.rdf> into another graph, also using the retrieval URI to identify the data within my SPARQL store. Now, two points to make here: firstly, the SPARQL spec does not mandate that we do things this way. An application that, for example, wanted to keep historical versions of a FOAF or RSS schema document could keep the triples from each version in a different named graph, perhaps named with UUIDs. The second point is that there are many different “meta” claims we want to store about our datasets, and mixing them all into the store-wide “default graph” could be rather limiting, especially if we don’t want to unconditionally believe even those claims.

In the example above, I have data from running a PGP check against foaf.rdf (which passed) and evilfoaf.rdf (which doesn’t pass a check against my PGP identity). Now where do I store the results of this PGP checking? Perhaps the default graph, but maybe that’s messy. The idea I’m playing with here is that UUIDs are reasonable identifiers, and that perhaps we’ll find ourselves sharing common UUIDs across stores.

Go back to my sent-mail FOAF crawl example from yesterday. How far did I get? Well the end result was a list of URLs which I looped through, and loaded into my big chaotic SPARQL store. If I run the following query, I get a list of all the data graphs loaded:

SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o . } }

This reveals 54 URLs, basically everything I’ve loaded into ARC in the last month or so. Only 30 of these came from yesterday’s hack, which used Google’s new Social Graph API to allow me to map from hashed mailbox IDs to crawlable data URIs. So today’s game is to help me disentangle the 30 from the 54, and superimpose them on each other, but not always mixed with every other bit of information in the store. In other words, I’m looking for a flexible, query-based way of defining views into my personal data chaos.

So, what I tried. I took the result of yesterday’s hack, a file of data URIs called urls.txt. Then I modified my commandline dataloader script (yeah yeah this should be part of wordpress). My default data loader simply takes each URI, gets the data, and shoves it into the store under a graph name which was the URI used for retrieval. What I did today is, additionally, make a “table of contents” overview graph. To avoid worrying about names, I generated a UUID and used that. So there is a graph called <uuid:420d9490-d73f-11dc-95ff-0800200c9a66> which contains simple asserts of the form:

<http://www.advogato.org/person/benadida/foaf.rdf> a <http://xmlns.com/foaf/0.1/Document> .
<http://www.w3c.es/Personal/Martin/foaf.rdf> a <http://xmlns.com/foaf/0.1/Document> .
<http://www.iandickinson.me.uk/rdf/foaf.rdf> a <http://xmlns.com/foaf/0.1/Document> . # etc

…for each of the 30 files my crawler loaded into the store.

This lets us use <uuid:420d9490-d73f-11dc-95ff-0800200c9a66> as an indirection point for information related to this little mailbox crawler hack. I don’t have to “pollute” the single default graph with this data. And because the uuid: was previously meaningless, it is something we might decide makes sense to use across data-visibility boundaries, ie. you might use the same UUID in your own SPARQL store, so we can share queries and app logic.
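The minting-and-loading step is tiny; here’s a stdlib-only sketch (the crawled URL list is shortened from the one above, and the output just mimics the Turtle shown earlier):

```python
# Mint a fresh UUID-named graph URI and build its "table of contents"
# triples, one per crawled file.
import uuid

toc_graph = "uuid:" + str(uuid.uuid1())  # time-based UUID, like the one in the post
crawled = [
    "http://www.advogato.org/person/benadida/foaf.rdf",
    "http://www.w3c.es/Personal/Martin/foaf.rdf",
    "http://www.iandickinson.me.uk/rdf/foaf.rdf",
]
doc_type = "http://xmlns.com/foaf/0.1/Document"
toc_triples = [(url, "a", doc_type) for url in crawled]
for s, _, o in toc_triples:
    print(f"<{s}> a <{o}> .")
```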

Here’s a simple query. It says, “ask the mailbox crawler table of contents graph (which we call uuid:420d9etc…) for all things it knows about that are a Document”. Then it says “ask each of those documents, for everything in it”. And then the SELECT clause returns all the property URIs. This gives a first-level overview of what’s in the Web of data files found by the crawl. Query was:

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?p WHERE {
  GRAPH <uuid:420d9490-d73f-11dc-95ff-0800200c9a66> { ?crawled a :Document . }
  GRAPH ?crawled { ?s ?p ?o . }
}


I’ll just show the first page full of properties it found; for the rest see the link to the complete set. Since W3C’s official SPARQL doesn’t have aggregates, we’d need to write application code (or use something like the SPARQL+ extensions) to get property usage counts.

So my little corner of the Web includes properties that extend FOAF documents to include blood types, countries that have been visited, language skills that people have, music information, and even drinking habits. But remember that this comes from my corner of the Web – people I’ve corresponded with – and probably isn’t indicative of the wider network. Then again, that’s what grassroots decentralised data is all about. The folk who published this data didn’t need to ask permission of any committee to do so; they just mixed in what they wanted to say, alongside terms more widely used like foaf:Person and foaf:name. This is the way it should be: ask forgiveness, not permission, from the language lawyers and standardistas.
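Since the official SPARQL of the day lacked aggregates, property usage counts are easy enough application-side. Here the query results are stood in for by a few invented rows:

```python
# Count property usage from (graph, subject, predicate, object) rows,
# as an application-side substitute for SPARQL aggregates.
from collections import Counter

rows = [  # invented sample bindings standing in for the crawl results
    ("http://danbri.org/foaf.rdf", "#me", "http://xmlns.com/foaf/0.1/name", "Dan"),
    ("http://danbri.org/foaf.rdf", "#me", "http://xmlns.com/foaf/0.1/knows", "#libby"),
    ("http://example.org/x.rdf", "#libby", "http://xmlns.com/foaf/0.1/name", "Libby"),
]
counts = Counter(p for _g, _s, p, _o in rows)
for prop, n in counts.most_common():
    print(n, prop)
```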

Ok, so let’s dig deeper into the messy data I crawled up from my sent-mail contacts.

Here’s one that finds some photos, either using FOAF’s :img or :depiction properties:

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT * WHERE {
  GRAPH <uuid:420d9490-d73f-11dc-95ff-0800200c9a66> { ?crawled a :Document . }
  GRAPH ?crawled {
    { ?x :depiction ?y1 } UNION { ?x :img ?y2 }
  }
}

Here’s another that asks the crawl results for names and homepages it found:

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT * WHERE {
  GRAPH <uuid:420d9490-d73f-11dc-95ff-0800200c9a66> { ?crawled a :Document . }
  GRAPH ?crawled { [ :name ?n; :homepage ?h ] }
}

To recap, the key point here is that social data in a SPARQL store will be rather chaotic. Information will often be missing, and often be extended. It will come from a variety of parties, some of whom you trust, some of whom you don’t know, and a few of whom will be actively malicious. Later down the line, subsets of the data will need different permissioning: if I export a family tree from ancestry.co.uk, I don’t want everyone to be able to do a SELECT for mother’s maiden name and my date of birth.

So what I suggest here, is that we can use UUID-named graphs as an organizing structure within an otherwise chaotic SPARQL environment. The demo here shows how one such graph can be used as a “table of contents” for other graphs associated with a particular app — in this case, the Google-mediated sentmail crawling app I made yesterday. Other named views might be: those data files from colleagues, those files that are plausibly PGP-signed, those that contain data structured according to some particular application need (eg. calendar, addressbook, photos, …).

Outbox FOAF crawl: using the Google SG API to bring context to email

OK here’s a quick app I hacked up on my laptop largely in the back of a car, ie. took ~1 hour to get from idea to dataset.

Idea is the usual FOAFy schtick about taking an evidential rather than purely claim-based approach to the ‘social graph’.

We take the list of people I’ve emailed, and we probe the social data Web to scoop up user profile etc descriptions of them. Whether this is useful will depend upon how much information we can scoop up, of course. Eventually oauth or similar technology should come into play, for accessing non-public information. For now, we’re limited to data shared in the public Web.

Here’s what I did:

  1. I took the ‘sent mail’ folder on my laptop (a Mozilla Thunderbird installation).
  2. I built a list of all the email addresses I had sent mail to (in Cc: or To: fields). Yes, I grepped.
    • working assumption: these are humans I’m interacting with (ie. that I “foaf:know”)
    • I could also have looked for X-FOAF: headers in mail from them, but this is not widely deployed
    • I threw away anything that matched some company-related mail hosts I wanted to exclude.
  3. For each email address, scrubbed of noise, I prepended mailto: and generated a SHA1 value.
  4. The current Google SG API can be consulted about hashed mailboxes using URIs in the unregistered sgn: space, for example my hashed gmail address (danbrickley@gmail.com) gives sgn://mboxsha1/?pk=6e80d02de4cb3376605a34976e31188bb16180d0 … which can be dropped into the Google parameter playground. I’m talking with Brad Fitzpatrick about how best these URIs might be represented without inventing a new URI scheme, btw.
  5. Google returns a pile of information, indicating connections between URLs, mailboxes, hashes etc., whether verified or merely claimed.
  6. I crudely ignore all that, and simply parse out anything beginning with http:// to give me some starting points to crawl.
  7. I feed these to an RDF FOAF/XFN crawler attached to my SparqlPress-augmented WordPress blog.
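Step 3 above is FOAF’s familiar mbox_sha1sum recipe: SHA1 over the full mailto: URI, not the bare address. A minimal version:

```python
# SHA1-hash a mailbox the way FOAF's mbox_sha1sum does: hash the
# complete "mailto:" URI rather than the bare email address.
import hashlib

def mbox_sha1sum(email):
    return hashlib.sha1(("mailto:" + email.strip()).encode("ascii")).hexdigest()

h = mbox_sha1sum("danbrickley@gmail.com")
# This hash is what gets dropped into the sgn: URI form mentioned above
print("sgn://mboxsha1/?pk=" + h)
```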

With no optimisation, polish or testing, here (below) is the list of documents this process gave me. I have not yet investigated their contents.

The goal here is to find out more about the people who are filling my mailbox, so that better UI (eg. auto-sorting into folders, photos, task-clustering etc) could help make email more contextualised and sanity-preserving.  The next step I think is to come up with a grouping and permissioning model for this crawled FOAF/RDF in the WordPress SPARQL store, ie. a way of asking queries across all social graph data in the store that came from this particular workflow. My store has other semi-personal data, such as the results of crawling the groups of people who have successfully commented in blogs and wikis that I run. I’m also looking at a Venn-diagram UI for presenting these different groups in a way that allows other groups (eg. “anyone I send mail to OR who commented in my wiki”) to be defined in terms of these evidence-driven primitive sets. But that’s another story.

FOAF files located with the help of the Google API:

For a quick hack, I’m really pleased with this. The sent-mail folder I used isn’t even my only one. A rewrite of the script over IMAP (and proprietary mail host APIs) could be quite fun.

SPARQL results in spreadsheets

I made a little progress with SPARQL and spreadsheets. On Kingsley Idehen’s advice, I revisited OpenOffice (NeoOffice on MacOSX) and used the HTML table import utility.

It turns out that if I have an HTTP URL for a GET query to ARC’s SPARQL endpoint, and I pass in the non-standard format=htmltab parameter, I get an HTML table which can directly be understood by NeoOffice. I’ll write this up properly another time, but in brief, I hacked the HTML form generator for the endpoint to use GET instead of POST, making it easier to get URLs associated with SPARQL queries. Then in NeoOffice, under “Insert” / “Link to External Data…”, paste in that URL, hit return, select the appropriate HTML table … and the data should import.
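Building that kind of GET URL is straightforward; in this sketch the endpoint address is a placeholder, and format=htmltab is ARC’s non-standard parameter mentioned above:

```python
# Build a GET URL for a SPARQL query with ARC's non-standard
# format=htmltab parameter; the endpoint address is a stand-in.
from urllib.parse import urlencode

endpoint = "http://example.org/sparql"  # placeholder for my ARC endpoint
query = ("SELECT DISTINCT ?g ?name WHERE { "
         "GRAPH ?g { [] <http://xmlns.com/foaf/0.1/name> ?name } }")
url = endpoint + "?" + urlencode({"query": query, "format": "htmltab"})
# This is the URL to paste into NeoOffice's "Link to External Data..."
print(url)
```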

Here’s some fragments of FOAF and related info in NeoOffice (from a query that asked for the data source URI, a name and a homepage URI):

Open your data?

I tried this on a version of Excel for MacOSX (an old one, 2001), but failed to get HTML import working. Instead I just copied the info over from NeoOffice, since I wanted to try learning Excel’s Pivot and charting tools. Here is a quick example derived from an Excel analysis of the RDF properties that appear in the different graphs in my SPARQL store. The imported data was just two columns: a graph URI ?g and a property URI ?property:

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?g ?property
WHERE { GRAPH ?g { ?thing ?property ?value . } }
ORDER BY ?property

Excel (and NeoOffice) offer Pivot and charting tools (similar to a simplified version of what you’ll find under the Business Intelligence banner elsewhere) that take simple flat tables like this and (as they say) slice, dice, count and summarise. Here’s an unreadable chart I managed to get it to generate:

SPARQL stats
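The pivot step itself can be mimicked in a few lines, summarising the two-column result as “how many graphs use each property”; the (?g, ?property) rows below are invented samples of the query output:

```python
# Pivot-style summary of the two-column (?g, ?property) result:
# count how many distinct graphs use each property.
from collections import defaultdict

rows = [  # invented sample (?g, ?property) bindings
    ("http://danbri.org/foaf.rdf", "http://xmlns.com/foaf/0.1/name"),
    ("http://danbri.org/foaf.rdf", "http://xmlns.com/foaf/0.1/knows"),
    ("http://example.org/other.rdf", "http://xmlns.com/foaf/0.1/name"),
]
graphs_using = defaultdict(set)
for g, prop in rows:
    graphs_using[prop].add(g)

for prop, gs in sorted(graphs_using.items(), key=lambda kv: -len(kv[1])):
    print(len(gs), prop)
```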

Am still learning the ropes here… but I like the idea of combining desktop data analysis and reports with an ARC SPARQL store attached to my blog, and populated with data crawled from my web2ish ‘network neighbourhood’ (ie. FOAF, RSS, XFN, SIOC etc). Work in spare-time progress.

Shadows on the Web

This is a quick note, inspired by the recent burst of posts passing through Planet RDF about RDF, WebArch and a second “shadow” Web. Actually it’s not about that thread at all, except to note that Ian Davis asks just the right kind of questions when thinking about the WebArch claim that the Web ships with a hardcoded, timeless and built-in ontology, carving up the Universe between “information resources” and “non-information resources”. Various Talis folk are heading towards Bristol this week, so I expect we’ll pick up this theme offline again shortly! (Various other Talis folk – I’m happy to be counted as a Talis person, even if they choose the worst picture of me for their blog :)

Anyhow, I made my peace with the TAG’s http-range-14 resolution long ago, and prefer the status quo to a situation where “/”-terminated namespaces are treated as risky and broken (as was the case pre-2005). But RDFa brings the “#” WebArch mess back to the forefront, since RDF and HTML can be blended within the same environment. Perhaps – reluctantly – we do need to revisit this perma-thread one last time. But not today! All I wanted to write about right now is the “shadow” metaphor. It crops up in Ian’s posts, and he cites Rob McCool’s writings. Since I’m unequipped with an IEEE login, I’m unsure where the metaphor originated. Ian’s usage is in terms of RDF creating a redundant and secondary structure that ordinary Web users don’t engage with, ie. a “shadow of the real thing”. I’m not going to pursue that point here, except to say I have sympathies, but am not too worried. GRDDL, RDFa etc help.

Instead, I’m going to suggest that we recycle the metaphor, since (when turned on its head) it gives an interesting metaphorical account of what the SW is all about. Like all one-line metaphorical explanations of complex systems, the real value comes in picking it apart, and seeing where it doesn’t quite hold:

“Web documents are the shadows cast on the Web by things in the world.”

What do I mean here? Perhaps this is just pretentious, I’m not sure :) Let’s go back to the beginnings of the SW effort to picture this. In 1994 TimBL gave a Plenary talk at the first International WWW Conference. Amongst other things, he announced the creation of W3C, and described the task ahead of us in the Semantic Web community. This was two years before we had PICS, and three years before the first RDF drafts. People were all excited about Mosaic, to help date this. But even then, the description was clear, and well illustrated in a series of cartoon diagrams, eg:

TimBL 1994 Web semantics diagram

I’ve always liked these diagrams, and the words that went with them.

So much so that when I had the luxury of my own EU project to play with, they got reworked for us by Liz Turner: we made postcards and tshirts, which Libby delighted in sending to countless semwebbers to say “thanks!”. Here’s the postcard version:

SWAD-Europe postcard

The basic idea is just that Web documents are not intrinsically interesting things; what’s interesting, generally, is what they’re about. Web users don’t generally care much about sequences of unicode characters or bytes; we care about what they mean in our real lives. The objects they’re about, the agreements they describe, the real-world relationships and claims they capture. There is a Web of relationships in the world, describable in countless ways by countless people, and the information we put into the Web is just a pale shadow of that.

The Web according to TimBL, back in ’94:

To a computer, then, the web is a flat, boring world devoid of meaning. This is a pity, as in fact documents on the web describe real objects and imaginary concepts, and give particular relationships between them. For example, a document might describe a person. The title document to a house describes a house and also the ownership relation with a person. [...]

On this thinking, Web documents are the secondary thing; the shadow. What matters is the world and its mapping into digital documents, rather than the digital stuff alone. The shadow metaphor breaks down a little, if you think of the light source as something like the Sun, ie. with each real-world entity shadowed by a single (authoritative?) document in the Web. Life’s not like that, and the Web’s not like that either. Objects and relationships in the real world show up in numerous ways on the Web; or sometimes (thankfully) not at all. If Web documents can be thought of as shadows, they’re shadows cast in many lights, many colours, and by multiple independent light sources. Some indistinct, soft and flattering; occasionally frustratingly vague. Others bright, harshly precise and rigorously scientific (and correspondingly expensive and expert to use). But the core story is that it’s the same shared world that we’re seeing in all these different lights, and that the Web and the world are both richer because life is illustrated from multiple perspectives, and because the results can be visible to all.

The Semantic Web is, on this story, not a shadow of the real Web, but a story about how the Web is a shadow of the world. The Semantic Web is, in fact, much more like a 1970s disco than a shadow…

cvs2svn migration tips

Some advice from Garrett Rooney on Subversion. I was asking about moving the historical records for the FOAF project (xmlns.com etc) from dusty old CVS into shiny new Subversion. In particular, from CVS where I own and control the box, to Subversion where I’m a non-root normal mortal user (ie. Dreamhost customer).

The document records are pretty simple (no branches etc.). In fact only the previous versions of the FOAF RDF namespace document are of any historical interest at all. But I wanted to make sure that I could get things out again easily, without owning the Svn repository or begging the Dreamhost sysadmins.

Answer: “…a slightly qualified yes. assuming the subversion server is running svn 1.4 or newer a non-root user can use the svnsync tool to copy the contents (i.e. entire history they have read access to) of one repository into a new one. with servers before 1.4 it’s still possible to extract the information, but it’s more difficult.”

And finding the version number? “if they’re letting you access svn via http you can just browse a repository and by default it’ll tell you the server version in the html view of the repository“.

Easily done, we’re on 1.4.2. That’s good.

Q: Any recommendations for importing from CVS?
A: “Converting from cvs to svn is a hellish job, due to the amount of really necessary data that cvs doesn’t record. cvs2svn has to infer a lot of it, and it does a damn good job of it considering what a pain in the ass it is. I’m immediately skeptical of anyone who goes out and writes another conversion tool ;-)”

“If you don’t have the ability to load a dumpfile into the repository you can load it into a local repos and then use svnsync to copy that into a remote empty repository. svnsync unfortunately won’t let you load into a non-empty repository; if you want to do that you need to use svnadmin load, which requires direct access to the repository. most hosting sites will give you some way to do that sort of thing though.”

Thanks Garrett!

Flock browser RDF: describing accounts

Flock is a mozilla-based browser that emphasises social and “web2” themes. From a social-network-mobility thread, I’m reminded to take another look at Flock by Ian McKellar’s recent comments…

I wrote a bunch of that code when I was at Flock.

It’s all in RDF, I think it’s currently in a SQLite triple store in the user’s profile directory

I took a look. Seems not to be in SQLite files, at least in my fresh installation. Instead there is a file flock-data.rdf which looks to be the product of Mozilla’s ageing RDF engine. I had to clean things up slightly before I could process it with modern (Redland in this case) tools, since it uses Netscape’s pre-RDFCore datatyping notation:

cat flock-data.rdf | sed -e 's/NC:parseType/RDF:datatype/' > flock-data-fixed.rdf
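The same cleanup as a Python function, for anyone sed-averse; it just rewrites Netscape’s pre-RDFCore datatyping attribute so modern RDF parsers cope:

```python
# Rewrite Netscape's NC:parseType attribute as RDF:datatype, the same
# substitution as the sed one-liner above.
def fix_flock_rdf(xml_text):
    return xml_text.replace("NC:parseType", "RDF:datatype")

sample = '<RDF:li NC:parseType="Integer">5</RDF:li>'
print(fix_flock_rdf(sample))
```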

With that tweak out of the way, I can nose around the data using SPARQL. I’m interested in the “social graph” mobility discussions, and in mapping FOAF usage to Brad Fitzpatrick’s model (see detail in his slides).

The model in the writeup from Brad and David Recordon has nodes (standing roughly for accounts) and “is” relations amongst them where two accounts are known to share an owner, or “claims” relations to record a claim associated with one such account of shared ownership with another.

For example in my Flickr account (username “danbri”) I might claim to own the del.icio.us account (also username “danbri”). However you’d be wise not to believe flickr-me without more proof; this could come from many sources and techniques. Their graph model is focussed on such data.

FOAF by contrast emphasises the human social network, with the node graph being driven by “knows” relationships amongst people. We do have the OnlineAccount construct, which is closer to the kind of nodes we see in the “Thoughts on the Social Graph” paper, although they also include nodes for email, IM and hashed mailbox, I believe. The SIOC spec elaborates on this level, by sub-classing its notion of User from OnlineAccount rather than from Person.

So anyway, I’m looking at transformations between such representations, and Flock seems a nice source of data, since it watches over my browsing and, when I use a site it knows to be “social”, it keeps a record of the account in RDF. For now, here’s a quick query to give you an idea of the shape of the data:

PREFIX fl: <http://flock.com/rdf#>
PREFIX nc: <http://home.netscape.com/NC-rdf#>
SELECT *
FROM <flock-data-fixed.rdf>
WHERE {
  ?x fl:flockType "Account" .
  ?x nc:Name ?name .
  ?x nc:URL ?url .
  ?x fl:serviceId ?serviceId .
  ?x fl:accountId ?accountId .
}

Running this with Redland’s “roqet” utility in JSON mode gives:

{
  "head": {
    "vars": [ "x", "name", "url", "serviceId", "accountId" ]
  },
  "results": {
    "ordered" : false,
    "distinct" : true,
    "bindings" : [
      {
        "x" : { "type": "uri", "value": "urn:flock:ljdanbri" },
        "name" : { "type": "literal", "value": "danbri" },
        "url" : { "type": "literal", "value": "http://www.livejournal.com/portal" },
        "serviceId" : { "type": "literal", "value": "@flock.com/people/livejournal;1" },
        "accountId" : { "type": "literal", "value": "danbri" }
      },
      {
        "x" : { "type": "uri", "value": "urn:typepad:service:danbri" },
        "name" : { "type": "literal", "value": "danbri" },
        "url" : { "type": "literal", "value": "http://www.typepad.com" },
        "serviceId" : { "type": "literal", "value": "@flock.com/blog/typepad;1" },
        "accountId" : { "type": "literal", "value": "danbri" }
      },
      {
        "x" : { "type": "uri", "value": "urn:flock:flickr:account:35468151816@N01" },
        "name" : { "type": "literal", "value": "danbri" },
        "url" : { "type": "literal", "value": "http://www.flickr.com/people/35468151816@N01/" },
        "serviceId" : { "type": "literal", "value": "@flock.com/?photo-api-flickr;1" },
        "accountId" : { "type": "literal", "value": "35468151816@N01" }
      },
      {
        "x" : { "type": "uri", "value": "urn:wordpress:service:danbri" },
        "name" : { "type": "literal", "value": "danbri" },
        "url" : { "type": "literal", "value": "http://www.wordpress.com" },
        "serviceId" : { "type": "literal", "value": "@flock.com/people/wordpress;1" },
        "accountId" : { "type": "literal", "value": "danbri" }
      },
      {
        "x" : { "type": "uri", "value": "urn:flock:youtube:modanbri" },
        "name" : { "type": "literal", "value": "modanbri" },
        "url" : { "type": "literal", "value": "http://www.youtube.com/profile?user=modanbri" },
        "serviceId" : { "type": "literal", "value": "@flock.com/?photo-api-youtube;1" },
        "accountId" : { "type": "literal", "value": "modanbri" }
      },
      {
        "x" : { "type": "uri", "value": "urn:delicious:service:danbri" },
        "name" : { "type": "literal", "value": "danbri" },
        "url" : { "type": "literal", "value": "http://del.icio.us/danbri" },
        "serviceId" : { "type": "literal", "value": "@flock.com/delicious-service;1" },
        "accountId" : { "type": "literal", "value": "danbri" }
      }
    ]
  }
}

You can see there are several bits of information to squeeze in here. Which reminds me to chase up the “accountHomepage” issue in FOAF. They sometimes use a generic URL, eg. http://www.livejournal.com/portal, while other times an account-specific one, eg. http://del.icio.us/danbri. They also distinguish an nc:Name property of the account from a fl:accountId, allowing Flickr’s human readable account names to be distinguished from the generated ID you’re originally assigned. The fl:serviceId is an internal software service identifier it seems, following Mozilla conventions.

Last little experiment: a variant of the above query, but using CONSTRUCT instead of SELECT, to transform into FOAF’s idiom for representing accounts:

CONSTRUCT {
  ?x a foaf:OnlineAccount .
  ?x foaf:name ?name .
  ?x foaf:accountServiceHomepage ?url .
  ?x foaf:accountName ?accountId .
}

Seems to work… There’s loads of other stuff in flock-data.rdf too, but I’ve not looked around much. For example, you can search tagged URLs:

SELECT ?url WHERE { [ fl:tag "funny"; nc:URL ?url ] }

An OpenID roster

I recently added OpenID support to my WordPress blog installation, and asked folks to test it out by leaving OpenID-authenticated comments. Seems to be working OK. I had a couple of issues with over-zealous spam filters, and people have reported a few glitches with specific providers (see comments for detail). OpenID doesn’t remove the need for spam filters btw, since anyone can run an OpenID server.

I said I wanted to see about a machine-readable export of the collected ID list. While I’ve not got to that, SQL access is simple, so I’ll show that here for now. Perhaps someone else has time to hack on the Web-export problem?

mysql> select user_id, url from wp_openid_identities;

| user_id | url                                                |
|      46 | http://danbri.org/                                 |
|      47 | http://piercarlos.myopenid.com/                    |
|      48 | http://claimid.com/nslater                         |
|      49 | http://nmg.livejournal.com/                        |
|      50 | http://ivanherman.pip.verisignlabs.com/            |
|      51 | http://simone.pip.verisignlabs.com/                |
|      52 | http://tommorris.org/                              |
|      53 | http://mijnopenid.nl/is/joe                        |
|      54 | http://dagoneye.it/me.html                         |
|      55 | http://kidehen.idehen.net/dataspace/person/kidehen |
|      56 | http://www.wasab.dk/morten/                        |
|      57 | http://aberingi.pip.verisignlabs.com/              |
|      59 | http://decafbad.com/                               |
|      58 | http://dltj.org/                                   |
|      60 | http://openid.aol.com/koaliemoon                   |
|      61 | http://klokie.com/                                 |
|      62 | http://petenixey.myopenid.com/                     |
17 rows in set (0.02 sec)

I’m calling this list an “openid roster”, by analogy with XMPP/Jabber. I’m not sure what associations the word has in others’ minds; from a quick search, it was originally used in military contexts, but is now a general-purpose term for a list of people. So, … here is an OpenID roster derived from my blog comments system.
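The Web-export itch mentioned above could be scratched minimally by serialising those rows to JSON; a real version would read them from the wp_openid_identities table, rather than pasting in a subset as done here:

```python
# Turn (user_id, url) roster rows into a machine-readable JSON export;
# the rows are a pasted-in subset of the SQL output above.
import json

rows = [
    (46, "http://danbri.org/"),
    (47, "http://piercarlos.myopenid.com/"),
    (48, "http://claimid.com/nslater"),
]
export = json.dumps({"roster": [url for _uid, url in rows]}, indent=2)
print(export)
```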

FOAF vocabulary management details

I have an action from the Vocabulary Management task force of the W3C SW Best Practices Working Group to document the way the FOAF namespace currently works. Here goes.

The FOAF RDF vocabulary is named by the URI http://xmlns.com/foaf/0.1/ and has information available at that URI in both machine-friendly and human-friendly form. When that URI is dereferenced by HTTP, the default representation is currently HTML-based.

Aside: I say “based” because the markup, while basically being XHTML, also includes inline RDF/XML describing the FOAF vocabulary. The HTML TF of SWBPD WG is looking at ways of making such documents validatable; I hope a future version will be valid in some way, as well as presentable in mainstream browsers. This might be via RDF/A in XHTML2, RDF/A back-ported to an early XHTML variant, a GRDDL-based transform, or perhaps a CDF document if they’re deployable in legacy browsers.

Backing up to the big picture: we want to make it easy for both people and machines to find out what they need. So the basic idea is that there’s an HTML document describing FOAF for humans, and a set of RDF statements describing FOAF for machines. The RDF statements are available in several ways. As embedded RDF/XML in the HTML page. As a separate index.rdf document (this should be LINK REL’d from the former), and as a content-negotiable representation of the main URI, available to clients that send an “Accept: application/rdf+xml” header.

In addition to this, each FOAF term (assuming the Apache .htaccess is up to date; this might not be currently true) is redirected by the Web server to the main FOAF namespace URI. For example, http://xmlns.com/foaf/0.1/Person. This should in the future be done with a 303 HTTP redirection code, to be consistent with the recent W3C TAG decision on http-range-14 (apologies for the jargon and lack of links to explanation). Because FOAF term URIs don’t contain a # [edit: I wrote / previously; thanks mortenf], note that the redirection can’t point down into a sub-section of the HTML document. To make the HTML document more usable, we should probably put a better table of contents for terms nearer to the top of the page, to allow someone to navigate quickly to the description of that term.
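The two behaviours described — content negotiation on the namespace URI, and per-term redirects back to it — can be sketched as plain functions. This is a toy model only; the real setup lives in Apache configuration, not Python:

```python
# Toy model of the xmlns.com setup: conneg on the namespace document,
# plus 303 redirects from term URIs back to the namespace URI.
FOAF_NS = "http://xmlns.com/foaf/0.1/"

def representation_for(accept_header):
    # RDF-aware clients send "Accept: application/rdf+xml"
    if "application/rdf+xml" in accept_header:
        return ("index.rdf", "application/rdf+xml")
    return ("index.html", "text/html")

def term_redirect(term_uri):
    # e.g. .../foaf/0.1/Person -> 303 redirect to the namespace document
    if term_uri.startswith(FOAF_NS) and term_uri != FOAF_NS:
        return (303, FOAF_NS)
    return (200, term_uri)

print(representation_for("application/rdf+xml"))
print(term_redirect(FOAF_NS + "Person"))
```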

One last point: each term defined in FOAF is accompanied not only by a label and comment (standard from RDFS); it also has a chunk of HTML markup, with cross-references to other terms. At the moment, this is not made available in any machine-readable form. There might be some scope here for common practice with SKOS and other vocabularies?

Sorry for the hasty writeup, I just wanted to get this recorded as a starting point…