Site recovery

Busy sysadmin week. The main FOAF site is back, now hosted on Amazon EC2. Thanks to Stephane Corlosquet for all the time he spent fixing up the Drupal installation after the recent server compromise. I’ve also moved over danbri.org (well, DNS is propagating), and migrated my blog into a completely fresh WordPress installation. The FOAF namespace site and Subversion server are safe but not yet migrated to new hosting. Various documents from danbri.org are still offline while I scrub all the HTML, .js, .php etc for mischief. The old rdfweb.org site is also offline. I’d rather move slowly and carefully than mess up this process.

This is a test post from the new WordPress to see if it works. Note that I’ve stripped all plugins and addons, and will be much more conservative about trying extensions in the future. In particular, OpenID-based commenting isn’t working right now, but it’s on the todo list. One of the most disconcerting things about being hacked is when the site is also your OpenID. I’m wondering how to better partition things in the future; perhaps using id.danbri.org might give some more options?

OpenSocial schema extraction: via Javascript to RDF/OWL

OpenSocial’s API reference describes a number of classes (‘Person’, ‘Name’, ‘Email’, ‘Phone’, ‘Url’, ‘Organization’, ‘Address’, ‘Message’, ‘Activity’, ‘MediaItem’, …), each of which has various properties whose values are either strings, references to instances of other classes, or enumerations. I’d like to make them usable beyond the confines of OpenSocial, so I’m making an RDF/OWL version. OpenSocial’s schema is an attempt to provide an overarching model for much of present-day mainstream ‘social networking’ functionality, including dating, jobs etc. Such a broad effort is inevitably somewhat open-ended, and so may benefit from being linked to data from other complementary sources.

With a bit of help from the shindig-dev list, #opensocial IRC, and Kevin Brown and Kevin Marks, I’ve tracked down the source files used to represent OpenSocial’s data schemas: they’re in the opensocial-resources SVN repository on code.google.com. There is also a downstream copy in the Apache Shindig SVN repo (I’m not very clear on how versioning and evolution is managed between the two). They’re Javascript files, structured so that documentation can be generated via javadoc. The Shindig-PHP schema diagram I posted recently is a representation of this schema.

So – my RDF version. At the moment it is merely a list of classes and their properties (expressed via rdfs:domain), written using RDFa/HTML. I don’t yet define rdfs:range for any of these, nor handle the enumerated values (opensocial.Enum.Smoker, opensocial.Enum.Drinker, opensocial.Enum.Gender, opensocial.Enum.LookingFor, opensocial.Enum.Presence) that are defined in enum.js.
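
To give a flavour, here is a minimal hand-written sketch of how one class and one property could be declared in RDFa; the os: prefix and namespace URI are placeholders rather than the vocabulary’s real namespace, and aboutMe is just an illustrative property name:

<!-- sketch only: os: is a stand-in prefix, not the published namespace -->
<div xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
     xmlns:os="http://example.org/opensocial/vocab#">
  <!-- declare a class -->
  <div about="[os:Person]" typeof="rdfs:Class">
    <span property="rdfs:label">Person</span>
  </div>
  <!-- declare a property, attached to the class via rdfs:domain -->
  <div about="[os:aboutMe]" typeof="rdf:Property" rel="rdfs:domain" resource="[os:Person]">
    <span property="rdfs:label">aboutMe</span>
  </div>
</div>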

The code is all in the FOAF SVN, and accessible via “svn co http://svn.foaf-project.org/foaftown/opensocial/vocab/”. I’ve also taken the liberty of including a copy of the OpenSocial *.js files, and Mozilla’s Rhino Javascript interpreter js.jar in there too, for self-containedness.

The code in schemarama.js will simply generate an RDFa/XHTML page describing the schema. This can be checked using the W3C validator, or converted to RDF/XML with the pyRDFa service at W3C.

I’ve tested the output using the OwlSight/pellet service from Clark & Parsia, and with Protege 4. It’s basic but seems OK and a foundation to build from. Here’s a screenshot of the output loaded into Protege (which btw finds 10 classes and 99 properties).

An example view from protege, showing the class browser in one panel, and a few properties of Person in another.

OK so why might this be interesting?

  • Using OpenSocial-derived vocabulary, and OpenSocial-exported data, in other contexts
    • databases (queryable via SPARQL)
    • mixed with FOAF
    • mixed with Microformats
    • published directly in RDFa/HTML
  • Mapping OpenSocial terms to other contact and social network schemas

This suggests some goals for continued exploration:

It should be possible to use “OpenSocial markup” in an ordinary homepage or blog (HTML or XHTML), drawing on any of the descriptive concepts they define, by using RDFa’s markup notation. As Mark Birbeck pointed out recently, RDFa is an empty vessel – it does not define any descriptive vocabulary. Instead, the RDF toolset offers an environment in which vocabulary from multiple independent sources can be mixed and merged quite freely. The hard work of the OpenSocial team in analysing social network schemas and finding commonalities, or of the Microformats scene in defining simple building-block vocabularies … these can hopefully be combined within a single environment.
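
As a rough sketch of what that mixing might look like in practice (again with a placeholder os: prefix and an invented property name, alongside real FOAF terms), an ordinary XHTML page could carry something like:

<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     xmlns:os="http://example.org/opensocial/vocab#"
     about="#me" typeof="foaf:Person">
  <!-- a FOAF property and an OpenSocial-derived property describing the same person -->
  <span property="foaf:name">Dan Brickley</span>
  <span property="os:aboutMe">Tinkering with RDFa, FOAF and social network schemas.</span>
  <a rel="foaf:homepage" href="http://danbri.org/">homepage</a>
</div>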

Visual SPARQL query tools

Quick links – thinking about tools that allow graphical SPARQL query authoring…

OpenLink Virtuoso: InteractiveSparqlQueryBuilder (in HTML/CSS/.js). Pictured below; extensive documentation and screenshots linked from their main page.

…an ancestor of which was Damian Steer’s RDFAuthor tool for MacOSX, which could generate Squish (a SPARQL precursor) and query services over the ‘array of hashtables’ SOAP-for-rdf-query non-spec that Libby Miller and I had implementations of. From the RDFAuthor tutorial:

The old Maryland BINPIQ SHOE knowledgebase query applet is the granddaddy of them all. Sadly I don’t have any screenshots, and the applet itself seems to have code-rotted. [...] Ah, but here I find an email I wrote about it 8 years ago(!), which has screenshots:

SemanticSoft from Moldova also have some visual SPARQL UI:

No real conclusion here. I just found myself looking around some of these links, and thought I’d share them. I’m sure there’s a lot more related work out there (eg. NIGHTLIGHT from folk at Southampton Uni), and that the rise of fancy HTML-based UIs and JSON for data access makes for an ever-more interesting environment for zero-install graphical query tools.

One thing I remember about the old Maryland applets: as their representational language became more expressive (moving from binary to n-ary), the graphical query UI became somewhat less intuitive. Now since SPARQL itself adds some concepts not in the underlying target language (ie. RDF doesn’t have named graphs, optionals etc), the ability to make a graphical query UI that exploits the “it’s just an RDF graph with bits labelled as missing” idea (per Guha’s original proposal) perhaps gets a bit strained. In particular, how might named graphs best be represented in visual editors?

Yahoo: RDF and the Monkey

From the Yahoo developer network blog,

Besides the existing support for microformats, we have already shared our plans for supporting other standards for embedding metadata into HTML. Today we are announcing the availability of eRDF metadata for SearchMonkey applications, which will soon be followed by support for RDFa. SearchMonkey applications can make direct use of the eRDF data by choosing the com.yahoo.rdf.erdf data source, while RDFa data will appear under com.yahoo.rdf.rdfa. Nothing changes in the way applications are created: as SearchMonkey applications have already been built on a triple-based model, the same applications can work on both microformat, eRDF or RDFa data.

Very cool. Good news for microformats, good news for RDF. Now to find which spam-trap my SearchMonkey account info got lost in…

“Stuff I’ve been thinking about” (SocialNetworkPortability WebCamp) – my slides

I’m in Cork, mainly for the excellent Social Network Portability event on Sunday, but am also staying through Blogtalk’08, which has been great. I’ve uploaded my slides from my talk (slideshare in Flash, included inline here, or a pdf). I have some rough speaking notes too; maybe I’ll get those online. I have no idea how they relate to whatever actually came out of my mouth during the talk :) Apologies to those without PDF or Flash. I haven’t tried Keynote’s HTML output yet.

Basically, much of what I was getting at in the talk (my thoughts are only just congealing on this) is that the idea of a ‘claim’ is a useful bridge between Semantic Web and Social Networking concerns, and that it helps us understand how technologies fit together. FOAF defines a dictionary of terms for making claims, as do XFN and hCard. RDF/XML, Microformats, RDFa and GRDDL define textual notations for publishing documents that encode claims, and SPARQL gives us a way of asking questions about the claims made in different documents.
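
As a tiny illustration, here is roughly the same claim (“this person knows Alice”) encoded in two of those notations; the names and URLs are invented:

<!-- XFN: the claim is carried by rel values on an ordinary hyperlink -->
<a href="http://example.com/alice/" rel="friend met">Alice</a>

<!-- FOAF via RDFa: a comparable claim using FOAF vocabulary -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/" about="#me" typeof="foaf:Person">
  I know <a rel="foaf:knows" href="http://example.com/alice/#me">Alice</a>.
</div>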

Blood money

It’s in the Daily Mail, so it must be true:

Motorists will be targeted by a new generation of road cameras which work out how many people are in a car by measuring the amount of bodily fluid it contains.

The latest snooping device on the nation’s roads aims to penalise lone drivers who abuse car-sharing lanes, and is part of a Government effort to combat congestion at busy times.

The cameras work by sending an infrared beam through the windscreen of vehicles which detects the unique make-up of blood and water content in human skin.

Coincidentally enough, today’s hackery:

 danbri> add ds danbri http://danbri.org/words/sparqlpress/sparql

jamendo> store added

danbri> add template “FIND BLOOD DONORS” “select ?n ?bt where { ?x <http://xmlns.com/foaf/0.1/nick> ?n  . ?x <http://kota.s12.xrea.com/vocab/uranaibloodtype> ?bt . }”

danbri> FIND BLOOD DONORS

jamendo> “danbri” “A+”

This makes use of the Uranai FOAF extension for recording one’s blood type.

Trying xOperator via Yves’s installation; the above dialog was conducted purely through Jabber IM. Nice to see XMPP+SPARQL explorations gaining traction. I really like the extra value-adding layers added by the xOperator text chat UI, and the fact that you can wire in HTTP-based SPARQL endpoints too. Similar functionality in IRC comes from Bengee’s sparqlbot (see intro doc). I think the latter has a slightly fancier templating system, but I haven’t really explored all that xOperator can do. I did get the ‘blood’ example working with sparqlbot in IRC too; it seems most people don’t have their blood type in their FOAF file. So much for finding handy donors ;)

Blood and XMPP aside, there’s a lot going on here technically. The fact that I can ask Yves’s installation of xOperator to attach to my SparqlPress installation’s database is interesting. At the moment everything in there is public, so ACL is easy. But we could imagine that picture changing. Perhaps I decide that healthcare-related info goes into a special set of named graphs. How could permissioning for this be handled so that Yves’s bot can still provide me with UI? OAuth is probably part of the answer here, although that means bouncing out into http/html UI to some extent at least. And there’s more to say on SPARQL ACL too. Much more, but not right now.

The other technically rich area is these natural language templates. It is historically rare for public SQL endpoints to be made available, even read-only. Some of the reasons for this (local obscure schemas, closed world logic) have gone away with SPARQL, but the biggest reason – the ease of writing very expensive queries – is very much still present. Guha and McCool’s Alpiri/TAP work argued that we needed another, lighter data interface; they proposed ‘GetData‘. The templates system we see in xOperator and sparqlbot (see also julie, wh4 and foafbot) suggests another take on this problem. Rather than expose all of my personal datastore to random ‘SELECT * WHERE { ?p ?s ?o }’ crawlers, these bots can provide a wrapping layer, allowing in only queries that are much more tightly constrained, as well as being more human-oriented. Hmm, I’m suddenly reminded of Metalog; wonder if Massimo is still working on that area.

A nice step here might be if chat-oriented query templates could be shared between xOperator and sparqlbot. Wonder what would be needed to make that happen…

Google Social Graph API, privacy and the public record

I’m digesting some of the reactions to Google’s recently announced Social Graph API. ReadWriteWeb ask whether this is a creeping privacy violation, and danah boyd has a thoughtful post raising concerns about whether the privileged tech elite have any right to experiment in this way with the online lives of those who lack status, knowledge of these obscure technologies, and who may be amongst the more vulnerable users of the social Web.

While I tend to agree with Tim O’Reilly that privacy by obscurity is dead, I’m not of the “privacy is dead, get over it” school of thought. Tim argues,

The counter-argument is that all this data is available anyway, and that by making it more visible, we raise people’s awareness and ultimately their behavior. I’m in the latter camp. It’s a lot like the evolutionary value of pain. Search creates feedback loops that allow us to learn from and modify our behavior. A false sense of security helps bad actors more than tools that make information more visible.

There’s a danger here of technologists seeming to blame the very people we’re causing pain. As danah says, “Think about whistle blowers, women or queer folk in repressive societies, journalists, etc.”. Not everyone knows their DTD from their TCP, or understands anything of how search engines, HTML or hyperlinks work. And many folk have more urgent things to focus on than learning such obscurities, let alone understanding the practical privacy, safety and reputation-related implications of their technology-mediated deeds.

Web technologists have responsibilities to the users of the Web, and while media education and literacy are important, those who are shaping and re-shaping the Web ought to be spending serious time on a daily basis struggling to come up with better ways of allowing humans to act and interact online without other parties snooping. The end of privacy by obscurity should not mean the death of privacy.

Privacy is not dead, and we will not get over it.

But it does need to be understood in the context of the public record. The reason I am enthusiastic about the Google work is that it shines a big bright light on the things people are currently putting into the public record. And it does so in a way that should allow people to build better online environments for those who do want their public actions visible, while providing immediate – and sometimes painful – feedback to those who have over-exposed themselves in the Web, and wish to backpedal.

I hope Google can put a user support mechanism on this. I know from our experience in the FOAF community that, even with small-scale and obscure aggregators, people will find themselves and demand to be “taken down”. While any particular aggregator can remove or hide such data, unless the data is tracked back to its source, it’ll crop up elsewhere in the Web.

I think the argument that FOAF and XFN are particularly special here is a big mistake. Web technologies used correctly (posh – “plain old semantic html” in microformats-speak) already facilitate such techniques. And Google is far from the only search engine in existence. Short of obfuscating all text inside images, personal data from these sites is readily harvestable.

ReadWriteWeb comment:

None the less, apparently the absence of XFN/FOAF data in your social network is no assurance that it won’t be pulled into the new Google API, either. The Google API page says “we currently index the public Web for XHTML Friends Network (XFN), Friend of a Friend (FOAF) markup and other publicly declared connections.” In other words, it’s not opt-in by even publishers – they aren’t required to make their information available in marked-up code.

The Web itself is built from marked-up code, and this is a thing of huge benefit to humanity. Both microformats and the Semantic Web community share the perspective that the Web’s core technologies (HTML, XHTML, XML, URIs) are properly consumed both by machines and by humans, and that any efforts to create documents that are usable only by (certain fortunate) humans is anti-social and discriminatory.

The Web Accessibility movement have worked incredibly hard over many years to encourage Web designers to create well marked up pages, where the meaning of the content is as mechanically evident as possible. The more evident the meaning of a document, the easier it is to repurpose it or present it through alternate means. This goal of device-independent, well marked up Web content is one that unites the accessibility, Mobile Web, Web 2.0, microformat and Semantic Web efforts. Perhaps the most obvious case is for blind and partially sighted users, but good markup can also benefit those who are unable to use a mouse or keyboard. Beyond accessibility, many millions of Web users (many poor, and in poor countries) will have access to the Web only via mobile phones. My former employer W3C has just published a draft document, “Experiences Shared by People with Disabilities and by People Using Mobile Devices”. Last month in Bangalore, W3C held a Workshop on the Mobile Web in Developing Countries (see executive summary).

I read both Tim’s post, and danah’s post, and I agree with large parts of what they’re both saying. But not quite with either of them, so all I can think to do is spell out some of my perhaps previously unarticulated assumptions.

  • There is no huge difference in principle between “normal” HTML Web pages and XFN or FOAF. Textual markup is what the Web is built from.
  • FOAF and XFN take some of the guesswork out of interpreting markup. But other technologies (javascript, perl, XSLT/GRDDL) can also transform vague markup into more machine-friendly markup. FOAF/XFN simply make this process easier, less heuristic and less error-prone.
  • Google was not the first search engine, it is not the only search engine, and it will not be the last search engine. To obsess on Google’s behaviour here is to mistake Google for the Web.
  • Deeds that are on the public record in the Web may come to light months or years later; Google’s opening up of the (already public, but fragmented) Usenet historical record is a good example here.
  • Arguing against good markup practice on the Web (accessible, device independent markup) is something that may hurt underprivileged users (with disabilities, or limited access via mobile, high bandwidth costs etc).
  • Good markup allows content to be automatically summarised and re-presented to suit a variety of means of interaction and navigation (eg. voice browsers, screen readers, small screens, non-mouse navigation etc).
  • Good markup also makes it possible for search engines, crawlers and aggregators to offer richer services.

The difference between Google crawling FOAF/XFN from LiveJournal, versus extracting similar information via custom scripts from MySpace, is interesting and important solely to geeks. Mainstream users have no idea of such distinctions. When LiveJournal originally launched their FOAF files in 2004, the rule they followed was a pretty sensible one: if the information was there in the HTML pages, they’d also expose it in FOAF.

We need to be careful of taking a ruthless “you can’t make an omelette without breaking eggs” line here. Whatever we do, people will suffer. If the Web is made inaccessible, with information hidden inside image files or otherwise obfuscated, we exclude a huge constituency of users. If we shine a light on the public record, as Google have done, we’ll embarrass, expose and even potentially risk harm to the people described by these interlinked documents. And if we stick our head in the sand and pretend that these folk aren’t exposed, I predict this will come back to bite us in the butt in a few months or years, since all that data is out there, being crawled, indexed and analysed by parties other than Google. Parties with less to lose, and more to gain.

So what to do? I think several activities need to happen in parallel:

  • Best practice codes for those who expose, and those who aggregate, social Web data
  • Improved media literacy education for those who are unwittingly exposing too much of themselves online
  • Technology development around decentralised, non-public record communication and community tools (eg. via Jabber/XMPP)

Any search engine at all, today, is capable of supporting the following bit of mischief:

Take as a starting point a collection of user profiles on a public site. Extract all the usernames. Find the ones that appear on the Web fewer than, say, 10,000 times, and that also appear on other sites. Assume these are unique userIDs and crawl the pages they appear in, do some heuristic name matching, … and you’ll have a pile of smushed identities, perhaps linking professional and dating sites, or drunken college photos to a respectable new life. No FOAF needed.

The answer I think isn’t to beat up on the aggregators, it’s to improve the Web experience such that people can have real privacy when they need it, rather than the misleading illusion of privacy. This isn’t going to be easy, but I don’t see a credible alternative.

Flickr (Yahoo) upcoming support for OpenID

According to Simon Willison, Flickr look set to support OpenID by allowing your photostream URL (eg. for me, http://www.flickr.com/photos/danbri/) to serve as an OpenID, ie. something you can type wherever you see “login using OpenID” and be bounced to Flickr/Yahoo to provide credentials instead of remembering yet another password. This is rather good news.

For the portability-minded, it’s worth remembering that OpenID lets you put markup in your own Web page to devolve to such services. So my main OpenID is “danbri.org”, which is a document I control, on a domain that I own. In the HTML header I have the following markup:



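<!-- FOAF autodiscovery, plus OpenID 1.x delegation: openid.server points at the provider’s endpoint, openid.delegate names the identity that provider should vouch for -->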
<link rel="meta" type="application/rdf+xml" title="FOAF" href="http://danbri.org/foaf.rdf" />
<link rel="openid.server" href="http://www.livejournal.com/openid/server.bml" />
<link rel="openid.delegate" href="http://danbri.livejournal.com/" />

…which is enough to defer the details of being an OpenID provider to LiveJournal (thanks, LiveJournal!). Flickr are about to join the group of sites you can use in this way, it seems.

As an aside, this means that the security of our own websites becomes yet more important. Last summer, DreamHost (my webhosting provider) were compromised, and my own homepage was briefly decorated with viagra spam. Fortunately they didn’t seem to touch the OpenID markup, but you can see the risk. That’s the price of portability here. As Simon points out, we’ll probably all have several active OpenIDs, and there’s no need to host your own, just as there’s no need for people who want to publish online to buy and host their own domains or HTML sites.

The Flickr implementation, coupled with their existing API, means we could all offer things like “log into my personal site for family (or friends)” and defer buddylist – and FOAF – management to the well-designed Flickr site, assuming all your friends or family have Flickr accounts. Implementing this in a way that works with other providers (eg. LJ) is left as an exercise for the reader ;)

Imagemap magic

I’ve always found HTML imagemaps to be a curiously neglected technology. They seem somehow to evoke the Web of the mid-to-late 90s, to be terribly ‘1.0’. But there’s glue in the old horse yet…

A client-side HTML imagemap lets you associate links (and via Javascript, behaviour) with regions of an image. As such, they’re a form of image metadata that can have applications including image search, Web accessibility and social networking. They’re also a poor cousin to the Web’s new vector image format, SVG. This morning I dug out some old work on this (much of which is from Max, Libby and Jim, all of whom btw are currently working at Joost; as am I, albeit part-time).
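
For reference, the markup a client-side imagemap boils down to is pleasantly simple; a minimal sketch (filenames and coordinates invented for illustration):

<img src="danbri.jpg" alt="photo of danbri" usemap="#facemap" />
<map name="facemap">
  <!-- coords are the x1,y1,x2,y2 pixel corners of the rectangle; each area gets its own link and alt text -->
  <area shape="rect" coords="40,30,120,140" href="http://danbri.org/" alt="danbri’s face" />
</map>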

The first hurdle you hit when you want to play with HTML imagemaps is finding an editor that produces them. The fact that my blog post asking for MacOSX HTML imagemap editors is now the top Google hit for “MacOSX HTML imagemap” pretty much says it all. Eventually I found (and paid for) one called YokMak that seems OK.

So the first experiment here was to take a picture (of me) and make a simple HTML imagemap.

danbri being imagemapped

As a step towards treating this as re-usable metadata, here’s imagemap2svg.xslt from Max back in 2002. The results of running it with xsltproc are online: _output.svg (you need an SVG-happy browser). Firefox, Safari and Opera seem more or less happy with it (ie. they show the selected area against a pink background). This shows that imagemap data can be freed from the clutches of HTML, and repurposed. You can do similar things server-side using Apache Batik, a Java SVG toolkit. There are still a few 2002 examples floating around, showing how bits of the image can be described in RDF that includes imagemap info, and then manipulated using SVG tools driven from metadata.

Once we have this ability to pick out a region of an image (eg. photo) and tag it, it opens up a few fun directions. In the FOAF scene a few years ago, we had fun using RDF to tag image region parts with information about the things they depicted. But we didn’t really get into questions of surface syntax, ie. how to make rich claims about the image area directly within the HTML markup. These days, some combination of RDFa or microformats would probably be the thing to use (or perhaps GRDDL). I’ve sent mail to the RDFa group looking for help with this (see that message for various further related-work links too).

Specifically, I’d love to have some clean HTML markup that said, not just “this area of the photo is associated with the URI http://danbri.org/”, but “this area is the Person whose openid is danbri.org, … and this area depicts the thing that is the primary topic of http://en.wikipedia.org/wiki/Eiffel_Tower”. If we had this, I think we’d have some nice tools for finding images, for explaining images to people who can’t see them, and for connecting people and social networks through codepiction.
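
I don’t yet know what the right markup is (hence the mail to the RDFa group), but as a strawman, the flavour I’m after is something like the sketch below. Note its limits: foaf:depicts is a real FOAF property, but this version can only talk about the whole image rather than the mapped region, and it names the Wikipedia page rather than the page’s primary topic; closing both gaps is exactly the open question.

<map name="landmarks" xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <!-- strawman only: no agreed way yet to give the rectangular region its own identity,
       or to say “the primary topic of” the linked page -->
  <area shape="rect" coords="10,20,200,300"
        about="photo.jpg" rel="foaf:depicts"
        href="http://en.wikipedia.org/wiki/Eiffel_Tower"
        alt="the Eiffel Tower" />
</map>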

Codepiction