Rick Jelliffe on XML Schema

From the TAG list:

XML Schemas is like using a Swiss Army knife to cook with. Most Asian kitchens get by with a handful of simple tools: chopsticks, hatchet, a good knife, perhaps even a spoon. But the logic of the XSD WG is “Oh, the French need to make quenelles, we must have a quenelling spoon as a grave matter of Internationalization because it is not our business to judge what people need… as long as it is more stuff.” So XSD 1.1 welds another Swiss Army knife onto the existing one, so that no kitchen should suffer without a quenelling spoon.

See also earlier comments on the Schema Experience Workshop from W3C.

So tool-makers blame users for generating non-standard schemas, and users blame the spec for being too difficult to know whether their schemas are standard or not, and spec makers blame tool makers for not implementing the spec properly. Who will free us from this cycle of sin and death?

[...] The only way that XML Schemas can be refactored is with a different core XML Schemas working group. My current expectation is that a lot of nothing will happen until XQuery/XSLT2 becomes seen as a more central technology than XML Schemas; the goal will then be how to support XQuery most minimally.

XSD doesn’t trouble me as much as it troubles Rick, but I have long sympathised with the approach he advocates with Schematron. The RDF equivalent of this is the approach Libby and I called “Schemarama”: expressing constraints against RDF instance data using queries. See the original 2001 demo using SquishQL, and a later reworking by Alistair Miles using SPARQL (currently offline?). Recent work from the OWL experts at Clark & Parsia (blog post; another blog post) is heading in the same direction. I wonder whether Rick’s observation about XML applies to RDF too, and whether at some point SPARQL querying facilities will be so ubiquitous in RDF tools that it becomes second nature to apply them to data checking tasks too…?

Update: see also SpinRDF from Holger & co. at TopQuadrant.
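To make the query-based approach concrete, here is a minimal sketch (an illustrative constraint of my own, not one from the original demos): a SPARQL query that treats “every foaf:Person should have a foaf:name” as a check, returning the offending resources rather than a yes/no answer.

    # Schemarama-style check (illustrative): find each foaf:Person that
    # lacks a foaf:name, using the SPARQL 1.0 OPTIONAL/!bound idiom.
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person
    WHERE {
      ?person a foaf:Person .
      OPTIONAL { ?person foaf:name ?name }
      FILTER (!bound(?name))
    }

An empty result set means the data passes; anything returned is a violation, so the query doubles as an error report.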

OpenSocial schema extraction: via Javascript to RDF/OWL

OpenSocial’s API reference describes a number of classes (‘Person’, ‘Name’, ‘Email’, ‘Phone’, ‘Url’, ‘Organization’, ‘Address’, ‘Message’, ‘Activity’, ‘MediaItem’, …), each of which has various properties whose values are either strings, references to instances of other classes, or enumerations. I’d like to make them usable beyond the confines of OpenSocial, so I’m making an RDF/OWL version. OpenSocial’s schema is an attempt to provide an overarching model for much of present-day mainstream ‘social networking’ functionality, including dating, jobs etc. Such a broad effort is inevitably somewhat open-ended, and so may benefit from being linked to data from other complementary sources.

With a bit of help from the shindig-dev list, #opensocial IRC, and Kevin Brown and Kevin Marks, I’ve tracked down the source files used to represent OpenSocial’s data schemas: they’re in the opensocial-resources SVN repository on code.google.com. There is also a downstream copy in the Apache Shindig SVN repo (I’m not very clear on how versioning and evolution are managed between the two). They’re Javascript files, structured so that documentation can be generated via javadoc. The Shindig-PHP schema diagram I posted recently is a representation of this schema.

So – my RDF version. At the moment it is merely a list of classes and their properties (expressed via rdfs:domain), written using RDFa/HTML. I don’t yet define rdfs:range for any of these, nor handle the enumerated values (opensocial.Enum.Smoker, opensocial.Enum.Drinker, opensocial.Enum.Gender, opensocial.Enum.LookingFor, opensocial.Enum.Presence) that are defined in enum.js.
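As a rough sketch of what that looks like, here is the shape of the markup (the example.org namespace is a placeholder, and the generated file may phrase things differently):

    <!-- Illustrative RDFa 1.0: one class and one property with an
         rdfs:domain; the example.org URIs are placeholders. -->
    <div xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
      <div about="http://example.org/opensocial#Person" typeof="owl:Class">
        <span property="rdfs:label">Person</span>
      </div>
      <div about="http://example.org/opensocial#aboutMe" typeof="rdf:Property">
        <span property="rdfs:label">aboutMe</span> has domain
        <span rel="rdfs:domain"
              resource="http://example.org/opensocial#Person">Person</span>
      </div>
    </div>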

The code is all in the FOAF SVN, and accessible via “svn co http://svn.foaf-project.org/foaftown/opensocial/vocab/”. I’ve also taken the liberty of including a copy of the OpenSocial *.js files, and Mozilla’s Rhino Javascript interpreter js.jar in there too, for self-containedness.

The code in schemarama.js will simply generate an RDFa/XHTML page describing the schema. This can be checked using the W3C validator, or converted to RDF/XML with the pyRDFa service at W3C.
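If you want to run it yourself, the invocation is roughly as follows (I’m assuming Rhino’s standard shell entry point here; the output filename is arbitrary):

    # Rhino's js.jar runs a script directly from the command line.
    java -jar js.jar schemarama.js > opensocial-vocab.html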

I’ve tested the output using the OwlSight/Pellet service from Clark & Parsia, and with Protege 4. It’s basic, but seems OK and gives a foundation to build from. Here’s a screenshot of the output loaded into Protege (which btw finds 10 classes and 99 properties).

An example view from Protege, showing the class browser in one panel, and a few properties of Person in another.

OK so why might this be interesting?

  • Using OpenSocial-derived vocabulary with OpenSocial-exported data in other contexts
    • databases (queryable via SPARQL; see the sketch after this list)
    • mixed with FOAF
    • mixed with Microformats
    • published directly in RDFa/HTML
  • Mapping OpenSocial terms to other contact and social network schemas
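As a sketch of the first of these (hypothetical: the os: namespace is a placeholder for wherever the vocabulary ends up, and os:smoker stands in for one of the enumerated Person fields), OpenSocial-derived data could be queried alongside FOAF like this:

    # Hypothetical mixed-vocabulary query; the os: URI is a placeholder,
    # foaf: is the real FOAF namespace.
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX os:   <http://example.org/opensocial#>
    SELECT ?name ?smokes
    WHERE {
      ?p a os:Person ;
         foaf:name ?name .
      OPTIONAL { ?p os:smoker ?smokes }
    }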

This suggests some goals for continued exploration:

It should be possible to use “OpenSocial markup” in an ordinary homepage or blog (HTML or XHTML), drawing on any of the descriptive concepts they define, using RDFa’s markup notation. As Mark Birbeck pointed out recently, RDFa is an empty vessel – it does not define any descriptive vocabulary. Instead, the RDF toolset offers an environment in which vocabulary from multiple independent sources can be mixed and merged quite freely. The hard work of the OpenSocial team in analysing social network schemas and finding commonalities, and of the Microformats scene in defining simple building-block vocabularies, can hopefully be combined within a single environment.
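To make that concrete, here is a sketch of what such mixed markup might look like in a homepage (the os: namespace is, again, a placeholder):

    <!-- Sketch of "OpenSocial markup" in an ordinary XHTML page, mixing a
         placeholder os: vocabulary with FOAF via RDFa 1.0. -->
    <p xmlns:foaf="http://xmlns.com/foaf/0.1/"
       xmlns:os="http://example.org/opensocial#"
       about="#me" typeof="os:Person">
      My name is <span property="foaf:name">Dan</span>, and
      <span property="os:aboutMe">I tinker with schemas.</span>
    </p>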