Drupal is now OAuth-enabled

via Sumit Kataria on the oauth list:

I am very happy to announce that Drupal’s OAuth module is now ready to use. Right now it just acts as a server, because we don’t need client support at this time; client implementation will be done in time for the Drupal 7 release, as the Services API in Drupal 7 will need it. Endpoints for Drupal are:

REQUEST URL : http://www.example.com/?q=oauth/request
AUTH URL : http://www.example.com/?q=oauth/auth
ACCESS : http://www.example.com/?q=oauth/access

At this time the Services API is necessary to use OAuth (it is also the only module that is going to use OAuth, so not a big deal). This module also provides a test browser to produce OAuth tokens (the whole Drupal way, using a multi-page form). Right now only the PLAINTEXT signature method is supported, but support for other methods will be added soon.
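To make the PLAINTEXT flow concrete, here is a minimal stdlib-only sketch of building a signed request-token call against the endpoints above. The consumer key and secret are hypothetical placeholders; a real client would normally use an OAuth library rather than hand-rolling headers.

```python
# Sketch: OAuth 1.0 PLAINTEXT signing against the Drupal request endpoint.
# Key/secret values are hypothetical; the endpoint URL is from the post.
import time
import urllib.parse
import urllib.request

REQUEST_URL = "http://www.example.com/?q=oauth/request"

def plaintext_signature(consumer_secret, token_secret=""):
    # PLAINTEXT is just the two secrets, percent-encoded and joined by '&'.
    return "{}&{}".format(
        urllib.parse.quote(consumer_secret, safe=""),
        urllib.parse.quote(token_secret, safe=""),
    )

def oauth_header(consumer_key, consumer_secret):
    # Assemble the OAuth Authorization header parameters.
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_signature_method": "PLAINTEXT",
        "oauth_signature": plaintext_signature(consumer_secret),
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": "hypothetical-nonce",
        "oauth_version": "1.0",
    }
    pairs = ", ".join(
        '{}="{}"'.format(k, urllib.parse.quote(v, safe=""))
        for k, v in sorted(params.items())
    )
    return "OAuth " + pairs

# To actually fetch a request token (not run here):
# req = urllib.request.Request(REQUEST_URL)
# req.add_header("Authorization", oauth_header("my_key", "my_secret"))
```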

I’m slowly learning my way around Drupal, so this is rather good encouragement to learn faster…

See the announcement for more details and links.

Inevitable Nipple Analogy

A genetic theory of homosexuality.

The article reports on recent work (pdf) addressing the ‘if homosexuality is genetic, why hasn’t it died out?’ debate, which suggests that the ‘gene for male homosexuality persists because it promotes—and is passed down through—high rates of procreation among gay men’s mothers, sisters, and aunts’.

In other words, gayness in men can be as natural as the male nipple, even if both are initially puzzling when thought of in evolutionary terms. OK, I’m stretching things slightly, but I can’t help but wonder whether the nipple analogy might be a good basis for informal arguments for a bit more tolerance:

Let He Who is Without Nipples Cast the First Stone?

Or maybe not.

More on the Great Nipple Question from straightdope.com; a moment of science; and the evolution-101 blog.

Beautiful plumage: Topic Maps Not Dead Yet

Echoing recent discussion of Semantic Web “Killer Apps”, an “are Topic Maps dead?” thread on the topicmaps mailing list. Signs of life offered include www.fuzzzy.com (‘Collaborative, semantic and democratic social bookmarking’, Topic Maps meet social networking; featured tag: ‘topic maps’) and a longer list from Are Gulbrandsen, who suggests a predictable hype-cycle dropoff is occurring, as well as a migration of discussions from email into the blog world. For which, see the topicmaps planet aggregator, and through which I indirectly find Steve Pepper’s blog and an interesting post on how TMs relate to RDF, OWL and the Semantic Web (though I’d have hoped for some mention of SKOS too).

Are Gulbrandsen also cites NZETC (the New Zealand Electronic Text Centre), winner of the Topic Maps Application of the Year award at the Topic Maps 2008 conference; see Conal Tuohy’s presentation on Topic Maps for Cultural Heritage Collections (slides in PDF). On NZETC’s work: “It may not look that interesting to many people used to flashy web 2.0 sites, but to anybody who has been looking at library systems it’s a paradigm shift”.

Other Topic Map work highlighted: RAMline (Royal Academy of Music rewriting musical history). “A long-term research project into the mapping of three axes of musical time: the historical, the functional, and musical time itself.”; David Weinberger blogged about this work recently. Also MIPS / Institute for Bioinformatics and Systems Biology who “attempt to explain the complexity of life with Topic Maps” (see presentation from Volker Stümpflen (PDF); also a TMRA’07 talk).

Finally, pointers to opensource developer tools: Ruby Topic Maps and Wandora (Java/GPL), an extraction/mapping and publishing system which amongst other things can import RDF.

Topic Maps are clearly not dead, and the Web’s a richer environment because of this. They may not have set the world on fire, but people are finding value in the specs and tools, while also exploring interop with RDF and other related technologies. My hunch is that we’ll continue to see a slow drift towards the use of RDF/OWL plus SKOS for apps that might otherwise have been addressed using Topic Maps, and a continued pragmatism from tool and app developers who see all these things as ways to solve problems, rather than as ends in themselves.

Just as with RDFa, GRDDL and Microformats, it is good and healthy for the Web community to be exploring multiple similar strands of activity. We’re smart enough to be able to flow data across these divides when needed, and having only a single technology stack is, I think, intellectually limiting, socially impractical, and technologically short-sighted.

RDFa Basics video from Manu Sporny

Via Dave Beckett in #swig IRC, Manu Sporny’s handy 10-minute overview of RDFa Basics (see also other versions, source materials).

Here’s a screen grab of the full FOAF example used. Note that the WG renamed ‘instanceof’ to ‘typeof’ recently.

FOAF example expressed in RDFa

For the video-averse, a full transcript is available. Here’s the full XHTML markup example from the above image:

<body xmlns:foaf="http://xmlns.com/foaf/0.1/">
 <span about="#jane" typeof="foaf:Person"
       property="foaf:name">Jane McJanerson</span>
 <span about="#mac" typeof="foaf:Person"
       property="foaf:name">Mac McJanerson</span>
 <span about="#jane" rel="foaf:knows"
       resource="#mac">Jane is friends with Mac.</span>
</body>
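For comparison, here are the RDF triples I read that markup as encoding, written out in Turtle (the fragment identifiers resolve against whatever page hosts the markup):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#jane> a foaf:Person ;
        foaf:name "Jane McJanerson" ;
        foaf:knows <#mac> .

<#mac>  a foaf:Person ;
        foaf:name "Mac McJanerson" .
```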

Google Data APIs (and partial YouTube) supporting OAuth

Building on last month’s announcement of OAuth for the Google Contacts API, this from Wei on the oauth list:

Just want to let you know that we officially support OAuth for all Google Data APIs.

See blog post:

You’ll now be able to use standard OAuth libraries to write code that authenticates users to any of the Google Data APIs, such as Google Calendar Data API, Blogger Data API, Picasa Web Albums Data API, or Google Contacts Data API. This should reduce the amount of duplicate code that you need to write, and make it easier for you to write applications and tools that work with a variety of services from multiple providers. [...]

There’s also a footnote, “* OAuth also currently works for YouTube accounts that are linked to a Google Account when using the YouTube Data API.”

See the documentation for more details.

On the YouTube front, I have no idea what % of their accounts are linked to Google; lots, I guess. Some interesting parts of the YouTube API: retrieve user profiles, access/edit contacts, find videos uploaded by a particular user or favourited by them, plus of course per-video metadata (categories, keywords, tags, etc.). There’s a lot you could do with this; in particular, it should be possible to find out more about a user by looking at the metadata for the videos they favourite.

Evidence-based profiles are often better than those that are merely asserted, without being grounded in real activity. The list of people I actively exchange mail or IM with is more interesting to me than the list of people I’ve added on Facebook or Orkut; the same applies with profiles versus tag-harvesting. This is why the combination of last.fm’s knowledge of my music listening behaviour with the BBC’s categorisation of MusicBrainz artist IDs is more interesting than asking me to type my ‘favourite band’ into a box. Finding out which bands I’ve friended on MySpace would also be a nice piece of evidence to throw into that mix (and possible, since MusicBrainz also notes MySpace URIs).

So what do these profiles look like? The YouTube ‘retrieve a profile‘ API documentation has an example. It’s Atom-encoded, and beyond the video stuff mentioned above has fields like:

  <yt:age>33</yt:age>
  <yt:username>andyland74</yt:username>
  <yt:books>Catch-22</yt:books>
  <yt:gender>m</yt:gender>
  <yt:company>Google</yt:company>
  <yt:hobbies>Testing YouTube APIs</yt:hobbies>
  <yt:location>US</yt:location>
  <yt:movies>Aqua Teen Hungerforce</yt:movies>
  <yt:music>Elliott Smith</yt:music>
  <yt:occupation>Technical Writer</yt:occupation>
  <yt:school>University of North Carolina</yt:school>
  <media:thumbnail url='http://i.ytimg.com/vi/YFbSxcdOL-w/default.jpg'/>
  <yt:statistics viewCount='9' videoWatchCount='21' subscriberCount='1'
    lastWebAccess='2008-02-25T16:03:38.000-08:00'/>
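Pulling these fields out of the Atom entry is straightforward with the standard library. In this sketch the namespace URIs are my assumption of the GData ones; check the YouTube API documentation before relying on them.

```python
# Minimal sketch: extracting yt: profile fields from an Atom entry.
# Namespace URIs below are assumed, not taken from the post.
import xml.etree.ElementTree as ET

YT = "http://gdata.youtube.com/schemas/2007"
ATOM = "http://www.w3.org/2005/Atom"

entry_xml = """
<entry xmlns="{atom}" xmlns:yt="{yt}">
  <yt:age>33</yt:age>
  <yt:username>andyland74</yt:username>
  <yt:music>Elliott Smith</yt:music>
</entry>
""".format(atom=ATOM, yt=YT)

entry = ET.fromstring(entry_xml)

def yt_field(entry, name):
    # Look up a yt:-namespaced child element, returning its text or None.
    el = entry.find("{%s}%s" % (YT, name))
    return el.text if el is not None else None

print(yt_field(entry, "username"))  # andyland74
print(yt_field(entry, "age"))       # 33
```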

Not a million miles away from the OpenSocial schema I was looking at yesterday, btw.

I haven’t yet found where it says what I can and can’t do with this information…

OpenSocial schema extraction: via Javascript to RDF/OWL

OpenSocial’s API reference describes a number of classes (‘Person’, ‘Name’, ‘Email’, ‘Phone’, ‘Url’, ‘Organization’, ‘Address’, ‘Message’, ‘Activity’, ‘MediaItem’, …), each of which has various properties whose values are either strings, references to instances of other classes, or enumerations. I’d like to make them usable beyond the confines of OpenSocial, so I’m making an RDF/OWL version. OpenSocial’s schema is an attempt to provide an overarching model for much of present-day mainstream ‘social networking’ functionality, including dating, jobs etc. Such a broad effort is inevitably somewhat open-ended, and so may benefit from being linked to data from other complementary sources.

With a bit of help from the shindig-dev list, #opensocial IRC, and Kevin Brown and Kevin Marks, I’ve tracked down the source files used to represent OpenSocial’s data schemas: they’re in the opensocial-resources SVN repository on code.google.com. There is also a downstream copy in the Apache Shindig SVN repo (I’m not very clear on how versioning and evolution are managed between the two). They’re Javascript files, structured so that documentation can be generated via javadoc. The Shindig-PHP schema diagram I posted recently is a representation of this schema.

So – my RDF version. At the moment it is merely a list of classes and their properties (expressed via rdfs:domain), written using RDFa/HTML. I don’t yet define rdfs:range for any of these, nor handle the enumerated values (opensocial.Enum.Smoker, opensocial.Enum.Drinker, opensocial.Enum.Gender, opensocial.Enum.LookingFor, opensocial.Enum.Presence) that are defined in enum.js.

The code is all in the FOAF SVN, and accessible via “svn co http://svn.foaf-project.org/foaftown/opensocial/vocab/”. I’ve also taken the liberty of including a copy of the OpenSocial *.js files, and Mozilla’s Rhino Javascript interpreter js.jar in there too, for self-containedness.

The code in schemarama.js will simply generate an RDFa/XHTML page describing the schema. This can be checked using the W3C validator, or converted to RDF/XML with the pyRDFa service at W3C.
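The approach is simple enough to sketch in a few lines of Python: walk a class-to-properties listing and emit RDFa spans that declare each property’s rdfs:domain. The class and property names below are a tiny hypothetical subset, and the namespace URI is a placeholder, not the one the FOAF SVN code actually uses.

```python
# Rough Python analogue of what schemarama.js does: schema dict in,
# RDFa/XHTML fragment out. Names and namespace are illustrative only.
SCHEMA = {
    "Person": ["name", "aboutMe"],
    "Address": ["streetAddress", "locality"],
}

BASE = "http://example.org/opensocial#"  # hypothetical namespace

def schema_to_rdfa(schema):
    lines = ['<div xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">']
    for cls, props in sorted(schema.items()):
        # Declare the class itself.
        lines.append('  <div about="%s%s" typeof="rdfs:Class">' % (BASE, cls))
        for prop in props:
            # One triple per property: <prop> rdfs:domain <class>.
            lines.append(
                '    <span about="%s%s" rel="rdfs:domain" resource="%s%s">%s</span>'
                % (BASE, prop, BASE, cls, prop))
        lines.append('  </div>')
    lines.append('</div>')
    return "\n".join(lines)

print(schema_to_rdfa(SCHEMA))
```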

I’ve tested the output using the OwlSight/pellet service from Clark & Parsia, and with Protege 4. It’s basic but seems OK and a foundation to build from. Here’s a screenshot of the output loaded into Protege (which btw finds 10 classes and 99 properties).

An example view from protege, showing the class browser in one panel, and a few properties of Person in another.

OK so why might this be interesting?

  • Using OpenSocial-derived vocabulary and OpenSocial-exported data in other contexts
    • databases (queryable via SPARQL)
    • mixed with FOAF
    • mixed with Microformats
    • published directly in RDFa/HTML
  • Mapping OpenSocial terms to other contact and social network schemas
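The SPARQL bullet above might look something like this in practice: a query over data that mixes FOAF with an OpenSocial-derived term. The os: namespace and the os:status property are hypothetical placeholders, since the vocabulary URIs aren’t settled yet.

```sparql
# Hypothetical query over OpenSocial-derived data mixed with FOAF.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX os:   <http://example.org/opensocial#>

SELECT ?name ?status WHERE {
  ?p a foaf:Person ;
     foaf:name ?name ;
     os:status ?status .
}
```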

This suggests some goals for continued exploration:

It should be possible to use “OpenSocial markup” in an ordinary homepage or blog (HTML or XHTML), drawing on any of the descriptive concepts they define, through using RDFa’s markup notation. As Mark Birbeck pointed out recently, RDFa is an empty vessel – it does not define any descriptive vocabulary. Instead, the RDF toolset offers an environment in which vocabulary from multiple independent sources can be mixed and merged quite freely. The hard work of the OpenSocial team in analysing social network schemas and finding commonalities, or of the Microformats scene in defining simple building-block vocabularies … these can hopefully be combined within a single environment.
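As a sketch of that mixing, a homepage snippet might combine a FOAF property with an OpenSocial-derived one in the same RDFa markup. The os: namespace here is again a hypothetical placeholder, not a published vocabulary:

```html
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     xmlns:os="http://example.org/opensocial#">
  <span about="#me" typeof="foaf:Person"
        property="foaf:name">Dan</span>
  <span about="#me" property="os:jobInterests">semantic web hacking</span>
</div>
```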

vCard Format Specification: draft-ietf-vcarddav-vcardrev-02

A new work-in-progress vCard spec. See June 25th draft, and emailed changelog. Excerpted here:

o Removed useless text in IMPP description.
o Added CalDAV-SCHED example to CALADRURI.
o Removed CAPURI property.
o Dashes in dates and colons in times are now mandatory.
o Allow for dates such as 2008 and 2008-05 and times such as 07 and 07:54.
o Removed inline vCard value.
o Made AGENT only accept URI references instead of inline vCards.
o Added the MEMBER property.
o Renamed the UID parameter to PID.
o Changed the value type of the PID parameter to “a small integer.”
o Changed the presence of UID and PID when synchronization is to be used from MUST to SHOULD.
o Added the RELATED (Section 7.6.7) property.
o Fixed many ABNF typos (issue #252).
o Changed formatting of ABNF comments to make them easier to read (issue #226).
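The date/time changes are easy to capture as patterns: dashes and colons mandatory, truncated values like “2008”, “2008-05”, “07” and “07:54” allowed. The regexes below are my reading of the changelog, not the draft’s actual ABNF.

```python
# Sketch of the date/time shapes implied by the changelog above.
# These patterns approximate the rules; consult the draft's ABNF for detail.
import re

DATE = re.compile(r"^\d{4}(-\d{2}(-\d{2})?)?$")  # 2008, 2008-05, 2008-05-12
TIME = re.compile(r"^\d{2}(:\d{2}(:\d{2})?)?$")  # 07, 07:54, 07:54:00

for value in ("2008", "2008-05", "2008-05-12", "20080512"):
    print(value, bool(DATE.match(value)))
```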

From the Introduction:

This draft contains much of the same text as RFCs 2425 and 2426, which may not be correct. Those two RFCs have been merged, and the structure of this draft is what’s new. Some vCard-specific suggestions have been added, but for the most part this is still very open. But we’d like to get feedback, mostly on the structure, so that it may be fixed.