A quick link roundup:
“The following provides some guidelines for the user interface design of becoming an OAuth service provider”
Detailed notes on UI issues, with screenshots and links to related work (opensocial etc.).
Myspace’s OAuth Testing tool:
The MySpace OAuth tool creates examples to show external developers the correct format for constructing HTTP requests signed according to the OAuth specification.
… to help developers cure their OAuth woes. You can use the Playground to help debug problems, check your own implementation, or experiment with the Google Data APIs.
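Those correctly-formatted, signed requests are mostly mechanical once you see the pieces. Here's a minimal sketch of OAuth 1.0's HMAC-SHA1 signing in Python (stdlib only; it follows the spec's base-string recipe and isn't tied to MySpace or Google specifically):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def pct(s):
    # OAuth 1.0 uses strict RFC 3986 percent-encoding (note: '/' is encoded too)
    return quote(s, safe="-._~")

def signature_base_string(method, url, params):
    # Encode, sort, and join the request parameters, then glue
    # METHOD & URL & PARAMS together, each part percent-encoded
    pairs = sorted((pct(k), pct(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), pct(url), pct(normalized)])

def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is consumer secret & token secret, each percent-encoded
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Feeding in the example parameter set from the OAuth Core 1.0 spec's appendix reproduces its published signature, which makes a snippet like this handy for cross-checking an implementation against the tools above.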
If anyone figures out how to post files to Blogger via their AtomPub/OAuth API, please post a writeup! We should be able to use it to post RDFa/FOAF etc hopefully…
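I don't have that recipe either, but the Atom side is straightforward to sketch. A hypothetical example (the payload and vocabulary choices are mine, not Blogger's documented behaviour): build an Atom entry whose inline XHTML content carries RDFa attributes, ready to POST.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
XHTML = "http://www.w3.org/1999/xhtml"
ET.register_namespace("", ATOM)
ET.register_namespace("xhtml", XHTML)

def build_entry(title, xhtml_div):
    """Atom entry with inline XHTML content; RDFa attributes ride along."""
    entry = ET.Element(f"{{{ATOM}}}entry")
    ET.SubElement(entry, f"{{{ATOM}}}title").text = title
    content = ET.SubElement(entry, f"{{{ATOM}}}content", type="xhtml")
    content.append(xhtml_div)
    return ET.tostring(entry, encoding="unicode")

# Hypothetical payload: a FOAF name marked up with RDFa inside the post body
div = ET.Element(f"{{{XHTML}}}div")
span = ET.SubElement(div, f"{{{XHTML}}}span",
                     {"property": "http://xmlns.com/foaf/0.1/name"})
span.text = "Dan Brickley"
print(build_entry("RDFa test post", div))
```

The entry would then be POSTed as application/atom+xml (with the request OAuth-signed) to the blog's posts collection; whether the RDFa attributes survive the round trip is exactly the open question.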
Finally, what looks like an excellent set of introductory posts on OAuth: a Beginner’s Guide to OAuth from Eran Hammer-Lahav.
Dries Buytaert on ‘Drupal, the Semantic Web, and search’.
Discusses RDFa, SearchMonkey and more. Great stuff! Excerpt:
This kind of technology is not limited to global search. On a social networking site built with Drupal, it opens up the possibility to do all sorts of deep social searches – searching by types and levels of relationships while simultaneously filtering by other criteria. I was talking with David Peterson the other day about this, and if Drupal core supported FOAF and SIOC out of the box, you could search within your network of friends or colleagues. This would be a fundamentally new way to take advantage of your network or significantly increase the relevance of certain searches.
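The "search within your network" idea is simple enough to sketch without any Drupal machinery: treat foaf:knows as a graph, walk it to a bounded depth, and filter by other criteria as you go. Everything below (names, interests) is invented toy data; a real deployment would run this as a SPARQL or database query over harvested FOAF/SIOC.

```python
from collections import deque

# Toy foaf:knows graph: person -> set of people they know (invented data)
knows = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": {"dave", "erin"},
    "dave": set(),
    "erin": set(),
}
interests = {  # stand-in for any other per-person metadata to filter on
    "alice": "rdf", "bob": "drupal", "carol": "drupal",
    "dave": "rdf", "erin": "drupal",
}

def network_search(start, max_depth, predicate):
    """People within max_depth foaf:knows hops of start who match predicate."""
    seen, hits = {start}, []
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        if person != start and predicate(person):
            hits.append(person)
        if depth < max_depth:
            for friend in knows.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, depth + 1))
    return sorted(hits)

# Friends-of-friends of alice who are into Drupal:
print(network_search("alice", 2, lambda p: interests[p] == "drupal"))
```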
Meanwhile, PHP Shindig (Apache’s OpenSocial widget container) has now been integrated into Drupal. Currently this targets Drupal 5, but work is underway to get it running on 6.x too. See the docs (this link has some issues) and the announcement/discussion from shindig-dev. Also great news…
Maybe this is a record for delayed blogging. Nine and a half years late, here’s a writeup I found (on a corpsed hard-drive) of an image metadata workshop held by NISO, in Washington. I wrote it up for the JISC JIDI project at ILRT, who funded my trip. I’m sure they won’t mind it being shared here. Eric Miller and Paul Miller were at the workshop too; I remember working on the old Dublin Core RDF spec with them. NISO’s Workshop Report is also available from their site.
In April 1999, NISO organised a two-day invitational meeting whose aim was to gather requirements for technical metadata for digital images. The meeting addressed architectural issues and began the work of defining and categorising actual data elements for representing technical information about images. A report has recently been published on the NISO web site providing preliminary overviews of the agreed elements, including a comprehensive overview [report] of the two-day meeting. This report for the JIDI group does not attempt to reproduce that work, but instead reports on a number of issues relating to the aims of JISC-funded services such as JIDI and TASI.
The workshop wrestled with a number of problems familiar to many from the non-imaging metadata community. In particular, it reaffirmed the conclusions drawn at the end of the earlier Dublin Core and Images workshop, namely that the requirements (and challenges) of image-oriented metadata applications were in many respects close to those of the bibliographic and text-oriented areas. Where digital imaging applications had specific requirements, these were in many cases related to the high mobility of digital image objects, and the frequency with which these objects were subjected to transformations (eg. format conversion, editing).
A major concern which arose many times during the meeting was that of metadata loss through transformation. Many major software tools (eg. Photoshop) destroy or scramble embedded metadata. Conversion and editing of digital image objects tends, with currently available software, to damage embedded metadata. Given the high mobility of digital images, it is nevertheless appealing to explore models whereby metadata can travel with the objects from application to application. The meeting explored a number of frameworks whereby this could be achieved, including the use of a ‘container format’ which might encapsulate both an image and accompanying metadata. The Java ‘JAR’ archive approach, which combines multiple files plus a metadata ‘manifest’ into a single portable object, was raised as a possible approach. A variant of this model was also discussed which used XML/RDF manifest files within a .ZIP or .JAR container to bundle both image data and metadata into a single transportable object. Although there was some interest in these approaches, no consensus was reached on the appropriate way forward, nor on whether this was a specifically image-related challenge or a general problem for the industry which might benefit from a generalised solution.
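The XML/RDF-manifest-in-a-ZIP variant is easy to prototype today with nothing exotic. A sketch (file names and the Dublin Core manifest content are illustrative, not anything the workshop standardised):

```python
import io
import zipfile

# RDF/XML manifest naming the image and carrying some descriptive metadata
# (vocabulary choices here are illustrative, not from the workshop)
MANIFEST = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="photo.png">
    <dc:title>Reading Room, British Museum</dc:title>
    <dc:creator>JIDI scanning project</dc:creator>
  </rdf:Description>
</rdf:RDF>
"""

def bundle(image_bytes, manifest=MANIFEST):
    """Pack image data plus RDF manifest into one portable .zip container."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("photo.png", image_bytes)
        z.writestr("manifest.rdf", manifest)
    return buf.getvalue()

def read_manifest(container_bytes):
    """Recover the metadata from the container at the receiving end."""
    with zipfile.ZipFile(io.BytesIO(container_bytes)) as z:
        return z.read("manifest.rdf").decode()
```

Because the metadata travels in its own file alongside the image rather than embedded inside it, tools that scramble embedded fields can't touch it; the cost is that the image and container can be separated.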
There was some consensus that digital signatures would need to be deployed over both image metadata and the image data itself, for applications (eg. JIDI collections) which require some degree of quality assurance regarding the transformational processes that image collections have undergone. For digital signature technology to be applied to such content, a canonicalisation algorithm is necessary. RDF and XML were raised as possible solutions in this area, although it was noted that the Signed XML initiative within W3C was not yet underway, and that the RDF working groups had deferred work on canonicalisation of RDF metadata until the issues had been more fully explored by the Signed XML group. Although digital signature technology can be applied to any content for which a text-based representation can be derived, the issue of canonicalisation remains. Broadly, this means that applications which use digital signatures to make trust decisions, and which transform images, will ultimately benefit from a higher level of abstraction concerning ‘what it is that has been signed’. Current technology allows applications to think of simple textual files (eg. containing metadata) as being verifiable using digital signatures over that content. Future work will allow applications to instead make trust decisions relating to the logical ‘assertions’ represented in the files. The implication here is that trust-based metadata applications are at an early stage, but that we can anticipate within perhaps 2 years there being greater infrastructural support for applications which reason usefully about embedded metadata, eg. concerning the provenance, intellectual property rights and transformational history of some image object.
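A crude illustration of why canonicalisation is the precondition: normalise key/value metadata into one deterministic text form, then authenticate that. (This uses a keyed hash as a stand-in for a real signature, and a naive normal form rather than Signed XML or RDF canonicalisation, both of which were still future work at the time.)

```python
import hashlib
import hmac

def canonicalise(metadata):
    """Naive canonical form: sorted key=value lines, whitespace trimmed.
    Real RDF canonicalisation is much harder - this only shows why a
    deterministic serialisation must come before signing."""
    lines = (f"{k.strip()}={v.strip()}" for k, v in sorted(metadata.items()))
    return "\n".join(lines).encode()

def sign(metadata, key):
    # Keyed hash over the canonical form, standing in for a digital signature
    return hmac.new(key, canonicalise(metadata), hashlib.sha256).hexdigest()

key = b"shared-secret"
a = {"dc:title": "Reading Room", "dc:creator": "JIDI"}
b = {"dc:creator": " JIDI ", "dc:title": "Reading Room"}  # same facts, different surface form
assert sign(a, key) == sign(b, key)  # canonical form makes the two verifiable as equal
```

Without the canonical form, two files asserting identical facts in different orders would verify as different content, which is exactly the abstraction gap the workshop was pointing at.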
Why dig this up? Partly because I think a lot of practical work fed into the RDF design and its adoption, eg. in Dublin Core, and this history is poorly documented. Also to kick myself for making that prediction (“anticipate within perhaps 2 years”, phooey :) and because I’m looking again at signed RDF, largely in a FOAF/SPARQL context. Also I’ve just added a ‘foaf4lib’ category to the blog, as an offering to the code4lib aggregator and to accompany the new foaf4lib mailing list we have in the FOAF project.
Keeping my notes findable: Rmutt on MacOSX will build cleanly if you change the CC line in the Makefile to be:
CC = gcc -arch x86_64
That is all. For those who haven’t tried it, Rmutt is a grammar-based nonsense generator along the lines of the Dada Engine (see also Web version). The Dada Engine powers the near-legendary Postmodernism Generator, though I’m lately thinking about putting these kinds of tools into a hosted (openid/wiki etc) environment where people could collaboratively build up shared grammars.
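The core trick in these generators is tiny: recursively rewrite nonterminals until only terminal words remain. A toy sketch (the grammar content is invented, and Rmutt's own grammar language is far richer than this):

```python
import random

# A tiny Rmutt/Dada-Engine-style grammar: each nonterminal maps to a list
# of alternative expansions; a symbol naming a rule is expanded recursively,
# anything else is emitted as a terminal word.
GRAMMAR = {
    "sentence": [["the", "noun", "verb", "the", "noun", "."]],
    "noun": [["semantic", "web"], ["widget"], ["ontology"]],
    "verb": [["deconstructs"], ["aggregates"], ["reifies"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    production = rng.choice(GRAMMAR[symbol])  # pick one alternative at random
    return [word for part in production for word in expand(part, rng)]

def generate(seed=None):
    # Seeding makes the nonsense reproducible, handy for shared grammars
    rng = random.Random(seed)
    return " ".join(expand("sentence", rng))

print(generate(seed=42))
```

A hosted collaborative version would mostly be this loop plus a wiki-editable grammar table and some recursion-depth limits to keep runaway rules in check.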