OpenID, OAuth UI and tool links

A quick link roundup:

From ‘Google OAuth & Federated Login Research’:

“The following provides some guidelines for the user interface design of becoming an OAuth service provider”

Detailed notes on UI issues, with screenshots and links to related work (opensocial etc.).

MySpace’s OAuth Testing tool:

The MySpace OAuth tool creates examples to show external developers the correct format for constructing HTTP requests signed according to OAuth specifications
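
For a sense of what these testing tools are checking, here is a rough sketch (mine, not MySpace’s or Google’s) of the OAuth 1.0 “signature base string” that a signed request is built around. The parameter names come from the OAuth spec; the URL and values are made up, and the HMAC-SHA1 signing step itself is left out.

```javascript
// Sketch: assemble the OAuth 1.0 signature base string for a request.
// Real implementations must use strict RFC 3986 percent-encoding;
// encodeURIComponent is close enough for illustration.
function oauthBaseString(method, url, params) {
  var enc = encodeURIComponent;
  var pairs = Object.keys(params).sort().map(function (k) {
    return enc(k) + "=" + enc(params[k]);
  });
  return [method.toUpperCase(), enc(url), enc(pairs.join("&"))].join("&");
}

// Hypothetical request, for illustration only:
var base = oauthBaseString("GET", "http://api.example.com/v1/friends", {
  oauth_consumer_key: "myConsumerKey",
  oauth_nonce: "abc123",
  oauth_signature_method: "HMAC-SHA1",
  oauth_timestamp: "1219910400",
  oauth_version: "1.0"
});
// The base string is then signed with HMAC-SHA1, keyed on
// consumer_secret + "&" + token_secret, and the result is sent as
// the oauth_signature parameter (or in the Authorization header).
```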

Google’s OAuth playground tool (link):

… to help developers cure their OAuth woes. You can use the Playground to help debug problems, check your own implementation, or experiment with the Google Data APIs.

If anyone figures out how to post files to Blogger via their AtomPub/OAuth API, please post a writeup! We should be able to use it to post RDFa/FOAF etc hopefully…

Yahoo’s OpenID usability research. Really good to see this made public, I hope others do likewise. There’s a summary page and a full report in PDF, “Yahoo! OpenID: One Key, Many Doors”.

Finally, what looks like an excellent set of introductory posts on OAuth: a Beginner’s Guide to OAuth from Eran Hammer-Lahav.

Drupal social/data Web developments

Dries Buytaert on ‘Drupal, the Semantic Web, and search’.

Discusses RDFa, SearchMonkey and more. Great stuff! Excerpt:

This kind of technology is not limited to global search. On a social networking site built with Drupal, it opens up the possibility to do all sorts of deep social searches – searching by types and levels of relationships while simultaneously filtering by other criteria. I was talking with David Peterson the other day about this, and if Drupal core supported FOAF and SIOC out of the box, you could search within your network of friends or colleagues. This would be a fundamentally new way to take advantage of your network or significantly increase the relevance of certain searches.

Meanwhile, PHP Shindig (Apache’s OpenSocial widget container) has now been integrated into Drupal. Currently this targets Drupal 5, but things are moving to get it working in 6.x too. See the docs (this link has some issues) and the announcement/discussion from shindig-dev. Also great news…

Problem statement

A Pew Research Center survey released a few days ago found that only half of Americans correctly know that Mr. Obama is a Christian. Meanwhile, 13 percent of registered voters say that he is a Muslim, compared with 12 percent in June and 10 percent in March.

More ominously, a rising share — now 16 percent — say they aren’t sure about his religion because they’ve heard “different things” about it.

When I’ve traveled around the country, particularly to my childhood home in rural Oregon, I’ve been struck by the number of people who ask something like: That Obama — is he really a Christian? Isn’t he a Muslim or something? Didn’t he take his oath of office on the Koran?

It was in the NYTimes, so it must be true. Will the last one to leave the Web please turn off the lights.

Mozilla Ubiquity

There are some interesting things going on at Mozilla Labs. Yesterday, Ubiquity was all over the mailing lists. You can think of it as “what the Humanized folks did next”, or as a commandline for the Web, or as a Webbier sibling to QuickSilver, the MacOSX utility. I prefer to think of it as the Mozilla add-on that distracted me all day. Ubiquity continues Mozilla’s exploration of the potential UI uses of its “awesome bar” (aka Location bar). Ubiquity is invoked on my Mac with alt-space, at which point it’ll enthusiastically try to autocomplete a verb-centric Webby task from whatever I type. It does this by consulting a pile of built-in and community-provided Javascript functions, which have access to the Web, your browser (hello, widget security fans)… and it also has access to UI, in terms of an overlaid preview window, as well as a context menu that can actually be genuinely contextual, ie. potentially sensitive to microformat and RDFa markup.

So it might help to think of Ubiquity as a cross between The Hobbit, GreaseMonkeyBookmarklets, and Mozilla’s earlier forms of packaged addon. Ok, well it’s not very Hobbit, I just wanted an excuse for this screen grab. But it is about natural language interfaces to complex Webby datasources and services.

The basic idea here is that commands (triggered by some keyword) can be published in the Web as links to simple Javascript files that can be single-click added (without need for browser restart) by anyone trusting enough to add the code to their browser. Social/trust layers to help people avoid bad addons are in the works too.
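
For a flavour of what one of those Javascript files contains, here is a minimal sketch along the lines of the 0.1-era command API (CmdUtils.CreateCommand, noun_arb_text, displayMessage); the details may well differ between Ubiquity releases, so check the current tutorial rather than trusting this.

```javascript
// Minimal Ubiquity command sketch: a verb, an argument, a preview, an action.
CmdUtils.CreateCommand({
  name: "shout",
  takes: { "message": noun_arb_text },        // free text typed after the verb
  description: "Echoes your message back, loudly.",
  preview: function (previewBlock, input) {   // shown in the overlaid preview window
    previewBlock.innerHTML = "Will shout: <b>" + input.text + "</b>";
  },
  execute: function (input) {                 // runs when you hit return
    displayMessage(input.text.toUpperCase());
  }
});
```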

I spent yesterday playing. There are some rough edges, but this is fun stuff for sure. The emphasis is on verbs, hence on doing, rather than solely on lookups, query and data access. Coupled with the dependency on third party Javascript, this is going to need some serious security attention. But but but… it’s so much fun to use and develop for. Something will shake out security-wise. Even if Ubiquity commands are only shared amongst trusting power users who have signed each other’s PGP keys, I think it’ll still have an important niche.

What did I make? A kind of stalk-a-tron, FOAF lookup tool. It currently only consults Google’s Social Graph API, an experimental service built from all the public FOAF and XFN on the Web plus some logic to figure out which account pages are held by the same person. My current demo simply retrieves associated URLs and photos, and displays them overlaid on the current page. If you can’t get it working via the Ubiquity auto-subscribe feature, try adding it by pasting the raw Javascript into the command-editor screen. See also the ‘sindice-term‘ lookup tool from Michael Hausenblas. It should be fun seeing how efforts like Bengee’s SPARQLScript work can be plugged in here, too.
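
The lookup itself is roughly as sketched below: hand the Social Graph API a profile URL and dig associated URLs and photos out of the JSON it returns (Ubiquity bundles jQuery, and command code is allowed to make cross-site requests). The query parameters and response fields here are from memory, so treat them as approximate rather than a faithful copy of my command.

```javascript
// Sketch: query Google's Social Graph API for pages connected to a profile URL.
function lookupPerson(profileUrl, callback) {
  var endpoint = "http://socialgraph.apis.google.com/lookup" +
    "?q=" + encodeURIComponent(profileUrl) +
    "&fme=1&edo=1";                            // follow "me" links, include outbound edges
  jQuery.getJSON(endpoint, function (data) {
    var urls = [], photos = [];
    jQuery.each(data.nodes || {}, function (nodeUrl, node) {
      urls.push(nodeUrl);
      if (node.attributes && node.attributes.photo) {
        photos.push(node.attributes.photo);
      }
    });
    callback(urls, photos);                    // e.g. render into the preview block
  });
}
```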

Miracle of binary

The ultimate absurdity is now staring us in the face: a universal library of two volumes, one containing a single dot and the other a dash. Persistent repetition and alternation of the two is sufficient, we well know, for spelling out any and every truth. The miracle of the finite but universal library is a mere inflation of the miracle of binary notation: everything worth saying, and everything else as well, can be said with two characters. It is a letdown befitting the Wizard of Oz, but it has been a boon to computers.

— Willard van Orman Quine on the Universal Library

(via Borges’ Library of Babel indirectly found via Dan Connolly’s RDFization of the animals quote)

This somehow reminded me of a couple of other links I found earlier on Turing Machines built in Conway’s Game of Life: one from Paul Rendell, another from Paul Chapman. These machines also have a kind of strange beauty.

Be your own twitter: laconi.ca microblog platform and identi.ca

The laconi.ca microblogging platform is as open as you could hope for. That elusive trinity: open source; open standards; and open content.

The project is led by Evan Prodromou (evan) of Wikitravel fame, whose company just launched identi.ca, “an open microblogging service” built with Laconica. These are fast gaining feature-parity with Twitter; yesterday we got a “replies” tab; this morning I woke to find “search” working. Plenty of interesting people have signed up and grabbed usernames. Twitter-compatible tools are emerging.

At first glance this might look like the typical “clone” efforts that spring up whenever a much-loved site gets overloaded. Identi.ca’s success is certainly related to the scaling problems at Twitter, but it’s much more important than that. Looking at FriendFeed comments about identi.ca has sometimes been a little depressing: there is too often a jaded, selfish “why is this worth my attention?” tone. But they’re missing something. Dave Winer wrote a “how to think about identi.ca” post recently; worth a read, as is the ever-wise Edd Dumbill on “Why identica is important”. This project deserves your attention if you value Twitter, or if you care about a standards-based decentralised Social Web.

I have a testbed copy at foaf2foaf.org (I’ve been collecting notes for Laconica installations at Dreamhost). It is also federated. While there is support for XMPP (an IM interface), the main federation mechanism is based on HTTP and OAuth, using the openmicroblogging.org spec. Laconica supports OpenID so you can play without needing another password. But the OpenID usage can also help with federation and account matching across the network.

Laconica (and the identi.ca install) support FOAF by providing FOAF files – data that is already being indexed by Google’s Social Graph API. For example, see my identi.ca FOAF, and a search of the Google SGAPI for my identi.ca account. It is in PHP (and MySQL), so hacking on FOAF consumer code using ARC is a natural step. If anyone is interested in helping with that, talk to me and to Evan (and to Bengee of course).

Laconica encourages everyone to apply a clear license to their microblogged posts; the initial install suggests Creative Commons Attribution 3. Other options will be added. This is important, both to ensure the integrity of a system where posts can be reliably federated, and as part of a general drift towards the opening up of the Web.

Imagine you are, for example, a major media content owner, with tens of thousands of audio, video, or document files. You want to know what the public are saying about your stuff, in all these scattered distributed Social Web systems. That is just about do-able. But then you want to know what you can do with these aggregated comments. Can you include them on your site? Horrible problem! Who really wrote them? What rights have they granted? The OpenID/CC combination suggests a path by which comments can find their way back to the original publishers of the content being discussed.

I’ve been posting a fair bit lately about OAuth, which I suspect may be even more important than OpenID over the next couple of years. OAuth is an under-appreciated technology piece, so I’m glad to see it being used nicely for Laconica. Laconica installations allow you to subscribe to an account from another account elsewhere in the Web. For example, if I am logged into my testbed site at http://foaf2foaf.org/bandri and I visit http://identi.ca/libby, I’ll get an option to (remote-)subscribe. There are bugs and usability problems as of right now, but the approach makes sense: by providing the url of the remote account, identi.ca can bounce me over to foaf2foaf which will ask “really want to subscribe to Libby? [y/n]“, setting up API permissioning for cross-site data flow behind the scenes.

I doubt that the openmicroblogging spec will be the last word on this kind of syndication / federation. But it is progress, practical and moving fast. A close cousin of this design is the work from the SMOB (Semantic Microblogging) project, who use SIOC, FOAF and HTTP. I’m happy to see a conversation already underway about bridging those systems.

Do please consider supporting the project. And a special note for Semantic Web (over)enthusiasts: don’t just show up and demand new RDF-related features. Either build them yourself or dive into the project as a whole. Have a nose around the buglist. There is of course plenty of scope for semwebbery, but I suggest a first priority ought to be to help the project reach a point of general usability and adoption. I’ve nothing against Twitter just as I had nothing at all against Six Apart and Movable Type, back before they opensourced. On the contrary, Movable Type was a great product from great people. But the freedoms and flexibility that opensource buys us are hard to ignore. And so I use WordPress now, having migrated like countless others. My suspicion is we’re at a “WordPress/MovableType” moment here with Identica/Laconica and Twitter, and that of all the platforms jostling to be the “new twitter”, this one is most deserving of success. With opensource, Laconica can be the new Laconica…

You can follow me here: identi.ca/danbri

The Time of Day

(a clock showing no time)

From my Skype logs [2008-06-19 Dan Brickley: 18:24:23]

So I had a drunken dream about online microcurrencies last night. Also about cats and water-slides but that’s another story. Idea was of a karma donation system based on one-off assignments from person to person of specified chunks of their lifetime; ‘giving the time of day’. you’re allowed to give any time of day taken from those days you’ve been alive so far. they’re not directly redistributable, nor necessarily related to what happened during the specified time. there’s no central banker, beyond the notion of ‘the public record’. The system naturally favours the old/experienced, but if someone gets drunk and gives all their time/karma to a porn site, at least in the morning they’ll have another 24h ‘in the bank’. Or they could retract/deny the gift, although doing so a lot would also be visible in the public record and doing so excessively would make one look a bit sketchy, one’s time gifts seem less valuable etc. Anchoring to a real world ‘good’ (time) is supposed to provide some control against runaway inflation, as is non-redistributability, but also the time thing is nice for visualizations and explanation. I’m not really sure if it makes sense but thought i’d write it down before i forget the idea…

One idea would be for the time gifts to be redeemable, but that i think pushes the metaphor too far into being a real currency for a fictional world where hourly rates are flattened. Some Lets schemes probably work that way I guess…

So I’ve been meaning to write this up, but in the absence of having done so, here’s the idea as it first struck me. I had been thinking a bit about online reputation services, and the kinds of information they might aggregate. Garlik’s QDOS and FOAF experiments are a good example of this kind of evidence aggregation. As OpenID, FOAF, microformats etc. take hold, I really think we’ll see a massive parting of waves, red sea style, with the “public record” on one side, and “private stuff” on the other.

And in the public record, we’ll be attaching information about the things we make and do to well-known identifiers for people (and their semi-detached aliases). Various websites have rating and karma mechanisms, but it is far from clear how they’ll look when shared in the public Web. Nor whether something robust and not-too-gameable will come out of it. There are certainly various modelling idioms (eg. advogato do their internal calculations, and then put everyone in one of several broad-brush groups; here’s my advogato FOAF). See also my previous notes on representing expertise.

Now in some IRC channels, there are bots where you can dish out credit by typing things like:

edd++ # xtechy

…and have a bot add up the credits, as well as the comments. In small IRC communities these aren’t gamed except for fun. So I’ve been thinking: how can these kinds of habits ever work in the wider Web, where people are spread across Web sites (but nevertheless identifiable with OpenID and FOAF)? How could it not turn hideous? What limited resource do we each have a supply of? No, not kidneys. And in a hungover stupor I came to think that “the time of day” could be such a resource. It’s really just a metaphor, and I’m not sure at all that the quantifiable nature is a benefit. But I also quite like that we each have a neverending supply of the stuff, and that even a fleeting moment can count.
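
For the curious, the IRC convention above boils down to something like this little sketch (the storage and names here are invented for illustration):

```javascript
// Sketch of an IRC karma bot: parse "name++ # optional comment" and keep a tally.
var karma = {};  // name -> { score, comments }

function handleLine(line) {
  var m = line.match(/^(\w+)\+\+(?:\s*#\s*(.*))?$/);
  if (!m) return;
  var name = m[1], comment = m[2];
  if (!karma[name]) karma[name] = { score: 0, comments: [] };
  karma[name].score += 1;
  if (comment) karma[name].comments.push(comment);
}

handleLine("edd++ # xtechy");
// karma.edd is now { score: 1, comments: ["xtechy"] }
```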
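
And, purely as a hypothetical sketch of the “time of day” idea itself, a gift in the public record might look something like this; every name, field and URI below is invented:

```javascript
// Hypothetical "time of day" gift: a one-off assignment of a chunk of the
// giver's already-lived time. The only banker is the public record; checking
// that a chunk hasn't already been given away would happen against that record.
function makeTimeGift(giver, recipient, start, minutes, note) {
  if (start.getTime() + minutes * 60000 > Date.now()) {
    throw new Error("you can only give time you have already lived");
  }
  return {
    giver: giver,            // e.g. an OpenID URL or FOAF URI
    recipient: recipient,
    start: start,            // when the gifted chunk begins
    minutes: minutes,
    note: note,
    retracted: false         // retraction stays visible in the public record
  };
}

var gift = makeTimeGift("http://example.com/alice", "http://example.org/bob",
  new Date("2008-06-19T18:24:00Z"), 30, "for the ego bank conversation");
```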

Update: here’s a post from Simon Lucy which has a very similar direction (it was Simon I was drinking with the night before writing this). Excerpt:

And what do you do with your positive balance? Need you do anything? I imagine those that care will publish their balance or compare it with others in similar way to company cars or hi fi tvs. There will always be envy and jealousy.

But no one can steal your balance, misuse it.

So who wants to host the Ego Bank.

The main difference compared to my suggested scheme is just that ‘the Web’, and the public record it carries, are the “ego bank”, creating a playground for aggregators of karma, credibility and reputation information. “The time of day” would just be one such category of information…

Referata, a Semantic MediaWiki hosting site

From Yaron Koren on the semediawiki-users list:

I’m pleased to announce the release of the site Referata, at referata.com: a hosting site for SMW-based semantic wikis. This is not the first site to offer hosting of wikis using Semantic MediaWiki (that’s Wikia, as of a few months ago), but it is the first to also offer the usage of Semantic Forms, Semantic Drilldown, Semantic Calendar, Semantic Google Maps and some of the other related extensions you’ve probably heard about; Widgets, Header Tabs, etc. As such, I consider it the first site that lets people create true collaborative databases, where many people can work together on a set of well-structured data.

See the announcement and their features page for more details. Basic usage is free; $20/month premium accounts can have private data, and $250/month enterprise accounts can use their own domains. Not a bad plan, I think. A showcase Referata wiki would help people understand the offering better. In the meantime there is, elsewhere, a list of sites using Semantic MediaWiki. That list omits Chickipedia; we can only wonder why. Also I have my suspicions that Intellipedia runs with the SMW extensions too, but that’s just guessing. Regardless, there are a lot of fun things you could do with this, take a look…