Subject classification and Statistics

Subject classification and statistics share some common problems. This post takes a small example discussed at this week’s ODaF event on “Semantic Statistics” in Tilburg, and explores its expression coded in the Universal Decimal Classification (UDC). UDC supports faceted description, providing an abstract grammar allowing sentence-like subject descriptions to be composed from the “raw materials” defined in its vocabulary scheme.

This makes the mapping of UDC (and to some extent also Dewey classifications) into W3C’s SKOS somewhat lossy, since patterns and conventions for documenting these complex, composed structures are not yet well established. In the NoTube project we are looking into this in a TV context, in large part because the BBC archives make extensive use of UDC via their Lonclass scheme; see my ‘investigating Lonclass’ and UDC seminar talk for more on those scenarios. Until this week I hadn’t thought enough about the potential for using this to link deep into statistical datasets.

One of the examples discussed on Tuesday was as follows (via Richard Cyganiak):

“There were 66 fatal occupational injuries in the Washington, DC metropolitan area in 2008”

There was much interesting discussion in Tilburg about the proper scope and role of Linked Data techniques for sharing this kind of statistical data. Do we use RDF essentially as metadata, to find ‘black boxes’ full of stats, or do we use RDF to try to capture something of what the statistics are telling us about the world? When do we use RDF as simple factual data directly about the world (eg. school X has N pupils [currently; or at time t]), and when does it become a carrier for raw numeric data whose meaning is not so directly expressed at the factual level?

The state of the art in applying RDF here seems to be SDMX-RDF; see Richard’s slides. The SDMX-RDF work uses SKOS to capture code lists, to describe cross-domain concepts and to indicate subject matter.
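As a generic illustration of that last point (deliberately not using the actual SDMX-RDF terms, which I won’t try to reproduce from memory), the “SKOS concept as subject matter” pattern can be sketched in a few lines of rdflib; the dataset URI and namespaces below are invented placeholders:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, SKOS

EX = Namespace("http://example.org/stats/")   # hypothetical dataset namespace
UDC = Namespace("http://example.org/udc/")    # stand-in for real UDC concept URIs

g = Graph()
dataset = EX["occupational-injuries-dc-2008"]

# Point the (hypothetical) dataset at a SKOS concept describing its subject matter.
g.add((dataset, DCTERMS.subject, UDC["331.46"]))

# The code-list entry itself, modelled as a plain SKOS concept.
g.add((UDC["331.46"], RDF.type, SKOS.Concept))
g.add((UDC["331.46"], SKOS.notation, Literal("331.46")))
g.add((UDC["331.46"], SKOS.prefLabel, Literal("Accidents at work", lang="en")))

print(g.serialize(format="turtle"))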

Given all this, I thought it would be worth taking this tiny example and looking at how it might look in UDC, both as an example of the ‘compositional semantics’ some of us hope to capture in extended SKOS descriptions, and to explore scenarios that cross-link numeric data with the bibliographic materials that can be found via library classification techniques such as UDC. So I asked the ever-helpful Aida Slavic (editor in chief of the UDC), who talked me through how this example data item looks from a UDC perspective.

I asked,

So I’ve just got home from a meeting on semweb/stats. These folk encode data values with stuff like “There were 66 fatal occupational injuries in the Washington, DC metropolitan area in 2008”. How much of that could have a UDC coding? I guess I should ask, how would you subject-index a book whose main topic was “occupational injuries in the Washington DC metro area in 2008”?

Aida’s reply (posted with permission):

You can present all of it & much more using UDC. When you encode a subject like this in UDC you store much more information than your proposed sentence actually contains. So my decision of how to ‘translate this into udc’ would depend on learning more about the actual text and the context of the message it conveys, implied audience/purpose, the field of expertise for which the information in the document may be relevant etc. I would probably wonder whether this is a research report, study, news article, textbook, radio broadcast?

Not knowing more than you said, I can play with the following: 331.46(735.215.2/.4)"2008"

Accidents at work — Washington metropolitan area — year 2008
or a bit more detailed: 331.46-053.18(735.215.2/.4)"2008"
Accidents at work — dead persons – Washington metropolitan area — year 2008
[you can say the number of dead persons but this is not pertinent from the point of view of indexing and retrieval]

…or maybe (depending on what is in the content and what the main message of the text is), and because you used the expression ‘fatal injuries’, this may imply that it belongs more to the health and safety/prevention area within health and hygiene, which is in medicine.

The UDC structures composed here are:

TIME “2008”

PLACE (735.215.2/.4)  Counties in the Washington metropolitan area

TOPIC 1
331     Labour. Employment. Work. Labour economics. Organization of  labour
331.4     Working environment. Workplace design. Occupational safety.  Hygiene at work. Accidents at work
331.46  Accidents at work ==> 614.8

TOPIC 2
614   Prophylaxis. Public health measures. Preventive treatment
614.8    Accidents. Risks. Hazards. Accident prevention. Personal protection. Safety
614.8.069    Fatal accidents

NB – classification provides a bit more context and is more precise than words when it comes to presenting content, i.e. if the content is focused on health and safety regulation and occupational health then the choice of numbers and their order would be different, e.g. 614.8.069:331.46-053.18 [relationship between] health & safety policies in prevention of fatal injuries and accidents at work.

So when you read UDC number 331.46 you do not see only e.g. ‘accidents at work’ but ==> ‘accidents at work < occupational health/safety < labour economics, labour organization < economy’,
and when you see UDC number 614.8 it is not only fatal accidents but rather ==> ‘fatal accidents < accident prevention, safety, hazards < Public health and hygiene. Accident prevention’.

When you see (735.2…) you do not see only Washington but also the United States and North America.
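To make the hierarchy behind those readings tangible in RDF terms, here is a minimal rdflib sketch of the 331 chain as plain SKOS. The URIs are invented placeholders, and the composed facets of place and time are exactly the part that plain SKOS does not yet capture:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

UDC = Namespace("http://example.org/udc/")  # placeholder; real UDC summary URIs will differ

g = Graph()

# The hierarchical chain behind 331.46, as plain SKOS concepts.
chain = [
    ("331", "Labour. Employment. Work. Labour economics. Organization of labour"),
    ("331.4", "Working environment. Workplace design. Occupational safety. Hygiene at work. Accidents at work"),
    ("331.46", "Accidents at work"),
]

for i, (notation, label) in enumerate(chain):
    concept = UDC[notation]
    g.add((concept, RDF.type, SKOS.Concept))
    g.add((concept, SKOS.notation, Literal(notation)))
    g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
    if i > 0:
        g.add((concept, SKOS.broader, UDC[chain[i - 1][0]]))

# The cross-reference 331.46 ==> 614.8 might be hinted at with skos:related.
g.add((UDC["331.46"], SKOS.related, UDC["614.8"]))

print(g.serialize(format="turtle"))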

So why is this interesting? A couple of reasons…

1. Each of these complex codes combines several different hierarchically organized components; just as they can be used to explore bibliographic materials, similar approaches might be of value for navigating the growing collections of public statistical data. If SKOS is to be extended / improved to better support subject classification structures, we should take care also to consider use cases from the world of statistics and numeric data sharing.

2. Multilingual aspects. There are plans to expose SKOS data for the upper levels of UDC. An HTML interface to this “UDC summary” is already available online, and includes collected translations of textual labels in many languages (see progress report). For example, we can look up 331.4 and find (in hierarchical context) definitions in English (“Working environment. Workplace design. Occupational safety. Hygiene at work. Accidents at work”), alongside e.g. Spanish (“Entorno del trabajo. Diseño del lugar de trabajo. Seguridad laboral. Higiene laboral. Accidentes de trabajo”), Croatian, Armenian, …

Linked Data is about sharing work; if someone else has gone to the trouble of making such translations, it is probably worth exploring ways of re-using them. Numeric data is (in theory) linguistically neutral; this should make linking to translations particularly attractive. Much of the work around RDF and stats is about providing sufficient context to the raw values to help us understand what is really meant by “66” in some particular dataset. By exploiting SDMX-RDF’s use of SKOS, it should be possible to go further and to link out to the wider literature on workplace fatalities. This kind of topical linking should work in both directions: exploring out from numeric data to related research, debate and findings, but also coming in and finding relevant datasets that are cross-referenced from books, articles and working papers. W3C recently launched a Library Linked Data group; I look forward to learning more about how libraries are thinking about connecting numeric and non-numeric information.
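To make the multilingual point concrete, here is a minimal sketch of picking out a label in a requested language. The triples are typed in by hand from the labels quoted above (with placeholder URIs); a real client would load them from the published UDC summary data instead:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

UDC = Namespace("http://example.org/udc/")  # placeholder URI scheme

g = Graph()
g.add((UDC["331.4"], SKOS.prefLabel, Literal("Working environment. Workplace design. Occupational safety. Hygiene at work. Accidents at work", lang="en")))
g.add((UDC["331.4"], SKOS.prefLabel, Literal("Entorno del trabajo. Diseño del lugar de trabajo. Seguridad laboral. Higiene laboral. Accidentes de trabajo", lang="es")))

def preferred_label(graph, concept, lang):
    # Return the first skos:prefLabel in the requested language, if any.
    for lit in graph.objects(concept, SKOS.prefLabel):
        if getattr(lit, "language", None) == lang:
            return str(lit)
    return None

print(preferred_label(g, UDC["331.4"], "es"))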

RDFa in Drupal 7: last call for feedback before alpha release

Stéphane has just posted a call for feedback on the Drupal 7 RDFa design, before the first official alpha release.

First reaction, above all, is that this is great news! Very happy to see this work maturing.

I’ve tried to quickly suggest some tweaks to the vocab, by hacking his diagram in photoshop. All it really shows is that I’ve forgotten how to use photoshop, but I’ll upload it here anyway.

So if you click through to the full image, you can see my rough edits.

I’d suggest:

  1. Use (dcterms) dc:subject as the way of pointing from a document to its SKOS subject.
  2. Use (dcterms) dc:creator as the relationship between a document and the person that created it (note that in FOAF, we now declare foaf:maker to map as an equivalentProperty to (dcterms)dc:creator).
  3. Distinguish between the description of the person versus their account in the Drupal system; I would use foaf:Person for the human, and sioc:User (a kind of foaf:OnlineAccount) as the drupal account. The foaf property to link from the former to the latter is foaf:account (new name for foaf:holdsAccount).
  4. Focus on SIOC where it is most at-home: in modelling the structure of the discussion; threading, comments and dialog.
  5. Provide a generated URI for the person. I don’t 100% understand Stéphane’s comment, “Hash URIs for identifying things different from the page describing them can be implemented quite easily but this case hasn’t emerged in core” but perhaps this will be difficult? I’d suggest using URIs ending “userpage#!person” so the fragment IDs can’t clash with HTML usage.

If the core release can provide this basic structure, including a hook for describing the human person rather than the site-specific account (ie. sioc:User), then extensions should be able to add their own richness. The current markup doesn’t quite achieve that, as the human user is only described indirectly (unless I’m misreading the current treatment of sioc:User).
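To make the suggested shape concrete, here is a rough rdflib sketch of the graph I have in mind; the site URIs are invented placeholders rather than anything Drupal actually emits today:

from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS, FOAF, RDF, SKOS

SIOC = Namespace("http://rdfs.org/sioc/ns#")
EX = Namespace("http://example.org/site/")  # placeholder site URIs

g = Graph()
post = EX["node/1"]
person = EX["people/alice#person"]
account = EX["users/alice"]

g.add((post, DCTERMS.creator, person))              # document -> the human who made it
g.add((post, DCTERMS.subject, EX["topics/rdfa"]))   # document -> its SKOS subject
g.add((EX["topics/rdfa"], RDF.type, SKOS.Concept))

g.add((person, RDF.type, FOAF.Person))              # the human being
g.add((person, FOAF.account, account))              # ...linked to their site account
g.add((account, RDF.type, SIOC.User))               # the Drupal-specific account

print(g.serialize(format="turtle"))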

Anyway, I’m nitpicking! This is really great, and a nice and well-deserved boost for the RDFa community.

WOT in RDFa?

(This post is written in RDFa…)

To the best of my knowledge, Ludovic Hirlimann‘s PGP fingerprint is 6EFBD26FC7A212B2E093 B9E868F358F6C139647C. You might also be interested in his photos on flickr, or his workplace, Mozilla Messaging. The GPG key details were checked over a Skype video call with me, Ludo and Kaare A. Larsen.
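For reference, the sort of triples the RDFa is meant to carry look roughly like the rdflib sketch below. I’m quoting the wot: vocabulary terms (wot:hasKey, wot:PubKey, wot:fingerprint) from memory, so treat them as assumptions, and the person and key URIs are placeholders:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

WOT = Namespace("http://xmlns.com/wot/0.1/")           # Web of Trust vocabulary
ludo = URIRef("http://example.org/people/ludo#me")     # placeholder URI for Ludovic
key = URIRef("http://example.org/people/ludo#pgpkey")  # placeholder URI for his key

g = Graph()
g.add((ludo, RDF.type, FOAF.Person))
g.add((ludo, FOAF.name, Literal("Ludovic Hirlimann")))
g.add((ludo, WOT.hasKey, key))                         # wot: terms assumed from memory
g.add((key, RDF.type, WOT.PubKey))
g.add((key, WOT.fingerprint, Literal("6EFBD26FC7A212B2E093 B9E868F358F6C139647C")))

print(g.serialize(format="turtle"))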

This blog post isn’t signed, the URIs it references don’t use SSL, and the image could be switched by evildoers at any time! But the question’s worth asking: is this kind of scruffy key info useful, if there’s enough of it? If I wrote it somehow in Thunderbird’s editor instead, would it be easier to sign? Will 99.9% of humans ever know enough of what’s going on to understand what signing a bunch of complex markup means?

For earlier discussion of this kind of thing, see Joseph Reagle’s Key-free Trust piece (“Does Google Show How the Semantic Web Could Replace Public Key Infrastructure?”). It’s more PKI-free trust than PK-free.

My ’70s Schoolin’ (in RDFa)

I went to Hamsey Green school in the 1970s.

Looking in the UK Govt datasets, I see it is listed there with a homepage of ‘http://www.hamsey-green-infant.surrey.sch.uk’ (which doesn’t seem to work).

Some queries I’m trying against the dataset’s SPARQL endpoint (I’ll update this post if I make them work…)

First a general query, from which I found the URL manually, …

select distinct ?x ?y where { ?x <http://education.data.gov.uk/def/school/websiteAddress> ?y . }

Then I can go back into the data, and find other properties of the school:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
select DISTINCT ?x ?p ?z WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

Results in json

How to make this presentable? I can’t get output=html to work, but if I run this ‘construct’ query it creates a simple flat RDF document:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
CONSTRUCT {
 ?x ?p ?z .
}
WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

So, where are we here? We see two RDF datasets about the same school. One is the simple claim that I attended the school at some time in the past (1976-1978, in fact). The other describes many of its current attributes, most of which may be different now from in the past. In my sample RDFa, I used the most popular Web link for the school to represent it; in the Edubase government data, it has a Web site address but it seems not to be current.

Assuming we’d used the same URIs for the school’s homepage (or indeed for the school itself), these bits of data could be joined.

Perhaps a more compelling example of data linking would be to show this schools data mixed in with something like MySociety’s excellent interactive travel maps? Still, the example above shows that basic “find people who went to my school” queries should be very possible…
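As a rough sketch of what such a query might look like over a merged graph, here is a minimal rdflib version. Both sides of the join are typed in by hand as stand-ins (including a placeholder school URI on the Edubase side), and the join is on the homepage URL, compared as strings because Edubase publishes the address as a literal:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

SCH = Namespace("http://education.data.gov.uk/def/school/")

g = Graph()

# Hand-typed stand-ins: (a) my personal claim, (b) one row of Edubase data.
me = URIRef("http://danbri.org/foaf.rdf#danbri")
school = URIRef("http://example.org/edubase/hamsey-green")  # placeholder school URI
g.add((me, FOAF.schoolHomepage, URIRef("http://www.hamsey-green-infant.surrey.sch.uk")))
g.add((school, SCH.websiteAddress, Literal("http://www.hamsey-green-infant.surrey.sch.uk")))

q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX sch-ont: <http://education.data.gov.uk/def/school/>
SELECT ?person ?school WHERE {
  ?person foaf:schoolHomepage ?homepage .
  ?school sch-ont:websiteAddress ?address .
  FILTER (STR(?homepage) = STR(?address))
}
"""
for row in g.query(q):
    print(row.person, "attended", row.school)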

WordPress trust syndication revisited: F2F plugin

This is a followup to my Syndicating trust? Mediawiki, WordPress and OpenID post. I now have a simple implementation that exports data from WordPress: the F2F plugin. Also some experiments with consuming aggregates of this information from multiple sources.

FOAF has always had a bias towards describing social things that are shown rather than merely stated; this is particularly so in matters of trust. One way of showing basic confidence in others, is by accepting their comments on your blog or Web site. F2F is an experiment in syndicating information about these kinds of everyday public events. With F2F, others can share and re-use this sort of information too; or deal with it in aggregate to spread the risk and bring more evidence into their trust-related decisions. Or they might just use it to find interesting people’s blogs.

OpenID is a technology that lets people authenticate by showing they control some URL. WordPress blogs that use the OpenID plugin slowly accumulate a catalogue of URLs when people leave comments that are approved or rejected. In my previous post I showed how I was using the list of approved OpenIDs from my blog to help configure the administrative groups on the FOAF wiki.

This may all raise more questions than it answers. What level of detail is appropriate? Are numbers useful, or just lists? In what circumstances is it sensible or risky to merge such data? Is there a reasonable use for both ‘accept’ lists and ‘unaccept’ lists? What can we do with a list of OpenID URLs once we’ve got it? How do we know when two bits of trust ‘evidence’ actually share a common source? How do we find this information from the homepage of a blog?

If you install the F2F plugin (and have been using the OpenID plugin long enough to have accumulated a database table of OpenIDs associated with submitted comments), you can experiment with this. Basically it will generate HTML in RDFa format describing a list of people. See the F2F Wiki page for details and examples.

The script is pretty raw, but today it all improved a fair bit with help from Ed Summers, Daniel Krech and Morten Frederiksen. Ed and Daniel helped me get started with consuming this RDFa and SPARQL in the latest version of the rdflib Python library. Morten rewrote my initial nasty hack, so that it used WordPress Shortcodes instead of hardcoding a URL path. This means that any page containing a certain string – f2f in chunky brackets – will get the OpenID list added to it. I’ll try that now, right here in this post. If it works, you’ll get a list of URLs below. Also thanks to Gerald Oskoboiny for discussions on this and reputation-related aggregation ideas; see his page on reputation and trust for lots more related ideas and sites. See also Peter Williams’ feedback on the foaf-dev list.
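For the consuming side, a minimal sketch (assuming an rdflib install that ships an RDFa parser; the exact properties F2F emits are documented on its wiki page rather than guessed here) is simply to parse the blog page and see which foaf:Person resources show up:

from rdflib import Graph
from rdflib.namespace import FOAF, RDF

g = Graph()
# Assumes format="rdfa" is available in the installed rdflib.
g.parse("http://danbri.org/words/", format="rdfa")

people = set(g.subjects(RDF.type, FOAF.Person))
print(len(people), "foaf:Person resources found")
for p in sorted(people):
    print(p)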

Next steps? I’d be happy to have a few more installations of this, to get some testbed data. Ideally from an overlapping community so the datasets are linked, though that’s not essential. Ed has a copy installed currently too. I’ll also update the scripts I use to manage the FOAF MediaWiki admin groups, to load data from RDFa blogs; mine and others if people volunteer relevant data. It would be great to have exports from other software too, eg. Drupal or MediaWiki.

Comment accept list for http://danbri.org/words

WordPress, TinyMCE and RDFa editors

I’m writing this in WordPress’s ‘Visual’ mode WYSIWYG HTML editor, and thinking “how could it be improved to support RDFa?”

Well let’s think. Humm. In RDFa, every section of text is always ‘about’ something, and then has typed links or properties associated with that thing. So there are icons ‘B’ for bold, ‘I’ for italics, etc. And thingies for bulleted lists, paragraphs, fonts. I’m not sure how easily it would handle the nesting of full RDFa, but some common case could be a start: a paragraph that was about some specific thing, and within which the topical focus didn’t shift.

So, here is a paragraph about me. How to say it’s really about me? There could be a ‘typeof’ attribute on the paragraph element, with value foaf:Person, and an ‘about’ attribute with a URI for me, eg. http://danbri.org/foaf.rdf#danbri

In this UI, I just type textual paragraphs and then double newlines are interpreted by the publishing engine as separate HTML paragraphs. I don’t see any paragraph-level or even post-level properties in the user interface where an ‘about’ URI could be stored. But I’m sure something could be hacked. Now what about properties and links? Let’s see…

I’d like to link to TinyMCE now, since that is the HTML editor that is embedded in WordPress. So here is a normal link to TinyMCE. To make it, I searched for the TinyMCE homepage, copied its URL into the copy/paste buffer, then pressed the ‘add link’ icon here after highlighting the text I wanted to turn into a link.

It gave me a little HTML/CSS popup window with some options – link URL, target window, link title and (CSS) class; I’ve posted a screenshot below. This bit of UI seems like it could be easily extended for typed links. For example, if I were adding a link to a paragraph that was about a Film, and I was linking to (a page about) its director or an actor, we might want to express that using an annotation on the link. Due to the indirection (we’re trying to say that the link is to a page whose primary topic is the director, etc.), this might complicate our markup. I’ll come back to that later.

tinymce link edit screenshot

I wonder if there’s any mention of RDFa on the TinyMCE site? Nope. Nor even any mention of Microformats.

Perhaps the simplest thing to try to build into TinyMCE would be for a situation where a type on the link expressed a relationship between the topic of the blog post (or paragraph), and some kind of document.

For example, a paragraph about me, might want to annotate a link to my old school’s homepage (ie. http://www.westergate.w-sussex.sch.uk/ ) with a rel=”foaf:schoolHomepage” property.

So our target RDFa would be something like (assuming this markup is escaped and displayed nicely):

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

I’m Dan, and I went to <a rel="foaf:schoolHomepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a>

</p>

See recent discussion on the foaf-dev list where we contrast this idiom with one that uses a relationship (eg. ‘school’) that directly links a person to a school, rather than to the school’s homepage. Toby made some example RDFa output of the latter style, in response to a debate about whether the foaf:schoolHomepage idiom is an ‘anti-pattern’. Excerpt:

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

My name is <span property="foaf:name">Dan Brickley</span> and I spent much of the ’80s at <span rel="foaf:school"><a typeof="foaf:Organization" property="foaf:name" rel="foaf:homepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a></span>.

</p>

So there are some differences between these two idioms. From the RDF side they tell you similar things: the school I went to. The latter idiom is more expressive, and allows additional information to be expressed about the school, without mixing it up with the school’s homepage. The former simply says “this link is to the homepage of Dan’s school”; and presumably defers to the school site or elsewhere for more details.
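One way to see the difference is to write down the triples each idiom yields. A rough rdflib rendering (typed in by hand, so treat the details as illustrative; note that foaf:school is the property Toby’s example assumes, not a core FOAF term):

from rdflib import BNode, Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

me = URIRef("http://danbri.org/foaf.rdf#danbri")
homepage = URIRef("http://www.westergate.w-sussex.sch.uk/")

# Idiom 1: foaf:schoolHomepage links the person straight to the school's homepage.
g1 = Graph()
g1.add((me, RDF.type, FOAF.Person))
g1.add((me, FOAF.schoolHomepage, homepage))

# Idiom 2: an intermediate node stands for the school itself, which can then
# carry its own name, homepage and anything else we want to say about it.
g2 = Graph()
school = BNode()
g2.add((me, RDF.type, FOAF.Person))
g2.add((me, FOAF.name, Literal("Dan Brickley")))
g2.add((me, URIRef("http://xmlns.com/foaf/0.1/school"), school))  # as in Toby's markup
g2.add((school, RDF.type, FOAF.Organization))
g2.add((school, FOAF.name, Literal("Westergate School")))
g2.add((school, FOAF.homepage, homepage))

print(g1.serialize(format="turtle"))
print(g2.serialize(format="turtle"))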

I’m interested to learn more about how these expressivity tradeoffs relate to the possibilities for visual / graphical editors. Could TinyMCE be easily extended to support either idiom? How would the GUI recognise a markup pattern suitable for editing in either style? Can we avoid having to express relationships indirectly, ie. via pages? Would it be a big job to get some basic RDFa facilities into WordPress via TinyMCE?

For related earlier work, see the 2003 SWAD-Europe report on Semantic blogging from the team then at HP Labs Bristol.

Syndicating trust? Mediawiki, WordPress and OpenID

Fancy title but simple code. A periodic update script is setting user/group membership rules on the FOAF wiki based on a list of trusted (for this purpose) OpenIDs exported from a nearby blog. If you’ve commented on the blog using OpenID and it was accepted, this means you can also perform some admin actions (page deletes, moves, blocking spammers etc.) on the FOAF wiki without any additional fuss.

Both WordPress blogs and Mediawiki wikis have some support for OpenID logins.

The FOAF wiki until recently only had one Sysop and Bureaucrat account (a bureaucrat has the same privileges as a Sysop except for the ability to create new bureaucrat accounts). So I’ve begun an experiment exploring the idea of pre-approving certain OpenIDs for bureaucrat activities. For now, I take a list of OpenIDs from my own blog; these appear to be just the good guys, but this might be because only real humans have commented on my blog via OpenID. With a bit of tweaking I’m sure I could write SQL to select out only OpenIDs associated with posts or comments I’ve accepted as non-spammy, though.

So now there’s a script I can run (thanks tobyink and others in #swig IRC for help) which compares an externally supplied list of OpenID URIs with those OpenIDs known to the wiki, and upgrades the status of any overlaps to be bureaucrats. Currently the ‘syndication’ is trivial since the sites are on the same machine, and the UI is minimal; I haven’t figured out how best to convey this notion of ‘pre-approved upgrade’ to the people I’m putting in an admin group. Quite reasonably they might object to being misrepresented as contributors; who knows.
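Stripped of the MediaWiki and WordPress plumbing, the core of the script is just a set intersection. A hypothetical sketch (the file names and the “promote” step are placeholders; the real thing reads from the two databases):

def load_openids(path):
    # One OpenID URL per line; normalise trailing slashes before comparing.
    with open(path) as f:
        return {line.strip().rstrip("/") for line in f if line.strip()}

blog_approved = load_openids("blog_approved_openids.txt")  # placeholder export from WordPress
wiki_known = load_openids("wiki_openids.txt")              # placeholder export from MediaWiki

for openid in sorted(blog_approved & wiki_known):
    print("pre-approve for bureaucrat:", openid)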

But all that aside, take a look and have a think. This kind of approach has a lot going for it. We will have all kinds of lists of people, groups of people, and in many cases we’ll know their OpenIDs. So why not pool what we know? If a blog or wiki has information about an OpenID that shows it is somehow trustworthy, or at least not obviously a spammer, there’s every reason to make notations (eg. FOAF/RDFa) that allow other such sites to harvest and integrate that data…

See also Dan Connolly’s DIG blog post on this, and the current list of Bureaucrats on the FOAF Wiki (and associated documentation). If your name’s on the list, it just means your OpenID was on a pre-approved list of folk who I trust based on their interactions with my own blog. I’d love to add more sources here and make it genuinely communal.

This is all part of the process of getting FOAF moving again. The brains of FOAF is in the IssueTracker page, and since the site was damaged by spammers and hackers recently I’m trying to make sure we have a happy / wholesome environment for maintaining shared documents. And that’s more than I can do as a solo admin, hence this design for opening things up…

Remote remotes

I’ve just closed the loop on last weekend’s XMPP / Apple Remote hack, using Strophe.js, a library that extends XMPP into normal Web pages. I hope I’ll find some way to use this in the NoTube project (eg. wired up to Web-based video playing in OpenSocial apps), but even if not it has been a useful learning experience. See this screenshot of a live HTML page, receiving and displaying remotely streamed events (green blob: button clicked; grey blob: button released). It doesn’t control any video yet, but you get the idea I hope.

Remote apple remote HTML demo

Remote apple remote HTML demo, screenshot showing a picture of handheld apple remote with a grey blob over the play/pause button, indicating a mouse up event. Also shows debug text in html indicating ButtonUpEvent: PLPZ.

This webclient needs the JID and password details for an XMPP account, and I think these need to be from the same HTTP server the HTML is published on. It works using BOSH or other tricks, but for now I’ve not delved into those details and options. Source is in the Buttons area of the FOAF svn: webclient. I made a set of images, one for each button in combination with button-press (‘down’) and button-release (‘up’). I’m running my own ejabberd and using an account ‘buttons@foaf.tv’ on the foaf.tv domain. I also use generic XMPP IM accounts on Google Talk, which work fine although I read recently that very chatty use of such services can result in data rates being reduced.

To send local Apple Remote events to such a client, you need a bit of code running on an OSX machine. I’ve done this in a mix of C and Ruby: imremoted.c (binary) to talk to the remote, and the script buttonhole_surfer.rb to re-broadcast the events. The ruby code uses Switchboard and by default loads account credentials from ~/.switchboardrc.

I’ve done a few tests with this setup. It is pretty responsive considering how much indirection is involved, but the demo UI I made could be prettier. The + and – buttons behave differently from left, right, menu and play/pause: only + and – send an event immediately. The others wait until the key is released, then send a pair of events; except for play/pause, they will also forget what’s happening unless you act quickly. This seems to be a hardware limitation. Apparently Apple are about to ship an updated $20 remote; I hope this aspect of the design is reconsidered, as it limits the UI options for code using these remotes.

I also tried it using two browsers side by side on the same laptop; and two laptops side by side. The events get broadcasted just fine. There is a lot more thinking to do re serious architecture, where passwords and credentials are stored, etc. But XMPP continues to look like a very interesting route.

Finally, why would anyone bother installing compiled C code, Ruby (plus XMPP libraries), their own Jabber server, and so on? Well hopefully, the work can be divided up. Not everyone installs a Jabber server. My thinking is that we can bundle a collection of TV and SPARQL XMPP functionality in a single install, such that local remotes can be used on the network, but also local software (eg. XBMC/Plex/Boxee) can be exposed to the wider network – whether it’s XMPP JavaScript running inside a Web page as shown here, or an iPhone or a multi-touch table. Each will offer different interaction possibilities, but they can chat away to each other using a common link, and common RDF vocabularies (an area we’re working on in NoTube). If some common micro-protocols over XMPP (sending clicks or sending commands or doing RDF queries) can support compelling functionality, then installing a ‘buttons adaptor’ is something you might do once, with multiple benefits. But for now, apart from the JQbus piece, the protocols are still vapourware. First I wanted to get the basic plumbing in place.

Update: I re-discovered a useful ‘which BOSH server do you need?’ overview, which reminds me that there are basically two kinds of BOSH software on offer: those that are built into some existing Jabber/XMPP server (like the ejabberd installation I’m using on foaf.tv) and those that are stand-alone connection managers that proxy traffic into the wider XMPP network. In terms of the current experiment, it means the event stream can (within the XMPP universe) come from addresses other than the host running the Web app. So I should also try installing Punjab. Perhaps it will also let webapps served from other hosts (such as opensocial containers) talk to it via JSON tricks? So far I have only managed to serve working Buttons/Strophe HTML from the same host as my Jabber server, ie. foaf.tv. I’m not sure how feasible the cross-domain option is.

Update x2: Three different people have now mentioned opensoundcontrol to me as something similar, at least on a LAN; it clearly deserves some investigation.

Streaming Apple Events over XMPP

I’ve just posted a script that will re-route the OSX Apple Remote event stream out across XMPP using the Switchboard Ruby library, streaming click-down and click-up events from the device out to any endpoint identified by a Jabber/XMPP JID (i.e. Jabber ID). In my case, I’m connecting to XMPP as the user xmpp:buttons@foaf.tv, who is buddies with xmpp:bob.notube@gmail.com, ie. they are on each other’s Jabber rosters already. Currently I simply send a textual message with the button-press code; a real app would probably use an XMPP IQ stanza instead, which is oriented more towards machines than human readers.
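Purely as an illustration of the idea in another language (the real re-broadcaster is Ruby using Switchboard), here is a minimal Python sketch using slixmpp (an assumption; any XMPP client library would do) that logs in and sends one button code as a plain chat message:

from slixmpp import ClientXMPP

class ButtonSender(ClientXMPP):
    # Illustration only: sends a single button-press code to a Jabber ID.
    def __init__(self, jid, password, recipient, code):
        super().__init__(jid, password)
        self.recipient = recipient
        self.code = code
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        # A real app would probably use an IQ stanza; a plain message is enough here.
        self.send_message(mto=self.recipient, mbody="ButtonDownEvent: " + self.code, mtype="chat")
        self.disconnect()

if __name__ == "__main__":
    xmpp = ButtonSender("buttons@foaf.tv", "secret",  # placeholder password
                        "bob.notube@gmail.com", "PLPZ (0x15)")
    xmpp.connect()
    xmpp.process(forever=False)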

The nice thing about this setup is that I can log in on another laptop to Gmail, run the Javascript Gmail / Google Talk chat UI, and hear a ‘beep’ whenever the event/message arrives in my browser. This is handy for informally testing the laggyness of the connection, which in turn is critical when designing remote control protocols: how chatty should they be? How much smarts should go into the client? which bit of the system really understands what the user is doing? Informally, the XMPP events seem pretty snappy, but I’d prefer to see some real statistics to understand what a UI might risk relying on.

What I’d like to do now is get a Strophe Javascript client running. This will attach to my Jabber server and allow these events to show up in HTML/Javascript apps…

Here’s sample output of the script (local copy but it looks the same remotely), in which I press and release quickly every button in turn:

Cornercase:osx danbri$ ./buttonhole_surfer.rb
starting event loop.

=> Switchboard started.
ButtonDownEvent: PLUS (0x1d)
ButtonUpEvent: PLUS (0x1d)
ButtonDownEvent: MINU (0x1e)
ButtonUpEvent: MINU (0x1e)
ButtonDownEvent: LEFT (0x17)
ButtonUpEvent: LEFT (0x17)
ButtonDownEvent: RIGH (0x16)
ButtonUpEvent: RIGH (0x16)
ButtonDownEvent: PLPZ (0x15)
ButtonUpEvent: PLPZ (0x15)
ButtonDownEvent: MENU (0x14)
ButtonUpEvent: MENU (0x14)
^C
Shutdown initiated.
Waiting for shutdown to complete.
Shutdown initiated.
Waiting for shutdown to complete.
Cornercase:osx danbri$