Everything Still Looks Like A Graph (but graphs look like maps)

Last October I posted a writeup of some experiments that illustrate item-to-item similarities from Apache Mahout using Gephi for visualization. This was under a heading that quotes Ben Fry, “Everything looks like a graph” (but almost nothing should ever be drawn as one). There was also some followup discussion on the Gephi project blog.

I’ve just seen a cluster of related Gephi experiments, which are reinforcing some of my prejudices from last year’s investigations:

These are all well worth a read, both for showing the potential and the limitations of Gephi. It’s not hard to find critiques of the intelligibility or utility of squiggly-but-inspiring network diagrams; Ben Fry’s point was well made. However, I think each of the examples I link here (and my earlier experiments) shows there is some potential in such layouts for presenting ‘similarity neighbourhoods’ in a fairly appealing and intuitive form.

In the case of the history of philosophy it feels a little odd to use a network diagram, since the chronological / timeline aspect is quite important to the notion of a history. Still, it manages to group ‘like with like’, to the extent that the inter-node connections probably needn’t even be shown.

I’m a lot more comfortable with taking the ‘everything looks like a graph’ route if we’re essentially generating a similarity landscape. Whether these ‘landscapes’ can be made stable in the face of dataset changes or re-generation of the visualization is a longer story. Gephi is currently a desktop tool, and as such has memory issues with really large graphs, but it shows the potential of landscape-oriented graph visualization. Longer term I expect we’ll see more of a split between something like Hadoop+Mahout for the big data crunching (e.g. see Mahout’s spectral clustering component, which takes node-to-node affinities as input) and a WebGL, browser-based UI for the front-end. It’s a shame the Gephi efforts in this direction (GraphGL) seem to have gone quiet, but for those of you with modern graphics cards and browsers, take a look at alterqualia’s ‘dynamic terrain’ WebGL demo to get a feel for how landscape-shaped datasets could be presented…
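To make the ‘node-to-node affinities as input’ idea concrete, here is a minimal sketch of clustering items from a precomputed affinity matrix. It uses Python and scikit-learn purely as an illustration (not Hadoop+Mahout), and the matrix values are invented:

# Minimal sketch (Python/scikit-learn, not Mahout): clustering items from a
# precomputed item-to-item affinity matrix -- the kind of input spectral
# clustering expects. The affinity values below are invented.
import numpy as np
from sklearn.cluster import SpectralClustering

affinities = np.array([
    [1.0, 0.9, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.2, 0.1, 0.0],
    [0.1, 0.2, 1.0, 0.8, 0.7],
    [0.0, 0.1, 0.8, 1.0, 0.9],
    [0.1, 0.0, 0.7, 0.9, 1.0],
])

labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(affinities)
print(labels)  # e.g. [0 0 1 1 1]: two 'similarity neighbourhoods'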

Also, btw, take a look at the griffsgraphs landscape of literature, which was built solely from ‘influences’ relationships in Wikipedia, and compare it with the landscapes I was generating last year from Harvard bibliographic data; those were built solely from Harvard’s subject classification data. Now imagine if we could mutate the resulting ‘map’ by choosing our own weighting composited across these two sources. Perhaps for the music, movies or TV areas of the map we might composite in other sources, based on activity data analysed by a recommendation engine, or simply on different factual relationships.
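A rough sketch of what that ‘weighting composited across sources’ might mean in practice; both matrices here are invented stand-ins, one for influence-derived similarity and one for subject-classification overlap:

# Sketch: blend two similarity sources with user-chosen weights. Both matrices
# are invented stand-ins; the blended result could feed a clustering or layout step.
import numpy as np

influence_sim = np.random.rand(4, 4)   # stand-in for Wikipedia 'influences' similarity
subject_sim = np.random.rand(4, 4)     # stand-in for subject-classification similarity

def composite(w_influence, w_subject):
    blended = w_influence * influence_sim + w_subject * subject_sim
    return (blended + blended.T) / 2   # keep it symmetric for graph/layout tools

music_view = composite(0.2, 0.8)       # let subject data dominate
people_view = composite(0.7, 0.3)      # let influence links dominate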

There’s no single ‘correct’ view of the bibliographic landscape; what makes sense for a PhD researcher, a job seeker or a schoolkid will naturally vary. This is true of similarity measures in general, i.e. for see-also lists in plain HTML as well as for fancy graph- or landscape-based visualizations. There are more than metaphorical comparisons to be drawn with the kind of compositing tools we see in systems like Blender, and plenty of opportunities for putting control into end-user rather than engineering hands.

In just the last year, Harvard (and most recently OCLC) have released bibliographic datasets for public re-use, the Wikidata project has launched, and browser support for WebGL has been improving with every release. Despite all the reasonable concerns out there about visualizing graphs as graphs, there’s a lot to be said for treating graphs as maps…

Video Linking: Archives and Encyclopedias

This is a quick visual teaser for some archive.org-related work I’m doing with NoTube colleagues, and a collaboration with Kingsley Idehen on navigating it.

In NoTube we are trying to match people and TV content by using rich Linked Data representations of both. I love Archive.org, and with their help have crawled an experimental subset of the Archive’s video-related metadata. I’ve also used a couple of other sources: Sean P. Aune’s list of 40 great movies, and the Wikipedia page listing US public domain films. I fixed, merged and scraped until I had a reasonable sample dataset for testing.

I wanted to test the Microsoft Pivot Viewer (a Silverlight control), and since OpenLink’s Virtuoso package now has built-in support, I got talking with Kingsley and we ended up with the following demo. Since not everyone has Silverlight, and this is just a rough prototype that may be offline, I’ve made a few screenshots. The real thing is very visual, with animated zooms and transitions, but screenshots give the basic idea.

Notes: the core dataset for now is just links between Archive.org entries and Wikipedia/DBpedia pages. In NoTube we’ll also try the Lupedia, Zemanta and Reuters OpenCalais services on the Archive.org descriptions, to see if they suggest other useful links and categories, as well as any other enrichment sources (delicious tags, machine learning) we can find. There is also more metadata from the Archive that we should be using.

This simple preview shows how one extra fact per archived item creates new opportunities for navigation, discovery and understanding. Note that the UI is in no way tuned to be TV-, video- or archive-specific; it just lets you explore a group of items by their ‘facets’, or common properties. It also reveals that wiki data is rather chaotic; however, some fields (release date, runtime, director, star etc.) are reliably present. And of course, since the data is from Wikipedia, users can always fix the data.
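As a rough sketch of what that one extra fact buys you: given an Archive.org identifier and its linked DBpedia URI (here the Plan 9 from Outer Space resource, which turns up later in this post), a simple SPARQL lookup pulls back facet values. The archive identifier is a placeholder and the dbo: property names are my assumptions about the current DBpedia ontology:

# Sketch: one link from an archive.org item to DBpedia is enough to fetch facets.
# The archive identifier is a placeholder; dbo:director and dbo:releaseDate are
# assumptions about the DBpedia ontology as it stands today.
from SPARQLWrapper import SPARQLWrapper, JSON

links = {"archive.org/details/some_item": "http://dbpedia.org/resource/Plan_9_from_Outer_Space"}
sparql = SPARQLWrapper("http://dbpedia.org/sparql")

for item, dbpedia_uri in links.items():
    sparql.setQuery("""
        SELECT ?director ?released WHERE {
          <%s> <http://dbpedia.org/ontology/director> ?director .
          OPTIONAL { <%s> <http://dbpedia.org/ontology/releaseDate> ?released }
        }""" % (dbpedia_uri, dbpedia_uri))
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(item, row["director"]["value"], row.get("released", {}).get("value"))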

You often hear Linked Data enthusiasts talk about data “silos”, and the need to interconnect them. All that means here is that when collections are linked, improvements to information on one side of the link automatically bring improvements to the other. When a Wikipedia page about a director, actor or movie is improved, it now also improves our means of navigating Archive.org’s wonderful collection. And when someone contributes new video or new HTML5-powered players to the Archive, they’re enriching the encyclopedia too.

Archive.org films on a timeline by release date according to Wikipedia.

One thing to mention is that everything here comes from the Wikipedia data that DBpedia automatically extracts, and currently the extractors are not working perfectly on all films, so this should get better in the future. I also added a lot of the image links myself, semi-automatically. For now, the navigation is much more fact-based than topic-based; however, we do have Wikipedia categories for each film, director, studio etc., and these have been mapped to other category systems (formal and informal), so there are plenty of other directions to explore.

What else can we do? How about flipping the tiled bar chart to organize by the film’s distributor, and constraining the ‘release date’ facet to the 1940s:

That’s nice. But remember that with Linked Data, you’re always dealing with a subset of data. It’s hard to know (and it’s hard for the interface designers to show us) when you have all the relevant data in hand. In this case, we can see what this is telling us about the videos currently available within the demo. But does it tell us anything interesting about all the films in the Archive? All the films in the world? Maybe a little, but interpretation is difficult.
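For what it’s worth, in data terms that distributor/1940s view is just a filter and a group-by over facet values. A toy sketch, with invented rows and assumed column names:

# Toy sketch of the 'pivot': constrain one facet, group by another.
# The rows are invented and the column names are assumptions about a
# flattened version of the demo dataset.
import pandas as pd

films = pd.DataFrame([
    {"title": "Film A", "distributor": "Monogram Pictures", "release_year": 1943},
    {"title": "Film B", "distributor": "Monogram Pictures", "release_year": 1944},
    {"title": "Film C", "distributor": "Republic Pictures", "release_year": 1952},
])

forties = films[films["release_year"].between(1940, 1949)]
print(forties.groupby("distributor").size())   # one bar per distributor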

Next: zoom in to a specific item. The legendary Plan 9 from Outer Space (wikipedia / dbpedia).

Note the HTML-based info panel on the right hand side. In this case it’s automatically generated by Virtuoso from properties of the item. A TV-oriented version would be less generic.

Finally, we can explore the collection by constraining the timeline to show us items organized according to release date, for some facet. Here we show it picking out the career of one Edward J. Kay, at least as far as he shows up as composer of items in this collection:

Now turning back to Wikipedia to learn about ‘Edward J. Kay’, I find he has no entry (beyond these passing mentions of his name) in the English Wikipedia, despite his work on The Ape Man, The Fatal Hour, and other films. The German Wikipedia does honour him with an entry, though. I wonder whether this kind of Linked Data navigation will change the dynamics of the ‘deletionism’ debates at Wikipedia: firstly, by showing that structured data managed elsewhere can enrich Wikipedia (and vice-versa), removing some of the pressure for a single wiki to cover everything; secondly, by providing a tool to stand further back from the data and view things in a larger context, one in which Edward J. Kay’s achievements, for example, become clearer.

Much like Freebase Parallax, the Pivot Viewer hints at a future in which we explore data by navigating from sets of things to other sets of things, like the set of films Edward J. Kay contributed to. Pivot doesn’t yet cover this, but it very vividly presents the potential for this kind of navigation, showing that navigation of films, TV shows and actors may be richer when it embraces more general mechanisms.

How to tell you’re living in the future: bacterial computers, HTML and RDF

Clue no.1. Papers like “Solving a Hamiltonian Path Problem with a bacterial computer” barely raise an eyebrow.

Clue no.2. Undergraduates did most of the work.

And the clincher, …

Clue no.3. The paper is shared nicely on the Web: it uses HTML, carries a Creative Commons document license, and useful RDF can be found nearby.

From the those-crazy-eggheads dept: bacterial computers solving graph data problems. Can’t wait for the JavaScript API. Except the thing of interest here isn’t so much the mad science as what they say about how they did it. But the paper is pretty fun stuff too.

The successful design and construction of a system that enables bacterial computing also validates the experimental approach inherent in synthetic biology. We used new and existing modular parts from the Registry of Standard Biological Parts [17] and connected them using a standard assembly method [18]. We used the principle of abstraction to manage the complexity of our designs and to simplify our thinking about the parts, devices, and systems of our project. The HPP bacterial computer builds upon our previous work and upon the work of others in synthetic biology [19-21]. Perhaps the most impressive aspect of this work was that undergraduates conducted every aspect of the design, modeling, construction, testing, and data analysis.

Meanwhile, over on partsregistry.org you can read more about the bits and pieces they squished together. It’s like a biological CPAN. And in fact the analogy is being actively pursued: see openwetware.org’s work on an RDF description of the catalogue.

I grabbed an RDF file from that site and can confirm that simple queries like

select * from <SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf>  where {<http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022> ?p ?v }

and

select * from <SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf>  where {?x ?p <http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022>  }

… do navigate me around the graph describing the parts used in their paper.
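If you would rather poke at this from Python, the same exploration works with rdflib against a local copy of the file; the two loops mirror the two queries above:

# Sketch: the same two lookups via rdflib, against a local copy of the RDF/XML file.
from rdflib import Graph, URIRef

g = Graph()
g.parse("SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf", format="xml")

part = URIRef("http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022")

for p, o in g.predicate_objects(subject=part):   # what the part points at
    print("->", p, o)
for s, p in g.subject_predicates(object=part):   # what points at the part
    print("<-", s, p)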

Here’s what the HTML paper says right now,

We designed and built all the basic parts used in our experiments as BioBrick compatible parts and submitted them to the Registry of Standard Biological Parts [17]. Key basic parts and their Registry numbers are: 5′ RFP (BBa_I715022), 3′ RFP (BBa_ I715023), 5′ GFP (BBa_I715019), and 3′ GFP (BBa_I715020). All basic parts were DNA sequence verified. The basic parts hixC(BBa_J44000), Hin LVA (BBa_J31001) were used from our previous experiments [8]. The parts were assembled by the BioBrick standard assembly method [18] yielding intermediates and devices that were also submitted to the Registry. Important intermediate and devices constructed are: Edge A (BBa_S03755), Edge B (BBa_S03783), Edge C (BBa_S03784), ABC HPP construct (BBa_I715042), ACB HPP construct (BBa_I715043), and BAC HPP construct (BBa_I715044). We previously built the Hin-LVA expression cassette (BBa_S03536) [8].

How nice to have a scholarly publication in HTML format, published open-access under a Creative Commons license, and backed by machine-processable RDF data. Never mind undergrads getting bacteria to solve NP-hard graph problems; it’s the modern publishing and collaboration machinery described here that makes me feel I’m living in the future…

(World Wide Web – Let’s Share What We Know…)

ps. thanks to Dan Connolly for nudging me to get this shared with the planetrdf.com-reading community. Maybe it’ll nudge Kendall into posting something too.

Subject classification and Statistics

Subject classification and statistics share some common problems. This post takes a small example discussed at this week’s ODaF event on “Semantic Statistics” in Tilburg, and explores how it might be expressed in the Universal Decimal Classification (UDC). UDC supports faceted description, providing an abstract grammar that allows sentence-like subject descriptions to be composed from the “raw materials” defined in its vocabulary scheme.

This makes the mapping of UDC (and to some extent also Dewey classifications) into W3C’s SKOS somewhat lossy, since patterns and conventions for documenting these complex, composed structures are not yet well established. In the NoTube project we are looking into this in a TV context, in large part because the BBC archives make extensive use of UDC via their Lonclass scheme; see my ‘investigating Lonclass’ post and UDC seminar talk for more on those scenarios. Until this week I hadn’t thought enough about the potential for using this to link deep into statistical datasets.

One of the examples discussed on Tuesday was as follows (via Richard Cyganiak):

“There were 66 fatal occupational injuries in the Washington, DC metropolitan area in 2008”

There was much interesting discussion in Tilburg about the proper scope and role of Linked Data techniques for sharing this kind of statistical data. Do we use RDF essentially as metadata, to find ‘black boxes’ full of stats, or do we use RDF to try to capture something of what the statistics are telling us about the world? When do we use RDF as simple factual data directly about the world (eg. school X has N pupils [currently; or at time t]), and when does it become a carrier for raw numeric data whose meaning is not so directly expressed at the factual level?

The state of the art in applying RDF here seems to be SDMX-RDF; see Richard’s slides. The SDMX-RDF work uses SKOS to capture code lists, to describe cross-domain concepts and to indicate subject matter.
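To give a flavour of the shape this aims at, here is a rough sketch of the ‘66 fatal injuries’ example as a single observation, written with rdflib. The namespaces follow the RDF Data Cube line of work that grew out of SDMX-RDF, and the dataset, area and observation URIs are invented placeholders, so treat it as illustrative rather than as the SDMX-RDF idiom itself:

# Rough sketch of the '66 fatal occupational injuries' example as one observation.
# Namespaces follow the Data Cube / SDMX-RDF line of work; the dataset, area and
# observation URIs below are invented placeholders, not real identifiers.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")
SDMX_DIM = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
SDMX_MEASURE = Namespace("http://purl.org/linked-data/sdmx/2009/measure#")
EX = Namespace("http://example.org/stats/")

g = Graph()
obs = EX["obs/dc-fatal-injuries-2008"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/fatal-occupational-injuries"]))
g.add((obs, SDMX_DIM.refArea, EX["area/washington-dc-metro"]))
g.add((obs, SDMX_DIM.refPeriod, Literal("2008", datatype=XSD.gYear)))
g.add((obs, SDMX_MEASURE.obsValue, Literal(66, datatype=XSD.integer)))
print(g.serialize(format="turtle"))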

Given all this, I thought it would be worth taking this tiny example and looking at how it might look in UDC, both as an example of the ‘compositional semantics’ some of us hope to capture in extended SKOS descriptions, but also to explore scenarios that cross-link numeric data with the bibliographic materials that can be found via library classification techniques such as UDC. So I asked the ever-helpful Aida Slavic (editor in chief of the UDC), who talked me through how this example data item looks from a UDC perspective.

I asked,

So I’ve just got home from a meeting on semweb/stats. These folk encode data values with stuff like “There were 66 fatal occupational injuries in the Washington, DC metropolitan area in 2008”. How much of that could have a UDC coding? I guess I should ask: how would you subject-index a book whose main topic was “occupational injuries in the Washington DC metro area in 2008”?

Aida’s reply (posted with permission):

You can present all of it & much more using UDC. When you encode a subject like this in UDC you store much more information than your proposed sentence actually contains. So my decision of how to ‘translate this into udc’ would depend on learning more about the actual text and the context of the message it conveys, implied audience/purpose, the field of expertise for which the information in the document may be relevant etc. I would probably wonder whether this is a research report, study, news article, textbook, radio broadcast?

Not knowing more than you said, I can play with the following: 331.46(735.215.2/.4)“2008”

Accidents at work — Washington metropolitan area — year 2008
or a bit more detailed: 331.46-053.18(735.215.2/.4)“2008”
Accidents at work — dead persons – Washington metropolitan area — year 2008
[you can say the number of dead persons but this is not pertinent from point of view of indexing and retrieval]

…or maybe (depending what is in the content and what is the main message of the text) and because you used the expression ‘fatal injuries’ this may imply that this is more health and safety/ prevention area in health hygiene which is in medicine.

The UDC structures composed here are:

TIME “2008”

PLACE (735.215.2/.4)  Counties in the Washington metropolitan area

TOPIC 1
331     Labour. Employment. Work. Labour economics. Organization of  labour
331.4     Working environment. Workplace design. Occupational safety.  Hygiene at work. Accidents at work
331.46  Accidents at work ==> 614.8

TOPIC 2
614   Prophylaxis. Public health measures. Preventive treatment
614.8    Accidents. Risks. Hazards. Accident prevention. Personal protection. Safety
614.8.069    Fatal accidents

NB – classification provides a bit more context and is more precise than words when it comes to presenting content i.e. if the content is focused on health and safety regulation and occupation health then the choice of numbers and their order would be different e.g. 614.8.069:331.46-053.18 [relationship between] health & safety policies in prevention of fatal injuries and accidents at work.

So when you read  UDC number 331.46 you do not see only e.g. ‘accidents at work’ but  ==>  ‘accidents at work < occupational health/safety < labour economics, labour organization < economy
and when you see UDC number 614.8  it is not only fatal accidents but rather ==> ‘fatal accidents < accident prevention, safety, hazards < Public health and hygiene. Accident prevention

When you see (735.2….) you do not only see Washington but also United States, North America
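As a thought experiment in writing this down, here is one way the composed code and its components might be encoded using SKOS plus invented extension properties. The udcx: terms and example.org URIs are pure invention on my part; the point of this post is precisely that no convention for these composed structures is established yet:

# Thought-experiment sketch: the composed UDC code as a SKOS-ish description.
# The udcx: properties and example.org URIs are invented for illustration only.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import SKOS

UDCX = Namespace("http://example.org/udc-extension#")   # invented namespace
EX = Namespace("http://example.org/subjects/")

g = Graph()
subject = EX["dc-fatal-occupational-injuries-2008"]
g.add((subject, RDF.type, SKOS.Concept))
g.add((subject, SKOS.notation, Literal('331.46-053.18(735.215.2/.4)"2008"')))
g.add((subject, UDCX.topic, Literal("331.46")))          # accidents at work
g.add((subject, UDCX.persons, Literal("-053.18")))       # dead persons
g.add((subject, UDCX.place, Literal("(735.215.2/.4)")))  # Washington metro counties
g.add((subject, UDCX.time, Literal("2008")))
print(g.serialize(format="turtle"))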

So why is this interesting? A couple of reasons…

1. Each of these complex codes combines several different hierarchically organized components; just as they can be used to explore bibliographic materials, similar approaches might be of value for navigating the growing collections of public statistical data. If SKOS is to be extended / improved to better support subject classification structures, we should take care also to consider use cases from the world of statistics and numeric data sharing.

2. Multilingual aspects. There are plans to expose SKOS data for the upper levels of UDC. An HTML interface to this “UDC summary” is already available online, and includes collected translations of textual labels in many languages (see the progress report). For example, we can look up 331.4 and find (in hierarchical context) definitions in English (“Working environment. Workplace design. Occupational safety. Hygiene at work. Accidents at work”), alongside e.g. Spanish (“Entorno del trabajo. Diseño del lugar de trabajo. Seguridad laboral. Higiene laboral. Accidentes de trabajo”), Croatian, Armenian, …
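A sketch of the kind of multilingual lookup this enables, assuming the UDC summary SKOS export uses skos:notation and language-tagged skos:prefLabel (the file name is a placeholder and the real data layout may differ):

# Sketch: labels for UDC 331.4 in several languages, assuming a SKOS export
# with skos:notation and language-tagged skos:prefLabel. File name is a placeholder.
from rdflib import Graph

g = Graph()
g.parse("udc-summary-skos.rdf", format="xml")

query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?label WHERE {
  ?concept skos:notation "331.4" ;
           skos:prefLabel ?label .
  FILTER (lang(?label) IN ("en", "es", "hr", "hy"))
}
"""
for row in g.query(query):
    print(row.label.language, row.label)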

Linked Data is about sharing work; if someone else has gone to the trouble of making such translations, it is probably worth exploring ways of re-using them. Numeric data is (in theory) linguistically neutral; this should make linking to translations particularly attractive. Much of the work around RDF and stats is about providing sufficient context to the raw values to help us understand what is really meant by “66” in some particular dataset. By exploiting SDMX-RDF’s use of SKOS, it should be possible to go further and to link out to the wider literature on workplace fatalities. This kind of topical linking should work in both directions: exploring out from numeric data to related research, debate and findings, but also coming in and finding relevant datasets that are cross-referenced from books, articles and working papers. W3C recently launched a Library Linked Data group; I look forward to learning more about how libraries are thinking about connecting numeric and non-numeric information.

RDFa in Drupal 7: last call for feedback before alpha release

Stéphane has just posted a call for feedback on the Drupal 7 RDFa design, before the first official alpha release.

First reaction above all, is that this is great news! Very happy to see this work maturing.

I’ve tried to quickly suggest some tweaks to the vocab by hacking his diagram in Photoshop. All it really shows is that I’ve forgotten how to use Photoshop, but I’ll upload it here anyway.

So if you click through to the full image, you can see my rough edits.

I’d suggest:

  1. Use (dcterms) dc:subject as the way of pointing from a document to its SKOS subject.
  2. Use (dcterms) dc:creator as the relationship between a document and the person that created it (note that in FOAF, we now declare foaf:maker to map as an equivalentProperty to (dcterms)dc:creator).
  3. Distinguish between the description of the person and their account in the Drupal system; I would use foaf:Person for the human, and sioc:User (a kind of foaf:OnlineAccount) for the Drupal account. The FOAF property to link from the former to the latter is foaf:account (the new name for foaf:holdsAccount).
  4. Focus on SIOC where it is most at-home: in modelling the structure of the discussion; threading, comments and dialog.
  5. Provide a generated URI for the person. I don’t 100% understand Stéphane’s comment, “Hash URIs for identifying things different from the page describing them can be implemented quite easily but this case hasn’t emerged in core”, but perhaps this will be difficult? I’d suggest using URIs ending “userpage#!person” so the fragment IDs can’t clash with HTML usage.

If the core release can provide this basic structure, including a hook for describing the human person rather than just the site-specific account (i.e. the sioc:User), then extensions should be able to add their own richness. The current markup doesn’t quite work for that, as the human user is only described indirectly (unless I’m misreading the current use of sioc:User).
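Pulling those suggestions together, here is roughly the shape of graph I have in mind, sketched as triples with rdflib rather than as the RDFa that Drupal would actually emit; the example.org URIs are placeholders:

# Sketch of the suggested structure as triples (placeholder URIs), not the
# RDFa markup Drupal would actually emit.
from rdflib import Graph, Namespace, URIRef, RDF
from rdflib.namespace import FOAF, DCTERMS

SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
doc = URIRef("http://example.org/node/1")
person = URIRef("http://example.org/user/alice#!person")   # the human
account = URIRef("http://example.org/user/alice")          # the Drupal account
topic = URIRef("http://example.org/taxonomy/term/7")       # a SKOS subject

g.add((doc, DCTERMS.subject, topic))     # 1. document -> SKOS subject
g.add((doc, DCTERMS.creator, person))    # 2. document -> creator (the person)
g.add((person, RDF.type, FOAF.Person))   # 3. the human...
g.add((account, RDF.type, SIOC.User))    #    ...versus the site account
g.add((person, FOAF.account, account))   #    linked by foaf:account
print(g.serialize(format="turtle"))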

Anyway, I’m nitpicking! This is really great, and a nice and well-deserved boost for the RDFa community.

WordPress trust syndication revisited: F2F plugin

This is a followup to my Syndicating trust? Mediawiki, WordPress and OpenID post. I now have a simple implementation that exports data from WordPress: the F2F plugin. Also some experiments with consuming aggregates of this information from multiple sources.

FOAF has always had a bias towards describing social things that are shown rather than merely stated; this is particularly so in matters of trust. One way of showing basic confidence in others is by accepting their comments on your blog or Web site. F2F is an experiment in syndicating information about these kinds of everyday public events. With F2F, others can share and re-use this sort of information too, or deal with it in aggregate to spread the risk and bring more evidence into their trust-related decisions. Or they might just use it to find interesting people’s blogs.

OpenID is a technology that lets people authenticate by showing they control some URL. WordPress blogs that use the OpenID plugin slowly accumulate a catalogue of URLs when people leave comments that are approved or rejected. In my previous post I showed how I was using the list of approved OpenIDs from my blog to help configure the administrative groups on the FOAF wiki.

This may all raise more questions than it answers. What level of detail is appropriate? Are numbers useful, or just lists? In what circumstances is it sensible or risky to merge such data? Is there a reasonable use for both ‘accept’ lists and ‘unaccept’ lists? What can we do with a list of OpenID URLs once we’ve got it? How do we know when two bits of trust ‘evidence’ actually share a common source? How do we find this information from the homepage of a blog?

If you install the F2F plugin (and have been using the OpenID plugin long enough to have accumulated a database table of OpenIDs associated with submitted comments), you can experiment with this. Basically it will generate HTML in RDFa format describing a list of people. See the F2F Wiki page for details and examples.

The script is pretty raw, but today it all improved a fair bit with help from Ed Summers, Daniel Krech and Morten Frederiksen. Ed and Daniel helped me get started with consuming this RDFa and SPARQL in the latest version of the rdflib Python library. Morten rewrote my initial nasty hack, so that it used WordPress Shortcodes instead of hardcoding a URL path. This means that any page containing a certain string – f2f in chunky brackets – will get the OpenID list added to it. I’ll try that now, right here in this post. If it works, you’ll get a list of URLs below. Also thanks to Gerald Oskoboiny for discussions on this and reputation-related aggregation ideas; see his page on reputation and trust for lots more related ideas and sites. See also Peter Williams’ feedback on the foaf-dev list.

Next steps? I’d be happy to have a few more installations of this, to get some testbed data. Ideally from an overlapping community so the datasets are linked, though that’s not essential. Ed has a copy installed currently too. I’ll also update the scripts I use to manage the FOAF MediaWiki admin groups, to load data from RDFa blogs; mine and others if people volunteer relevant data. It would be great to have exports from other software too, eg. Drupal or MediaWiki.
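As a sketch of the consuming side, aggregation across a few F2F-enabled blogs could look something like this. It assumes an rdflib build with RDFa parsing support (as described above) and guesses foaf:openid as the exported property; check the F2F Wiki page for the actual terms, and note that the second blog URL is a placeholder:

# Sketch: aggregate comment-accept lists from several F2F-enabled blogs.
# Assumes rdflib with RDFa parsing, and that the export uses foaf:openid --
# both are assumptions to check against the F2F Wiki page.
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
blogs = ["http://danbri.org/words/", "http://example.org/another-blog/"]  # second is a placeholder

accepted = {}
for url in blogs:
    g = Graph()
    g.parse(url, format="rdfa")            # RDFa support assumed
    for openid in g.objects(predicate=FOAF.openid):
        accepted.setdefault(str(openid), set()).add(url)

# OpenIDs vouched for by more than one blog carry a little more evidence.
for openid, sources in sorted(accepted.items()):
    print(len(sources), openid)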

Comment accept list for http://danbri.org/words

WordPress, TinyMCE and RDFa editors

I’m writing this in WordPress’s ‘Visual’ mode WYSIWYG HTML editor, and thinking “how could it be improved to support RDFa?”

Well let’s think. Humm. In RDFa, every section of text is always ‘about’ something, and then has typed links or properties associated with that thing. So there are icons ‘B’ for bold, ‘I’ for italics, etc. And thingies for bulleted lists, paragraphs, fonts. I’m not sure how easily it would handle the nesting of full RDFa, but some common case could be a start: a paragraph that was about some specific thing, and within which the topical focus didn’t shift.

So, here is a paragraph about me. How to say it’s really about me? There could be a ‘typeof’ attribute on the paragraph element, with value foaf:Person, and an ‘about’ attribute with a URI for me, eg. http://danbri.org/foaf.rdf#danbri

In this UI, I just type textual paragraphs and then double newlines are interpreted by the publishing engine as separate HTML paragraphs. I don’t see any paragraph-level or even post-level properties in the user interface where an ‘about’ URI could be stored. But I’m sure something could be hacked. Now what about properties and links? Let’s see…

I’d like to link to TinyMCE now, since that is the HTML editor that is embedded in WordPress. So here is a normal link to TinyMCE. To make it, I searched for the TinyMCE homepage, copied its URL into the copy/paste buffer, then pressed the ‘add link’ icon here after highlighting the text I wanted to turn into a link.

It gave me a little HTML/CSS popup window with some options – link URL, target window, link title and (CSS) class; I’ve posted a screenshot below. This bit of UI seems like it could be easily extended for typed links. For example, if I were adding a link to a paragraph that was about a Film, and I was linking to (a page about) its director or an actor, we might want to express that using an annotation on the link. Due to the indirection (we’re trying to say that the link is to a page whose primary topic is the director, etc.), this might complicate our markup. I’ll come back to that later.

TinyMCE link edit screenshot

I wonder if there’s any mention of RDFa on the TinyMCE site? Nope. Nor even any mention of Microformats.

Perhaps the simplest thing to build into TinyMCE would be support for the situation where a type on the link expresses a relationship between the topic of the blog post (or paragraph) and some kind of document.

For example, a paragraph about me might want to annotate a link to my old school’s homepage (i.e. http://www.westergate.w-sussex.sch.uk/ ) with a rel="foaf:schoolHomepage" property.

So our target RDFa would be something like (assuming this markup is escaped and displayed nicely):

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

I'm Dan, and I went to <a rel="foaf:schoolHomepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a>

</p>

See recent discussion on the foaf-dev list where we contrast this idiom with one that uses a relationship (eg. ‘school’) that directly links a person to a school, rather than to the school’s homepage. Toby made some example RDFa output of the latter style, in response to a debate about whether the foaf:schoolHomepage idiom is an ‘anti-pattern’. Excerpt:

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

My name is <span property="foaf:name">Dan Brickley</span> and I spent much of the '80s at <span rel="foaf:school"><a typeof="foaf:Organization" property="foaf:name" rel="foaf:homepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a></span>.

</p>

So there are some differences between these two idioms. From the RDF side they tell you similar things: the school I went to. The latter idiom is more expressive, and allows additional information to be expressed about the school, without mixing it up with the school’s homepage. The former simply says “this link is to the homepage of Dan’s school”; and presumably defers to the school site or elsewhere for more details.

I’m interested to learn more about how these expressivity tradeoffs relate to the possibilities for visual / graphical editors. Could TinyMCE be easily extended to support either idiom? How would the GUI recognise a markup pattern suitable for editing in either style? Can we avoid having to express relationships indirectly, ie. via pages? Would it be a big job to get some basic RDFa facilities into WordPress via TinyMCE?

For related earlier work, see the 2003 SWAD-Europe report on Semantic blogging from the team then at HP Labs Bristol.

Remote remotes

I’ve just closed the loop on last weekend’s XMPP / Apple Remote hack, using Strophe.js, a library that extends XMPP into normal Web pages. I hope I’ll find some way to use this in the NoTube project (eg. wired up to Web-based video playing in OpenSocial apps), but even if not it has been a useful learning experience. See this screenshot of a live HTML page, receiving and displaying remotely streamed events (green blob: button clicked; grey blob: button released). It doesn’t control any video yet, but you get the idea I hope.

Remote Apple Remote HTML demo: screenshot showing a handheld Apple remote with a grey blob over the play/pause button, indicating a mouse-up event. Debug text in the HTML reads ButtonUpEvent: PLPZ.

This webclient needs the JID and password details for an XMPP account, and I think these need to be from the same HTTP server the HTML is published on. It works using BOSH or other tricks, but for now I’ve not delved into those details and options. Source is in the Buttons area of the FOAF svn: webclient. I made a set of images, for each button in combination with button-press (‘down’), button-release (‘up’). I’m running my own ejabberd and using an account ‘buttons@foaf.tv’ on the foaf.tv domain. I also use generic XMPP IM accounts on Google Talk, which work fine although I read recently that very chatty use of such services can result in data rates being reduced.

To send local Apple Remote events to such a client, you need a bit of code running on an OSX machine. I’ve done this in a mix of C and Ruby: imremoted.c (binary) to talk to the remote, and the script buttonhole_surfer.rb to re-broadcast the events. The ruby code uses Switchboard and by default loads account credentials from ~/.switchboardrc.

I’ve done a few tests with this setup. It is pretty responsive considering how much indirection is involved, though the demo UI I made could be prettier. The + and – buttons behave differently from left, right, menu and play/pause: only + and – send an event immediately; the others wait until the key is released, then send a pair of events. The other keys (except play/pause) will also forget what’s happening unless you act quickly. This seems to be a hardware limitation. Apparently Apple are about to ship an updated $20 remote; I hope this aspect of the design is reconsidered, as it limits the UI options for code using these remotes.

I also tried it using two browsers side by side on the same laptop, and two laptops side by side. The events get broadcast just fine. There is a lot more thinking to do on serious architecture, where passwords and credentials are stored, etc. But XMPP continues to look like a very interesting route.

Finally, why would anyone bother installing compiled C code, Ruby (plus XMPP libraries), their own Jabber server, and so on? Well hopefully, the work can be divided up. Not everyone installs a Jabber server. My thinking is that we can bundle a collection of TV and SPARQL XMPP functionality in a single install, such that local remotes can be used on the network, but also local software (eg. XBMC/Plex/Boxee) can be exposed to the wider network – whether it’s XMPP JavaScript running inside a Web page as shown here, or an iPhone or a multi-touch table. Each will offer different interaction possibilities, but they can chat away to each other using a common link, and common RDF vocabularies (an area we’re working on in NoTube). If some common micro-protocols over XMPP (sending clicks or sending commands or doing RDF queries) can support compelling functionality, then installing a ‘buttons adaptor’ is something you might do once, with multiple benefits. But for now, apart from the JQbus piece, the protocols are still vapourware. First I wanted to get the basic plumbing in place.

Update: I re-discovered a useful piece on ‘which BOSH server do you need?’, which reminds me that there are basically two kinds of BOSH software on offer: those that are built into some existing Jabber/XMPP server (like the ejabberd installation I’m using on foaf.tv) and those that are stand-alone connection managers that proxy traffic into the wider XMPP network. In terms of the current experiment, it means the event stream can (within the XMPP universe) come from addresses other than the host running the Web app. So I should also try installing Punjab. Perhaps it will also let webapps served from other hosts (such as OpenSocial containers) talk to it via JSON tricks? So far I have only managed to serve working Buttons/Strophe HTML from the same host as my Jabber server, i.e. foaf.tv. I’m not sure how feasible the cross-domain option is.

Update x2: Three different people have now mentioned Open Sound Control to me as something similar, at least on a LAN; it clearly deserves some investigation.

Streaming Apple Events over XMPP

I’ve just posted a script that will re-route the OSX Apple Remote event stream out across XMPP using the Switchboard Ruby library, streaming click-down and click-up events from the device out to any endpoint identified by a Jabber/XMPP JID (i.e. Jabber ID). In my case, I’m connecting to XMPP as the user xmpp:buttons@foaf.tv, who is buddies with xmpp:bob.notube@gmail.com, ie. they are on each other’s Jabber rosters already. Currently I simply send a textual message with the button-press code; a real app would probably use an XMPP IQ stanza instead, which is oriented more towards machines than human readers.
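For anyone who would rather not run Ruby, the same re-broadcasting idea fits in a few lines of Python. This is a sketch using slixmpp rather than the Switchboard-based script described here; the password is a placeholder and read_button_events() stands in for whatever is actually reading the Apple Remote:

# Sketch of the same idea in Python (slixmpp) instead of Ruby/Switchboard.
# The password is a placeholder; read_button_events() stands in for the real
# event source (imremoted) and here just fakes a single event.
import slixmpp

class ButtonStreamer(slixmpp.ClientXMPP):
    def __init__(self, jid, password, recipient):
        super().__init__(jid, password)
        self.recipient = recipient
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        for line in read_button_events():   # e.g. "ButtonDownEvent: PLPZ (0x15)"
            self.send_message(mto=self.recipient, mbody=line, mtype="chat")

def read_button_events():
    yield "ButtonDownEvent: PLPZ (0x15)"    # stand-in for the real event stream

xmpp = ButtonStreamer("buttons@foaf.tv", "secret", "bob.notube@gmail.com")
xmpp.connect()
xmpp.process(forever=True)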

The nice thing about this setup is that I can log in on another laptop to Gmail, run the JavaScript Gmail / Google Talk chat UI, and hear a ‘beep’ whenever the event/message arrives in my browser. This is handy for informally testing the lagginess of the connection, which in turn is critical when designing remote control protocols: how chatty should they be? How much smarts should go into the client? Which bit of the system really understands what the user is doing? Informally, the XMPP events seem pretty snappy, but I’d prefer to see some real statistics to understand what a UI might risk relying on.

What I’d like to do now is get a Strophe Javascript client running. This will attach to my Jabber server and allow these events to show up in HTML/Javascript apps…

Here’s sample output of the script (local copy but it looks the same remotely), in which I press and release quickly every button in turn:

Cornercase:osx danbri$ ./buttonhole_surfer.rb
starting event loop.

=> Switchboard started.
ButtonDownEvent: PLUS (0x1d)
ButtonUpEvent: PLUS (0x1d)
ButtonDownEvent: MINU (0x1e)
ButtonUpEvent: MINU (0x1e)
ButtonDownEvent: LEFT (0x17)
ButtonUpEvent: LEFT (0x17)
ButtonDownEvent: RIGH (0x16)
ButtonUpEvent: RIGH (0x16)
ButtonDownEvent: PLPZ (0x15)
ButtonUpEvent: PLPZ (0x15)
ButtonDownEvent: MENU (0x14)
ButtonUpEvent: MENU (0x14)
^C
Shutdown initiated.
Waiting for shutdown to complete.
Shutdown initiated.
Waiting for shutdown to complete.
Cornercase:osx danbri$