Apple Remote events – a quick howto

Anyone who has recently bought an Apple computer probably has one or more Apple Remotes.

I have been learning how to access them. Conclusion: iremoted does 95% of what you probably need, and the discussion over on cocoadev.com tells you more than you probably wanted to know. My experiments are written up in a corner of the FOAF wiki, and I have a large collection of related links stored under the ‘buttons’ tag at delicious.com.

Summary: If you run iremoted, it calls OSX APIs and gets notified whenever button presses on the Apple Remote are detected.

There are six buttons: a menu button, a play/pause main button, and four others surrounding it. To the left and right are ‘back’ and ‘fwd’; above and below are ‘+’ and ‘-’.

Interestingly, from a protocol and UI design point of view, those last two buttons behave differently. When you press and release any of the other four buttons, both events are reported only after you release; with ‘+’ and ‘-’, the initial press is reported immediately. Furthermore, only ‘+’ and ‘-’ do anything interesting if you hold for longer than a second or so; with the others, the press is silently dropped, even when you release the button.

I am looking at this since I am revisiting my old Jqbus FOAF experiments with XMPP and SPARQL for data access; but this time building in some notion of behaviour too, hence the new name: Buttons. The idea is to use XMPP as a universal communication environment between remote controls and remote-controllable things, initially in the world around TV. So I am exploring adaptors for existing media centres and media players (Boxee, XBMC, iTunes etc).

There are various levels of abstraction at which XMPP could be used as a communications pipe. It could stream clicks directly, simply reflecting the stream of infra-red events and translating them into IQ messages sent to JID-identified XMPP accounts. Or we could try to interpret them closer to their source, and send more meaningful events: for example, if the ‘+’ is held for 3 seconds, do we wait until it is released before sending a “Maximize volume” message out across the ‘net? Other approaches (eg. as suggested on the xmpp list, and in some accessibility work) include exporting bits of UI to the remote. Or, in the case of really dumb remotes with no screen, at least some sort of invisible proxy/wrapper that tries to avoid spraying every user click out across XMPP.
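To make that concrete, here is a rough sketch (in Python, untested) of the ‘interpret near the source’ option: collapse raw press/release events into higher-level commands, then wrap each command for sending to a JID-identified player. The stanza format, the command names and the 3-second threshold are all my own invention, and a real version would build stanzas with a proper XMPP library rather than by string-pasting.

    HOLD_THRESHOLD = 3.0  # seconds; an arbitrary choice

    def interpret(events):
        """Collapse (button, action, timestamp) tuples into command strings."""
        pressed_at = {}
        for button, action, ts in events:
            if action == "press":
                pressed_at[button] = ts
            elif action == "release":
                held = ts - pressed_at.pop(button, ts)
                if button in ("+", "-") and held >= HOLD_THRESHOLD:
                    # A long hold becomes one high-level event, not a click storm.
                    yield "volume-max" if button == "+" else "volume-min"
                else:
                    yield {"+": "volume-up", "-": "volume-down",
                           "menu": "menu", "play": "play-pause",
                           "fwd": "next", "back": "previous"}[button]

    def to_iq(command, to_jid):
        """Wrap a command in a made-up IQ payload for a remote player."""
        return ('<iq type="set" to="%s">'
                '<command xmlns="http://example.org/buttons">%s</command>'
                '</iq>' % (to_jid, command))

    # A short '+' tap, then a three-second '+' hold:
    demo = [("+", "press", 0.0), ("+", "release", 0.2),
            ("+", "press", 1.0), ("+", "release", 4.2)]
    for cmd in interpret(demo):
        print(to_iq(cmd, "mediacentre@example.org"))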

Anyway, I thought this worth a writeup. Here’s a link to my copy of iremoted, iremoted_buttons.c. The main change (apart from trivial syntax edits to its output) is the use of kIOHIDOptionsTypeSeizeDevice. This is pretty important, as it stops other processes on your Mac receiving the same events; eg. Front Row taking over the laptop UI whenever ‘menu’ is pressed, or the local volume being changed by ‘+’ and ‘-’:

    /* Open the HID device in "seize" mode, so no other process sees these events. */
    ioReturnValue = (*hidDeviceInterface)->open(hidDeviceInterface, kIOHIDOptionsTypeSeizeDevice);

Local Video for Local People

OK, it’s all Google stuff, but still good to see. Go to Google Maps, My Maps, to find ‘Videos from YouTube’ listed. Here’s where I used to live (Bristol, UK) and where I live now (Amsterdam, The Netherlands). Here’s a promo film of some nearby art installations from ArtZuid, who even have a page in English. I wouldn’t have found the video or the nearby links except through the map overlay. At first I couldn’t see how the videos were being geotagged; it turns out that in YouTube you can add a map link under ‘My Videos’ / ‘Edit Video’, which I didn’t spot initially. I made some investigations into similar issues (videos on maps) while at Joost; see the brief mention in my Fundamentos Web slides from a couple of years ago.
Oh, nearly forgot to mention: zooming out to get a Europe or World-wide view is quite striking too.

Quick clarification on SPARQL extensions and “Lock-in”

It’s clear from discussion bouncing around IRC, Twitter, Skype and elsewhere that “Lock-in” isn’t a phrase to use lightly.

So I post this to make myself absolutely clear. A few days ago I mentioned in IRC a concern that newcomers to SPARQL and RDF databases might not appreciate which SPARQL extensions are widely implemented, and which are the specialist offerings of the system they happen to be using. I mentioned OpenLink’s Virtuoso in particular as a SPARQL implementation that had a rich and powerful set of extensions.

Since it seems there is some risk I might be misinterpreted as suggesting OpenLink are actively trying to “do a Microsoft” and trap users in some proprietary pseudo-SPARQL, I’ll state what I took to be obvious background knowledge: OpenLink are a company who owe their success to the promotion of cross-vendor database portability; they have been tireless advocates of a standards-based Semantic Web, and they’re active in proposing extensions to W3C for standardisation. So – no criticism of OpenLink intended. None at all.

All I think we need here are a few utilities that help developers understand the nature of the various SPARQL dialects and the potential costs/benefits of using them. Perhaps an online validator, alongside those for RDF/XML, RDFa, Turtle etc. Such a validator might usefully list the extensions used in some query, and give pointers (perhaps into a wiki) where the status of the various extension constructs can be discussed and documented.
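To illustrate, here is a toy sketch of what such a utility might look for. It is not a real SPARQL parser, just regex-spotting, and the marker list is illustrative rather than complete (bif: functions and DEFINE pragmas are Virtuoso-specific; LET is an ARQ extension):

    import re

    # Toy dialect-spotter: a real validator would parse the query properly.
    MARKERS = {
        r"\bbif:\w+":    "Virtuoso built-in function (bif:)",
        r"^\s*DEFINE\b": "Virtuoso DEFINE pragma",
        r"\bLET\s*\(":   "ARQ-style LET assignment",
    }

    def extensions_used(query):
        """Return human-readable labels for extensions spotted in a query."""
        return [label for pattern, label in MARKERS.items()
                if re.search(pattern, query, re.IGNORECASE | re.MULTILINE)]

    q = "SELECT ?s WHERE { ?s ?p ?o . FILTER (bif:contains(?o, 'semantic')) }"
    for label in extensions_used(q):
        print("Non-standard construct:", label)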

Since SPARQL is such a young language, it lacks a lot of things that are taken for granted in the SQL world, and so using rich custom extensions when available is for many developers a sensible choice. My only concern is that it must be a choice, and one entered into consciously.

Getting started with Mozilla Jetpack for Thunderbird (on OSX)

A few weeks ago, I started to experiment with Mozilla’s new Jetpack extension model when it became available for Thunderbird. Revisiting the idea today, I realise I’d forgotten the basic setup details, so am recording them here for future reference.

I found I had to download the source from the Mercurial Web interface, rather than use the pre-prepared XPI installers. This may have improved by the time you read this. I also learned (from Standard9 in #jetpack IRC) that I needed asuth’s repository, rather than the main one. Again, things move quickly; don’t assume this is true forever.

Here is what worked for me, on OSX.

1. Grab a .zip from the Jetpack repo, and unpack it locally on a machine that has Thunderbird installed.

2. Edit extensions/install.rdf and make sure the em:maxVersion in the Thunderbird section matches your version of Thunderbird. In mine I updated it to say <em:maxVersion>3.0b4</em:maxVersion> (instead of 3.0b4pre).

3. See the README in the jetpack filetree for installation details. With Thunderbird closed, I ran “python manage.py install --app=thunderbird” and found Jetpack installed fine.

4. Run Thunderbird, you should see an about:jetpack tab, and corresponding options in the Tools menu.

This was enough to get started. See discussion on visophyte.org for some example code.

After installation, you can use the about:jetpack windows to load, reload and delete Jetpacks from a URL.

So, why would you bother doing all this? Jetpack provides a simple way of extending an email client using Web technology.

In my current (unfinished!) experiment, for example, I’m looking at making a sidebar that shows information (photo, blog etc.) about the sender of the currently-viewed email. And I figured that if I blogged this HOWTO, someone more familiar with Ajax, jQuery etc. might care to help with wiring this up to the Google Social Graph JSON API, so we can use FOAF and XFN to provide more contextual information around incoming mail…
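For the curious, here is roughly the lookup I have in mind, as a standalone Python sketch. It assumes we have already mapped the sender’s address to a homepage or profile URL (itself an open problem), and I am recalling the endpoint and parameters from memory, so treat it as illustrative:

    import json
    import urllib.request
    from urllib.parse import urlencode

    def social_graph_lookup(url):
        """Ask the Social Graph API what it knows about a URL.
        edo=1 asks for outbound edges, fme=1 follows rel="me" links."""
        params = urlencode({"q": url, "edo": 1, "fme": 1})
        endpoint = "http://socialgraph.apis.google.com/lookup?" + params
        with urllib.request.urlopen(endpoint) as resp:
            return json.load(resp)

    # Hypothetical usage: the sender's homepage, however we obtained it.
    data = social_graph_lookup("http://danbri.org/")
    for node, info in sorted(data.get("nodes", {}).items()):
        print(node, info.get("attributes", {}).get("fn", ""))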

(All of the above assumes you are running Thunderbird 3.0b4.)

Mirrors and Prisms: robust site-specific browsers

Mozilla (amongst others; see Chris Messina’s writeup of the trend, also Matt’s) have been exploring site-specific browsers through their Prism project. These combine aspects of the Web and desktop environments, allowing you to have a desktop app tuned for browsing just one specific Web site. Prism is an application which, when run, will generate new per-site desktop applications. It does not yet have fancy packaging/installers, so users need to install Prism plus the site files separately.

I have started to look at Prism as a basis for accessing robust, mirrored sites, so that a single point of failure (or censorship) might be avoided. With a lot of help from Matt and others in #prism IRC chat, I have something almost working. The idea is simple: hack Prism so that the running browser code intercepts clicks and (based on some as-yet-undefined logic and preferences) gets the page from a list of mirrors, a list which might itself be fetched dynamically from the ‘net.

I should also mention that one motivation here is anti-censorship tools: giving users an easy way to access sites which might otherwise be blocked by IP address or URL. I looked at FoxyProxy as an option, but for site-specific robustness, running a full proxy server seems a bit heavy compared to simply duplicating a set of files. Here’s what the main Prism app looks like:

[Screenshot: Prism config settings for a site-specific browser.]

Once you have Prism installed, you can hack a file named webrunner.js to intervene when links are clicked. On OSX, this can be found at /Applications/Prism.app/Contents/Resources/chrome/webrunner/content/webrunner.js.

Edit this: _domActivate : function(aEvent)

I added the following block to the start of this function:

var link = aEvent.target;
// Only intercept ordinary same-site links; external links keep their old behaviour.
if (link instanceof HTMLAnchorElement && !WebRunner._isLinkExternal(link)) {
  aEvent.preventDefault();
  // Rewrite the click so the page is fetched from a mirror instead.
  WebRunner._getBrowser().loadURI("http://example.org/mirrors/" + link.href, null, null);
}

The idea here being that we intercept clicks, and rewrite them to point to equivalent http:// URIs elsewhere in the Web. As far as this goes, it works as advertised. But what I have is far from finished… it would need some code in there to find the right mirror URLs to fetch from. Perhaps a list might be fetched on startup, or the first time a link is followed. It could also do with some work on packaging, so that this hacked version of Prism plus some actual site-specific browser config can be made into an easy-install Windows .exe or OSX .app. For a Windows installer, I am told that NSIS is a good place to start. You could also imagine a version that hid the mirrored URLs from the user’s view; since Prism has a built-in option to completely hide the URL navigation bar, I didn’t investigate this idea yet.
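For what it’s worth, the missing mirror-picking logic might look something like the following sketch. I’ve written it in Python for brevity (in Prism itself it would be JavaScript inside webrunner.js), and the mirror-list URL and its one-URL-per-line format are invented:

    import urllib.request

    MIRROR_LIST = "http://example.org/mirrors.txt"  # invented; one base URL per line

    def load_mirrors():
        """Fetch the mirror list, eg. on startup or at the first click."""
        with urllib.request.urlopen(MIRROR_LIST) as resp:
            return [line.strip() for line in resp.read().decode().splitlines()
                    if line.strip()]

    def pick_mirror(path, mirrors):
        """Return the first mirror copy of 'path' that actually answers."""
        for base in mirrors:
            candidate = base.rstrip("/") + "/" + path.lstrip("/")
            try:
                with urllib.request.urlopen(candidate, timeout=5):
                    return candidate
            except OSError:
                continue  # down or blocked; try the next mirror
        return None

    print(pick_mirror("ebooks/1342", load_mirrors()))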

OK I think I’ve written up everything I learned from the helpful folks in IRC. I hope this repays some karma. If anyone cares to explore this further, or wants to help target student projects on exploring it, please get in touch.

Wolfram Alpha Interview

An interview with Xiang Wang of Wolfram Research in China, by Han Xu (Collin Hsu) of W3China fame.

Interesting excerpt:

Q: Since Wolfram|Alpha is dubbed ‘smarter’ than traditional search engines, I wonder how much AI technique is actually employed in the system? How is inference done? What is the provenance of each fact/claim? And what if there is a disagreement? For example, how would it represent information about the Israel/Palestine area?

A: It’s much more an engineered artifact than a humanlike artificial intelligence. Some of what it does – especially in language understanding – may be similar to what humans do. But its primary objective is to do directed computations, not to act as a general intelligence. Wolfram|Alpha uses established scientific or other models as the basis for its computations. Whenever it does new computations, it’s effectively deriving new facts. About the controversial data you asked about, we deal in different ways with numerical data and particular issues. For numerical data, Wolfram|Alpha curators typically assign a range of values that are then carried through computations. For issues such as the interpretation of particular names or terms, like the Israel/Palestine area issue mentioned in your question, Wolfram|Alpha typically prompts users to choose the assumption they want. We spend considerable effort on automated testing, expert review, and checking external data that we use to ensure the results. But with trillions of pieces of data, it’s inevitable that there are still errors out there. If you ever see a problem, please report it.

What kind of Semantic Web researcher are you?

It’s hard to keep secrets in today’s increasingly interconnected, networked world. Social network megasites, mobile phones, webcams and inter-site syndication can broadcast and amplify the slightest fragment of information. Data linking and interpretation tools can put these fragments together, to paint a detailed picture of your life, both online and off.

This online richness creates offline risk. For example, if you’re going away on holiday, there are hundreds of ways in which potential thieves could learn that your home is vacant and therefore a target for crime: shared calendars, twittered comments from friends or family, flickr’d photographs. Any of these could reveal that your home and possessions sit unwatched, unguarded, presenting an easy target for criminals.

Q: What research challenge does this present to the Semantic Web community? How can we address the concern that Semantic and Social Web technology have more to offer Burglar Bill than to his victims?

A1: We need better technology for limiting the flow of data, proving a right to legitimate access to information, cross-site protocols for deleting leaked or retracted data that flows between sites, and calculating trust metrics for parties requesting data access.

A2: We need to find ways to reconnect people with their neighbours and neighbourhoods, so that homes don’t sit unwatched when their occupants are away.

ps. Dear Bill, I have my iPhone, laptop, piggy bank and camera with me…

NoTube scenario: Facebook groups and TV recommendation

Short version: If the Web knows I like a TV show, why can’t my TV be more useful?

So I have just joined a Facebook group, “Spaced Appreciation Society”:

Basic Info
Type: Common Interest – Pets & Animals
Description: If you’ve ever watched (and therefore loved) the TV series Spaced, then come and pay homage to the great Simon Pegg and Jess Stevenson. “You f’ing plum”
Contact Details
Website: http://www.spaced-out.org.uk/
Location: Meteor Street

That URL is (as with many of these groups) from a site whose primary topic is the thing the group is about. In this case, a TV show. It’s even on the public page for that group:

<tr><td class="label">Website:</td>
<td class="data"><div class="datawrap"><a href="http://www.spaced-out.org.uk/" onmousedown="return wait_for_load(this, event, function() { UntrustedLink.bootstrap($(this), &quot;&quot;, event) });" target="_blank" rel="nofollow">http://www.spaced-out.org.uk/</a></div></td></tr>

If I search Google (Yahoo BOSS might be wiser, they have APIs) with:

link:http://www.spaced-out.org.uk/ site:wikipedia.org

It finds me:

http://en.wikipedia.org/wiki/Spaced

Although “link:http://www.spaced-out.org.uk/ site:dbpedia.org” doesn’t find anything, some URL rewriting gets me to:

http://dbpedia.org/page/Spaced

“Spaced is a British television situation comedy written by and starring Simon Pegg and Jessica Stevenson, and directed by Edgar Wright. It is noted for its rapid-fire editing, frequent dropping of pop-culture references, and occasional displays of surrealism. Two series of seven episodes were broadcast in 1999 and 2001 on Channel 4.”

dbpedia-owl:author
* dbpedia:Jessica_Hynes
* dbpedia:Simon_Pegg

dbpedia-owl:completionDate
* 2001-04-13 (xsd:date)

dbpedia-owl:director
* dbpedia:Edgar_Wright

dbpedia-owl:episodenumber
* 14

dbpedia-owl:executiveproducer
* dbpedia:Humphrey_Barclay

dbpedia-owl:genre
* dbpedia:Situation_comedy

dbpedia-owl:language
* dbpedia:English_language

dbpedia-owl:network
* dbpedia:Channel_4

dbpedia-owl:producer
* dbpedia:Gareth_Edwards
* dbpedia:Nira_Park

dbpedia-owl:releaseDate
* 1999-09-24 (xsd:date)

dbpedia-owl:runtime
* 24

dbpedia-owl:starring
* dbpedia:Jessica_Hynes
* dbpedia:Simon_Pegg

There are also links from here to Cyc (but an incorrect match) and to Freebase (to http://www.freebase.com/view/en/spaced).

Unfortunately, the Wikipedia “external links” section, with the URL for http://www.spaced-out.org.uk/ (marked “official, fan-operated site”), is not part of the DBpedia RDF export; I guess because it is not in an infobox. Extracting these external-link URLs, at least for the TV, actor and movie related sections of Wikipedia, might be worthwhile. And DBpedia would be useful for identifying the relevant subset to re-extract.

This idea of using such URLs as keys into Wikipedia/dbpedia data would also work with Identi.ca groups and others. In fact the matching might be easier in Identi.ca – I’m not sure how the Facebook APIs expose this stuff.
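The mechanical part of that chain is easy enough: once a backlink search has turned a group’s website into a Wikipedia article, deriving the DBpedia URI is a simple rewrite (the backlink-search step itself is left out here):

    from urllib.parse import urlparse

    def wikipedia_to_dbpedia(article_url):
        """http://en.wikipedia.org/wiki/Spaced -> http://dbpedia.org/resource/Spaced
        (dbpedia.org/page/... is the HTML view of the same resource)."""
        title = urlparse(article_url).path.split("/wiki/", 1)[1]
        return "http://dbpedia.org/resource/" + title

    print(wikipedia_to_dbpedia("http://en.wikipedia.org/wiki/Spaced"))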

Anyway, if a show is about to be broadcast that includes eg. an interview with dbpedia:Jessica_Hynes or dbpedia:Simon_Pegg I’d like to hear about it.

So… is there any way I can use BBC’s /programmes to get upcoming information about who will be on the radio or telly, in a way that could be matched against dbpedia URIs?
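One way to start finding out: /programmes pages can, I believe, be fetched as RDF by appending ‘.rdf’ to the URL. Whether that RDF mentions DBpedia URIs at all is exactly what I don’t know, so this sketch just fetches a programme description and greps it for dbpedia.org links (the programme ID is a placeholder, not a real broadcast):

    import re
    import urllib.request

    def dbpedia_links(pid):
        """Fetch a programme's RDF description and grep for DBpedia URIs."""
        url = "http://www.bbc.co.uk/programmes/%s.rdf" % pid
        with urllib.request.urlopen(url) as resp:
            doc = resp.read().decode("utf-8", "replace")
        return sorted(set(re.findall(r'http://dbpedia\.org/resource/[^"<\s]+', doc)))

    print(dbpedia_links("b00abcde"))  # placeholder programme ID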

Edit: I should’ve mentioned that Facebook in particular also has a more explicit “is a fan of” construct, with Products, Celebs, TV shows and Stores as types of thing you can be a fan of. Furthermore, these show up on your public page; eg. here’s mine. I’m certainly interested in using that data, but also in a model that uses general groups, since it is applicable to other sites that allow a group to indicate itself with a topical URL.