WOT in RDFa?

(This post is written in RDFa…)

To the best of my knowledge, Ludovic Hirlimann’s PGP fingerprint is 6EFBD26FC7A212B2E093 B9E868F358F6C139647C. You might also be interested in his photos on flickr, or his workplace, Mozilla Messaging. The GPG key details were checked over a Skype video call with me, Ludo and Kaare A. Larsen.

This blog post isn’t signed, the URIs it references don’t use SSL, and the image could be switched by evildoers at any time! But the question’s worth asking: is this kind of scruffy key info useful, if there’s enough of it? If I wrote it somehow in Thunderbird’s editor instead, would it be easier to sign? Will 99.9% of humans ever know enough of what’s going on to understand what signing a bunch of complex markup means?

For earlier discussion of this kind of thing, see Joseph Reagle’s Key-free Trust piece (“Does Google Show How the Semantic Web Could Replace Public Key Infrastructure?”). It’s more PKI-free trust than PK-free.

My ’70s Schoolin’ (in RDFa)

I went to Hamsey Green school in the 1970s.

Looking in the UK Govt datasets, I see it is listed there with a homepage of ‘http://www.hamsey-green-infant.surrey.sch.uk’ (which doesn’t seem to work).

Some queries I’m trying via the SPARQL endpoint (I’ll update this post if I make them work…)

First, a general query from which I found the URL manually…

select distinct ?x ?y where { ?x <http://education.data.gov.uk/def/school/websiteAddress> ?y . }

Then I can go back into the data, and find other properties of the school:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
select DISTINCT ?x ?p ?z WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

Results in json

How to make this presentable? I can’t get output=html to work, but if I run this ‘construct’ query it creates a simple flat RDF document:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
CONSTRUCT {
 ?x ?p ?z .
}
WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

So, where are we here? We see two RDF datasets about the same school. One is the simple claim that I attended the school at some time in the past (1976-1978, in fact). The other describes many of its current attributes, most of which may be different now from in the past. In my sample RDFa, I used the most popular Web link for the school to represent it; in the Edubase government data, it has a Web site address, but that seems not to be current.

Assuming we’d used the same URIs for the school’s homepage (or indeed for the school itself) then these bits of data could be joined.

Perhaps a more compelling example of data linking would be to mix this schools data with something like MySociety’s excellent interactive travel maps? Still, the example above shows that basic “find people who went to my school” queries should be very possible…

WordPress trust syndication revisited: F2F plugin

This is a followup to my Syndicating trust? Mediawiki, WordPress and OpenID post. I now have a simple implementation that exports data from WordPress: the F2F plugin. Also some experiments with consuming aggregates of this information from multiple sources.

FOAF has always had a bias towards describing social things that are shown rather than merely stated; this is particularly so in matters of trust. One way of showing basic confidence in others, is by accepting their comments on your blog or Web site. F2F is an experiment in syndicating information about these kinds of everyday public events. With F2F, others can share and re-use this sort of information too; or deal with it in aggregate to spread the risk and bring more evidence into their trust-related decisions. Or they might just use it to find interesting people’s blogs.

OpenID is a technology that lets people authenticate by showing they control some URL. WordPress blogs that use the OpenID plugin slowly accumulate a catalogue of URLs when people leave comments that are approved or rejected. In my previous post I showed how I was using the list of approved OpenIDs from my blog to help configure the administrative groups on the FOAF wiki.

This may all raise more questions than it answers. What level of detail is appropriate? Are numbers useful, or just lists? In what circumstances is it sensible or risky to merge such data? Is there a reasonable use for both ‘accept’ lists and ‘unaccept’ lists? What can we do with a list of OpenID URLs once we’ve got it? How do we know when two bits of trust ‘evidence’ actually share a common source? How do we find this information from the homepage of a blog?

If you install the F2F plugin (and have been using the OpenID plugin long enough to have accumulated a database table of OpenIDs associated with submitted comments), you can experiment with this. Basically it will generate HTML in RDFa format describing a list of people. See the F2F Wiki page for details and examples.

The script is pretty raw, but today it all improved a fair bit with help from Ed Summers, Daniel Krech and Morten Frederiksen. Ed and Daniel helped me get started with consuming this RDFa and SPARQL in the latest version of the rdflib Python library. Morten rewrote my initial nasty hack, so that it used WordPress Shortcodes instead of hardcoding a URL path. This means that any page containing a certain string – f2f in chunky brackets – will get the OpenID list added to it. I’ll try that now, right here in this post. If it works, you’ll get a list of URLs below. Also thanks to Gerald Oskoboiny for discussions on this and reputation-related aggregation ideas; see his page on reputation and trust for lots more related ideas and sites. See also Peter Williams’ feedback on the foaf-dev list.

Next steps? I’d be happy to have a few more installations of this, to get some testbed data. Ideally from an overlapping community so the datasets are linked, though that’s not essential. Ed has a copy installed currently too. I’ll also update the scripts I use to manage the FOAF MediaWiki admin groups, to load data from RDFa blogs; mine and others if people volunteer relevant data. It would be great to have exports from other software too, eg. Drupal or MediaWiki.

[f2f]

WordPress, TinyMCE and RDFa editors

I’m writing this in WordPress’s ‘Visual’ mode WYSIWYG HTML editor, and thinking “how could it be improved to support RDFa?”

Well, let’s think. Humm. In RDFa, every section of text is always ‘about’ something, and then has typed links or properties associated with that thing. The editor currently has icons ‘B’ for bold, ‘I’ for italics, etc., and thingies for bulleted lists, paragraphs and fonts. I’m not sure how easily it would handle the nesting of full RDFa, but some common case could be a start: a paragraph that is about some specific thing, and within which the topical focus doesn’t shift.

So, here is a paragraph about me. How to say it’s really about me? There could be a ‘typeof’ attribute on the paragraph element, with value foaf:Person, and an ‘about’ attribute with a URI for me, eg. http://danbri.org/foaf.rdf#danbri

In this UI, I just type textual paragraphs and then double newlines are interpreted by the publishing engine as separate HTML paragraphs. I don’t see any paragraph-level or even post-level properties in the user interface where an ‘about’ URI could be stored. But I’m sure something could be hacked. Now what about properties and links? Let’s see…

I’d like to link to TinyMCE now, since that is the HTML editor that is embedded in WordPress. So here is a normal link to TinyMCE. To make it, I searched for the TinyMCE homepage, copied its URL into the copy/paste buffer, then pressed the ‘add link’ icon here after highlighting the text I wanted to turn into a link.

It gave me a little HTML/CSS popup window with some options – link URL, target window, link title and (CSS) class; I’ve posted a screenshot below. This bit of UI seems like it could be easily extended for typed links. For example, if I were adding a link to a paragraph that was about a Film, and I was linking to (a page about) its director or an actor, we might want to express that using an annotation on the link. Due to the indirection (we’re trying to say that the link is to a page whose primary topic is the director, etc.), this might complicate our markup. I’ll come back to that later.

tinymce link edit screenshot

I wonder if there’s any mention of RDFa on the TinyMCE site? Nope. Nor even any mention of Microformats.

Perhaps the simplest thing to try to build into TinyMCE would be for a situation where a type on the link expressed a relationship between the topic of the blog post (or paragraph), and some kind of document.

For example, a paragraph about me might want to annotate a link to my old school’s homepage (ie. http://www.westergate.w-sussex.sch.uk/) with a rel="foaf:schoolHomepage" property.

So our target RDFa would be something like (assuming this markup is escaped and displayed nicely):

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

I'm Dan, and I went to <a rel="foaf:schoolHomepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a>

</p>

See recent discussion on the foaf-dev list where we contrast this idiom with one that uses a relationship (eg. ‘school’) that directly links a person to a school, rather than to the school’s homepage. Toby made some example RDFa output of the latter style, in response to a debate about whether the foaf:schoolHomepage idiom is an ‘anti-pattern’. Excerpt:

<p about="http://danbri.org/foaf.rdf#danbri" typeof="foaf:Person" xmlns:foaf="http://xmlns.com/foaf/0.1/">

My name is <span property="foaf:name">Dan Brickley</span> and I spent much of the '80s at <span rel="foaf:school"><a typeof="foaf:Organization" property="foaf:name" rel="foaf:homepage" href="http://www.westergate.w-sussex.sch.uk/">Westergate School</a></span>.

</p>

So there are some differences between these two idioms. From the RDF side they tell you similar things: the school I went to. The latter idiom is more expressive, and allows additional information to be expressed about the school, without mixing it up with the school’s homepage. The former simply says “this link is to the homepage of Dan’s school”; and presumably defers to the school site or elsewhere for more details.
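
Reading off the triples, the two idioms say roughly the following (a Turtle sketch of my own; note that foaf:school here is the proposed term from that discussion, not an established FOAF property):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Idiom 1, foaf:schoolHomepage: one triple, linking the person to a page.
<http://danbri.org/foaf.rdf#danbri> a foaf:Person ;
    foaf:schoolHomepage <http://www.westergate.w-sussex.sch.uk/> .

# Idiom 2, foaf:school: the school is a resource in its own right,
# which can carry a name, homepage and anything else we learn about it.
<http://danbri.org/foaf.rdf#danbri> a foaf:Person ;
    foaf:name "Dan Brickley" ;
    foaf:school [
        a foaf:Organization ;
        foaf:name "Westergate School" ;
        foaf:homepage <http://www.westergate.w-sussex.sch.uk/>
    ] .
```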

I’m interested to learn more about how these expressivity tradeoffs relate to the possibilities for visual / graphical editors. Could TinyMCE be easily extended to support either idiom? How would the GUI recognise a markup pattern suitable for editing in either style? Can we avoid having to express relationships indirectly, ie. via pages? Would it be a big job to get some basic RDFa facilities into WordPress via TinyMCE?

For related earlier work, see the 2003 SWAD-Europe report on Semantic blogging from the team then at HP Labs Bristol.

Syndicating trust? Mediawiki, WordPress and OpenID

Fancy title but simple code. A periodic update script is setting user/group membership rules on the FOAF wiki based on a list of trusted (for this purpose) OpenIDs exported from a nearby blog. If you’ve commented on the blog using OpenID and it was accepted, this means you can also perform some admin actions (page deletes, moves, blocking spammers etc.) on the FOAF wiki without any additional fuss.

Both WordPress blogs and Mediawiki wikis have some support for OpenID logins.

The FOAF wiki until recently only had one Sysop and Bureaucrat account (a bureaucrat has the same privileges as a Sysop, plus the ability to create new bureaucrat accounts). So I’ve begun an experiment exploring the idea of pre-approving certain OpenIDs for bureaucrat activities. For now, I take a list of OpenIDs from my own blog; these appear to be just the good guys, but this might be because only real humans have commented on my blog via OpenID. With a bit of tweaking I’m sure I could write SQL to select out only OpenIDs associated with posts or comments I’ve accepted as non-spammy, though.
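
That SQL needn’t be complicated. A sketch of the idea, run here against an in-memory SQLite stand-in; the table and column names are illustrative, not the real WordPress/OpenID plugin schema:

```python
import sqlite3

# Illustrative only: a simplified stand-in for the WordPress tables.
# The real OpenID plugin schema differs; treat these names as assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE comments (comment_id INTEGER, openid_url TEXT, approved INTEGER);
INSERT INTO comments VALUES
  (1, 'http://example.org/alice', 1),
  (2, 'http://example.org/mallory', 0),
  (3, 'http://example.org/alice', 1),
  (4, 'http://example.org/bob', 1);
""")

# Select only OpenIDs whose comments were accepted as non-spammy.
rows = db.execute("""
  SELECT DISTINCT openid_url FROM comments
  WHERE approved = 1 ORDER BY openid_url
""").fetchall()
approved = [url for (url,) in rows]
print(approved)
```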

So now there’s a script I can run (thanks tobyink and others in #swig IRC for help) which compares an externally supplied list of OpenID URIs with those OpenIDs known to the wiki, and upgrades the status of any overlaps to be bureaucrats. Currently the ‘syndication’ is trivial since the sites are on the same machine, and the UI is minimal; I haven’t figured out how best to convey this notion of ‘pre-approved upgrade’ to the people I’m putting in an admin group. Quite reasonably they might object to being misrepresented as contributors; who knows.
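
The heart of that script is just a set intersection, plus some URL normalization; a minimal Python sketch (the normalization here is deliberately cruder than real OpenID URL normalization):

```python
def normalize(openid):
    """Minimal OpenID URL normalization (illustrative: real OpenID
    normalization also handles scheme, case and fragment rules)."""
    return openid.rstrip("/").lower()

def to_promote(trusted_openids, wiki_openids):
    """Return wiki-known OpenIDs that also appear on the trusted list."""
    trusted = {normalize(u) for u in trusted_openids}
    return sorted(u for u in wiki_openids if normalize(u) in trusted)

blog_list = ["http://example.org/alice/", "http://example.org/bob"]
wiki_list = ["http://example.org/Alice", "http://example.org/carol"]
print(to_promote(blog_list, wiki_list))  # only alice overlaps, after normalization
```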

But all that aside, take a look and have a think. This kind of approach has a lot going for it. We will have all kinds of lists of people, groups of people, and in many cases we’ll know their OpenIDs. So why not pool what we know? If a blog or wiki has information about an OpenID that shows it is somehow trustworthy, or at least not obviously a spammer, there’s every reason to make notations (eg. FOAF/RDFa) that allow other such sites to harvest and integrate that data…

See also Dan Connolly’s DIG blog post on this, and the current list of Bureaucrats on the FOAF Wiki (and associated documentation). If your name’s on the list, it just means your OpenID was on a pre-approved list of folk who I trust based on their interactions with my own blog. I’d love to add more sources here and make it genuinely communal.

This is all part of the process of getting FOAF moving again. The brains of FOAF is in the IssueTracker page, and since the site was damaged by spammers and hackers recently I’m trying to make sure we have a happy / wholesome environment for maintaining shared documents. And that’s more than I can do as a solo admin, hence this design for opening things up…

Remote remotes

I’ve just closed the loop on last weekend’s XMPP / Apple Remote hack, using Strophe.js, a library that extends XMPP into normal Web pages. I hope I’ll find some way to use this in the NoTube project (eg. wired up to Web-based video playing in OpenSocial apps), but even if not it has been a useful learning experience. See this screenshot of a live HTML page, receiving and displaying remotely streamed events (green blob: button clicked; grey blob: button released). It doesn’t control any video yet, but you get the idea I hope.

Remote apple remote HTML demo
Remote apple remote HTML demo, screenshot showing a picture of handheld apple remote with a grey blob over the play/pause button, indicating a mouse up event. Also shows debug text in html indicating ButtonUpEvent: PLPZ.

This webclient needs the JID and password details for an XMPP account, and I think these need to be from the same HTTP server the HTML is published on. It works using BOSH or other tricks, but for now I’ve not delved into those details and options. Source is in the Buttons area of the FOAF svn: webclient. I made a set of images, for each button in combination with button-press (‘down’), button-release (‘up’). I’m running my own ejabberd and using an account ‘buttons@foaf.tv’ on the foaf.tv domain. I also use generic XMPP IM accounts on Google Talk, which work fine although I read recently that very chatty use of such services can result in data rates being reduced.

To send local Apple Remote events to such a client, you need a bit of code running on an OSX machine. I’ve done this in a mix of C and Ruby: imremoted.c (binary) to talk to the remote, and the script buttonhole_surfer.rb to re-broadcast the events. The ruby code uses Switchboard and by default loads account credentials from ~/.switchboardrc.

I’ve done a few tests with this setup. It is pretty responsive considering how much indirection is involved, but the demo UI I made could be prettier. The + and - buttons behave differently to the left and right (and menu and play/pause) buttons; only + and - send an event immediately. The others wait until the key is released, then send a pair of events. The other keys except for play/pause will also forget what’s happening unless you act quickly. This seems to be a hardware limitation. Apparently Apple are about to ship an updated $20 remote; I hope this aspect of the design is reconsidered, as it limits the UI options for code using these remotes.

I also tried it using two browsers side by side on the same laptop; and two laptops side by side. The events get broadcasted just fine. There is a lot more thinking to do re serious architecture, where passwords and credentials are stored, etc. But XMPP continues to look like a very interesting route.

Finally, why would anyone bother installing compiled C code, Ruby (plus XMPP libraries), their own Jabber server, and so on? Well hopefully, the work can be divided up. Not everyone installs a Jabber server. My thinking is that we can bundle a collection of TV and SPARQL XMPP functionality in a single install, such that local remotes can be used on the network, but also local software (eg. XBMC/Plex/Boxee) can also be exposed to the wider network – whether it’s XMPP Javascript running inside a Web page as shown here, or an iPhone or a multi-touch table. Each will offer different interaction possibilities, but they can chat away to each other using a common link, and common RDF vocabularies (an area we’re working on in NoTube). If some common micro-protocols over XMPP (sending clicks or sending commands or doing RDF queries) can support compelling functionality, then installing a ‘buttons adaptor’ is something you might do once, with multiple benefits. But for now, apart from the JQbus piece, the protocols are still vapourware. First I wanted to get the basic plumbing in place.

Update: I re-discovered a useful ‘which bosh server do you need?’ writeup, which reminds me that there are basically two kinds of BOSH software offering: those that are built into some existing Jabber/XMPP server (like the ejabberd installation I’m using on foaf.tv), and those that are stand-alone connection managers that proxy traffic into the wider XMPP network. In terms of the current experiment, it means the event stream can (within the XMPP universe) come from addresses other than the host running the Web app. So I should also try installing Punjab. Perhaps it will also let webapps served from other hosts (such as opensocial containers) talk to it via JSON tricks? So far I have only managed to serve working Buttons/Strophe HTML from the same host as my Jabber server, ie. foaf.tv. I’m not sure how feasible the cross-domain option is.

Update x2: Three different people have now mentioned opensoundcontrol to me as something similar, at least on a LAN; it clearly deserves some investigation.

Streaming Apple Events over XMPP

I’ve just posted a script that will re-route the OSX Apple Remote event stream out across XMPP using the Switchboard Ruby library, streaming click-down and click-up events from the device out to any endpoint identified by a Jabber/XMPP JID (i.e. Jabber ID). In my case, I’m connecting to XMPP as the user xmpp:buttons@foaf.tv, who is buddies with xmpp:bob.notube@gmail.com, ie. they are on each other’s Jabber rosters already. Currently I simply send a textual message with the button-press code; a real app would probably use an XMPP IQ stanza instead, which is oriented more towards machines than human readers.

The nice thing about this setup is that I can log in on another laptop to Gmail, run the Javascript Gmail / Google Talk chat UI, and hear a ‘beep’ whenever the event/message arrives in my browser. This is handy for informally testing the lagginess of the connection, which in turn is critical when designing remote control protocols: how chatty should they be? How much smarts should go into the client? Which bit of the system really understands what the user is doing? Informally, the XMPP events seem pretty snappy, but I’d prefer to see some real statistics to understand what a UI might risk relying on.

What I’d like to do now is get a Strophe Javascript client running. This will attach to my Jabber server and allow these events to show up in HTML/Javascript apps…

Here’s sample output of the script (local copy but it looks the same remotely), in which I press and release quickly every button in turn:

Cornercase:osx danbri$ ./buttonhole_surfer.rb
starting event loop.

=> Switchboard started.
ButtonDownEvent: PLUS (0x1d)
ButtonUpEvent: PLUS (0x1d)
ButtonDownEvent: MINU (0x1e)
ButtonUpEvent: MINU (0x1e)
ButtonDownEvent: LEFT (0x17)
ButtonUpEvent: LEFT (0x17)
ButtonDownEvent: RIGH (0x16)
ButtonUpEvent: RIGH (0x16)
ButtonDownEvent: PLPZ (0x15)
ButtonUpEvent: PLPZ (0x15)
ButtonDownEvent: MENU (0x14)
ButtonUpEvent: MENU (0x14)
^C
Shutdown initiated.
Waiting for shutdown to complete.
Shutdown initiated.
Waiting for shutdown to complete.
Cornercase:osx danbri$
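
Since the current setup just sends these codes as textual messages, a receiving client has to parse them back into structured events; a small Python sketch of that step:

```python
import re

# Parse lines like "ButtonDownEvent: PLUS (0x1d)" from the transcript
# above into (direction, button, code) tuples.
EVENT_RE = re.compile(r"Button(Down|Up)Event: (\w+) \((0x[0-9a-f]+)\)")

def parse_event(line):
    m = EVENT_RE.match(line.strip())
    if not m:
        return None  # not a button event (e.g. Switchboard status lines)
    direction, button, code = m.groups()
    return (direction.lower(), button, int(code, 16))

print(parse_event("ButtonDownEvent: PLPZ (0x15)"))
```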

Apple Remote events – a quick howto

Anyone who has recently bought an Apple computer probably has one or more Apple Remotes.

I have been learning how to access them. Conclusion: iremoted does 95% of what you probably need, and the discussion over on cocoadev.com tells you more than you probably wanted to know. My experiments are written up in a corner of the FOAF wiki, and I have a large collection of related links stored under the ‘buttons’ tag at delicious.com.

Summary: If you run iremoted, it calls OSX APIs and gets notified whenever button presses on the Apple Remote are detected.

There are 6 buttons: a menu button, a play/pause main button, and four others surrounding the main button. To the left and right there are ‘back’ and ‘fwd’; above and below they are labelled ‘+’ and ‘-’.

Interestingly from a protocol and UI design view, those last buttons behave differently. When you press and release the other 4 buttons, the 2 events are not reported until after you release; but with ‘+’ and ‘-‘, the initial press is reported immediately. Furthermore, only ‘+’ and ‘-‘ do anything interesting if you press for longer than a second or so; the others result in your action being silently dropped, even when you release the button.

I am looking at this since I am revisiting my old Jqbus FOAF experiments with XMPP and SPARQL for data access; but this time building in some notion of behaviour too, hence the new name: Buttons. The idea is to use XMPP as a universal communication environment between remote controls and remote-controllable things, initially in the world around TV. So I am exploring adaptors for existing media centres and media players (Boxee, XBMC, iTunes etc).

There are various levels of abstraction at which XMPP could be used as a communications pipe: it could stream clicks directly, by simply reflecting the stream of Infra Red events, translating them into IQ messages sent to JID-identified XMPP accounts. Or we could try to interpret them closer to their source, and send more meaningful events. For example, if the “+” is held for 3 seconds, do we wait until it is released before sending a “Maximize volume” message out across the ‘net? Other approaches (eg. as suggested on the xmpp list, and in some accessibility work) include exporting bits of UI to the remote. Or in the case of really dumb remotes with no screen, at least some sort of invisible proxy/wrapper that tries to avoid spraying every user click out across XMPP.
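
That ‘interpret closer to the source’ option might look something like the following Python sketch, where the hold threshold and the ‘volume-max’ command name are illustrative inventions, not part of any existing protocol:

```python
def interpret(events, hold_threshold=3.0):
    """Turn a stream of (timestamp, direction, button) tuples into
    higher-level commands. Illustrative policy only: a '+' held past
    the threshold becomes one 'volume-max' command instead of a click.
    """
    commands, pressed = [], {}
    for t, direction, button in events:
        if direction == "down":
            pressed[button] = t
        elif button in pressed:
            held = t - pressed.pop(button)
            if button == "PLUS" and held >= hold_threshold:
                commands.append("volume-max")
            else:
                commands.append("click:" + button)
    return commands

print(interpret([(0.0, "down", "PLUS"), (3.5, "up", "PLUS"),
                 (4.0, "down", "MENU"), (4.1, "up", "MENU")]))
```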

Anyway, I thought this worth a writeup. Here’s a link to my copy of iremoted, iremoted_buttons.c. The main change (apart from trivial syntax edits for its output) is the use of kIOHIDOptionsTypeSeizeDevice. This is pretty important, as it stops other processes on your mac receiving the same events; eg. FrontRow taking over the laptop UI whenever ‘menu’ is pressed, or the local volume being changed by ‘+’ and ‘-‘:

    ioReturnValue = (*hidDeviceInterface)->open(hidDeviceInterface, kIOHIDOptionsTypeSeizeDevice);

Local Video for Local People

OK it’s all Google stuff, but still good to see. Go to Google Maps, My Maps, to find ‘Videos from YouTube’ listed. Here’s where I used to live (Bristol, UK) and where I live now (Amsterdam, The Netherlands). Here’s a promo film of some nearby art installations from ArtZuid, who even have a page in English. I wouldn’t have found the video or the nearby links except through the map overlay. I don’t know exactly how they’re geotagging the videos; at first I couldn’t see an option under ‘My Videos’ in YouTube, but it turns out you can add a map link under ‘My Videos’ / ‘Edit Video’. I made some investigations into similar issues (videos on maps) while at Joost; see the brief mention in my Fundamentos Web slides from a couple of years ago.
Oh, nearly forgot to mention: zooming out to get a Europe or World-wide view is quite striking too.

Quick clarification on SPARQL extensions and “Lock-in”

It’s clear from discussion bouncing around IRC, Twitter, Skype and elsewhere that “Lock-in” isn’t a phrase to use lightly.

So I post this to make myself absolutely clear. A few days ago I mentioned in IRC a concern that newcomers to SPARQL and RDF databases might not appreciate which SPARQL extensions are widely implemented, and which are the specialist offerings of the system they happen to be using. I mentioned OpenLink’s Virtuoso in particular as a SPARQL implementation that had a rich and powerful set of extensions.

Since it seems there is some risk I might be mis-interpreted as suggesting OpenLink are actively trying to “do a Microsoft” and trap users in some proprietary pseudo-SPARQL, I’ll state what I took to be obvious background knowledge: OpenLink is a company who owe their success to the promotion of cross-vendor database portability, they have been tireless advocates of a standards-based Semantic Web, and they’re active in proposing extensions to W3C for standardisation. So – no criticism of OpenLink intended. None at all.

All I think we need here, are a few utilities that help developers understand the nature of the various SPARQL dialects and the potential costs/benefits of using them. Perhaps an online validator, alongside those for RDF/XML, RDFa, Turtle etc. Such a validator might usefully list the extensions used in some query, and give pointers (perhaps into a wiki) where the status of the various extensions constructs can be discussed and documented.
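
As a toy illustration of that validator idea, even a naive keyword scan can flag constructs outside the SPARQL 1.0 grammar. A real tool would need a proper parser, and both keyword lists below are incomplete sketches:

```python
import re

# SPARQL 1.0 keywords (partial list, illustrative).
SPARQL10 = {"BASE", "PREFIX", "SELECT", "CONSTRUCT", "DESCRIBE", "ASK",
            "DISTINCT", "REDUCED", "FROM", "NAMED", "WHERE", "ORDER",
            "BY", "ASC", "DESC", "LIMIT", "OFFSET", "OPTIONAL", "GRAPH",
            "UNION", "FILTER", "A", "STR", "LANG", "LANGMATCHES",
            "DATATYPE", "BOUND", "SAMETERM", "ISIRI", "ISURI",
            "ISBLANK", "ISLITERAL", "REGEX", "TRUE", "FALSE"}

# Commonly implemented extensions, pre-standardisation (also partial).
KNOWN_EXTENSIONS = {"COUNT", "SUM", "AVG", "GROUP", "HAVING", "INSERT",
                    "DELETE", "EXISTS"}

def extensions_used(query):
    """Naively list extension keywords appearing in a query string."""
    words = set(re.findall(r"[A-Za-z_]+", query))
    return sorted(w.upper() for w in words
                  if w.upper() in KNOWN_EXTENSIONS and w.upper() not in SPARQL10)

q = "SELECT ?s COUNT(?o) WHERE { ?s ?p ?o } GROUP BY ?s"
print(extensions_used(q))
```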

Since SPARQL is such a young language, it lacks a lot of things that are taken for granted in the SQL world, and so using rich custom extensions when available is for many developers a sensible choice. My only concern is that it must be a choice, and one entered into consciously.

Getting started with Mozilla Jetpack for Thunderbird (on OSX)

A few weeks ago, I started to experiment with Mozilla’s new Jetpack extension model when it became available for Thunderbird. Revisiting the idea today, I realise I’d forgotten the basic setup details, so am recording them here for future reference.

I found I had to download the source from the Mercurial Web interface, rather than use pre-prepared XPI installers. This may have improved by the time you read this. I also learned (from Standard9 in #jetpack IRC) that I need asuth’s repository, rather than the main one. Again, things move quickly, don’t assume this is true forever.

Here is what worked for me, on OSX.

1. Grab a .zip from the Jetpack repo, and unpack it locally on a machine that has Thunderbird installed.

2. Edit extensions/install.rdf and make sure the em:maxVersion in the Thunderbird section matches your version of Thunderbird. In mine I updated it to say <em:maxVersion>3.0b4</em:maxVersion> (instead of 3.0b4pre).

3. See the README in the jetpack filetree for installation. With Thunderbird closed, I ran "python manage.py install --app=thunderbird" and found Jetpack installed fine.

4. Run Thunderbird, you should see an about:jetpack tab, and corresponding options in the Tools menu.

This was enough to get started. See discussion on visophyte.org for some example code.

After installation, you can use the about:jetpack windows to load, reload and delete Jetpacks from URL.

So, why would you bother doing all this? Jetpack provides a simple way of extending an email client using Web technology.

In my current (unfinished!) experiment, for example, I’m looking at making a sidebar that shows information (photo, blog etc.) about the sender of the currently-viewed email. And I figured that if I blogged this HOWTO, someone more familiar with ajax, jquery etc might care to help with wiring this up to the Google Social Graph JSON API, so we can use FOAF and XFN to provide more contextual information around incoming mail…
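
For the Social Graph API piece, the lookup call is just a URL with a few query parameters; a Python sketch (the parameter names are from the API docs as I recall them, so verify against the reference before relying on this):

```python
from urllib.parse import urlencode

def social_graph_url(identity_url):
    """Build a Google Social Graph API lookup URL for an identity URL.
    Parameter names (q, edo, fme, pretty) are assumptions from memory."""
    base = "http://socialgraph.apis.google.com/lookup"
    params = {"q": identity_url, "edo": 1, "fme": 1, "pretty": 1}
    return base + "?" + urlencode(params)

print(social_graph_url("http://danbri.org/"))
```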

(All of this assumes you are running Thunderbird 3.0b4.)

Mirrors and Prisms: robust site-specific browsers

Mozilla (amongst others, see Chris Messina’s writeup of the trend, also Matt’s) have been exploring site-specific browsers through their Prism project. These combine aspects of the Web and Desktop environments, allowing you to have a desktop app tuned for browsing just one specific Web site. Prism is an application which, when run, will generate new per-site desktop applications. It does not yet have a fancy packaging/installer, so users will need to install Prism plus the site files separately.

I have started to look at Prism as a basis for accessing robust, mirrored sites, so that a single point of failure (or censorship) might be avoided. With a lot of help from Matt and others in #prism IRC chat, I have something almost working. The idea is simple: hack Prism so that the running browser code intercepts clicks and (based on some as-yet-undefined logic and preferences) gets the page from a list of mirrors, which might also be fetched dynamically from the ‘net.

I should also mention that one motivation here is for anti-censorship tools, to give users an easy way to access sites which might be blocked by their IP address or URL otherwise. I looked at FoxyProxy as an option but for site-specific robustness, running a full proxy server seems a bit heavy, compared to simply duplicating a set of files. Here’s what the main Prism app looks like:

prism-gutenberg
Screenshot showing Prism config settings for a site-specific browser.

Once you have Prism installed, you can hack a file named webrunner.js to intervene when links are clicked. In OSX, this can be found as /Applications/Prism.app/Contents/Resources/chrome/webrunner/content/webrunner.js.

Edit this: _domActivate : function(aEvent)

I added the following block to the start of this function:

var link = aEvent.target;
if (link instanceof HTMLAnchorElement && !WebRunner._isLinkExternal(link)) {
  aEvent.preventDefault();
  WebRunner._getBrowser().loadURI("http://example.org/mirrors/" + link.href, null, null);
}

The idea here is that we intercept clicks and rewrite them to point to equivalent http:// URIs elsewhere in the Web. As far as this goes, it works as advertised. But what I have is far from a finished tool: it would need some code to find the right mirror URLs to fetch from. Perhaps a list might be fetched on startup, or the first time a link is followed. It could also do with some work on packaging, so that this hacked version of Prism plus some actual site-specific browser config can be made into an easy-install Windows .exe or OSX .app. For a Windows installer, I am told that NSIS is a good place to start. You could also imagine a version that hid the mirrored URLs from the user’s view. Since Prism has a built-in option to completely hide the URL navigation bar, I didn’t investigate this idea yet.
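
The missing ‘find the right mirror URLs’ logic could start as simply as mapping a clicked URL onto each mirror prefix. A Python sketch of that step (it percent-encodes the target, which the raw concatenation in the webrunner.js hack skips, and the mirror layout is an assumption):

```python
from urllib.parse import quote

def mirror_candidates(href, mirrors):
    """Given a clicked URL and a list of mirror prefixes, return the
    candidate URLs to try in order. Assumes each mirror serves the
    original URL appended (percent-encoded) to its prefix."""
    return [m + quote(href, safe="") for m in mirrors]

mirrors = ["http://example.org/mirrors/", "http://example.net/cache/"]
print(mirror_candidates("http://www.gutenberg.org/ebooks/1342", mirrors))
```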

OK I think I’ve written up everything I learned from the helpful folks in IRC. I hope this repays some karma. If anyone cares to explore this further, or wants to help target student projects on exploring it, please get in touch.

Wolfram Alpha Interview

An interview with Xiang Wang of Wolfram Research in China, by Han Xu (Collin Hsu) of W3China fame.

Interesting excerpt:

Q: Since Wolfram|Alpha is dubbed ‘smarter’ than traditional search engines, I wonder how much AI techniques are actually employed in the system? How is inference done? What is the provenance of each fact/claim? And what if there is a disagreement? For example, how it would represent information about Israel/Palestine area?

A: It’s much more an engineered artifact than a humanlike artificial intelligence. Some of what it does – especially in language understanding – may be similar to what humans do. But its primary objective is to do directed computations, not to act as a general intelligence. Wolfram|Alpha uses established scientific or other models as the basis for its computations. Whenever it does new computations, it’s effectively deriving new facts. About the controversial data you asked about, we deal in different ways with numerical data and particular issues. For numerical data, Wolfram|Alpha curators typically assign a range of values that are then carried through computations. For issues such as the interpretation of particular names or terms, like the Israel/Palestine area issue mentioned in your question, Wolfram|Alpha typically prompts users to choose the assumption they want. We spend considerable effort on automated testing, expert review, and checking external data that we use to ensure the results. But with trillions of pieces of data, it’s inevitable that there are still errors out there. If you ever see a problem, please report it.

What kind of Semantic Web researcher are you?

It’s hard to keep secrets in today’s increasingly interconnected, networked world. Social network megasites, mobile phones, webcams and inter-site syndication can broadcast and amplify the slightest fragment of information. Data linking and interpretation tools can put these fragments together, to paint a detailed picture of your life, both online and off.

This online richness creates offline risk. For example, if you’re going away on holiday, there are hundreds of ways in which potential thieves could learn that your home is vacant and therefore a target for crime: shared calendars, twittered comments from friends or family, flickr’d photographs. Any of these could reveal that your home and possessions sit unwatched, unguarded, presenting an easy target for criminals.

Q: What research challenge does this present to the Semantic Web community? How can we address the concern that Semantic and Social Web technology have more to offer Burglar Bill than to his victims?

A1: We need better technology for limiting the flow of data, proving a right to legitimate access to information, cross-site protocols for deleting leaked or retracted data that flows between sites, and calculating trust metrics for parties requesting data access.

A2: We need to find ways to reconnect people with their neighbours and neighbourhoods, so that homes don’t sit unwatched when their occupants are away.

ps. Dear Bill, I have my iphone, laptop, piggy bank and camera with me…