How to tell you’re living in the future: bacterial computers, HTML and RDF

Clue no.1. Papers like “Solving a Hamiltonian Path Problem with a bacterial computer” barely raise an eyebrow.

Clue no.2. Undergraduates did most of the work.

And the clincher, …

Clue no.3. The paper is shared nicely on the Web: it’s in HTML, under a Creative Commons license, and useful RDF can be found nearby.

From the those-crazy-eggheads dept: bacterial computers solving graph data problems. Can’t wait for the JavaScript API. Except the thing of interest here isn’t so much the mad science as what they say about how they did it. But the paper is pretty fun stuff too.

The successful design and construction of a system that enables bacterial computing also validates the experimental approach inherent in synthetic biology. We used new and existing modular parts from the Registry of Standard Biological Parts [17] and connected them using a standard assembly method [18]. We used the principle of abstraction to manage the complexity of our designs and to simplify our thinking about the parts, devices, and systems of our project. The HPP bacterial computer builds upon our previous work and upon the work of others in synthetic biology [19-21]. Perhaps the most impressive aspect of this work was that undergraduates conducted every aspect of the design, modeling, construction, testing, and data analysis.

Meanwhile, over on partsregistry.org you can read more about the bits and pieces they squished together. It’s like a biological CPAN. And in fact the analogy is being actively pursued: see openwetware.org’s work on an RDF description of the catalogue.

I grabbed an RDF file from that site and can confirm that simple queries like

select * from <SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf>  where {<http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022> ?p ?v }

and

select * from <SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf>  where {?x ?p <http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022>  }

… do navigate me around the graph that describes the pieces used in their paper.
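
If you’d rather do that walk from code instead of a query engine, here’s a minimal sketch using Python’s rdflib (assuming a local copy of the same RDF file; the part URI is the one from their paper):

# Sketch: load the Registry RDF dump and walk the graph around one part.
from rdflib import Graph, URIRef

g = Graph()
g.parse("SemanticSBOLv0.13_BioBrick_Data_v0.13.rdf", format="xml")

part = URIRef("http://sbol.bhi.washington.edu/rdf/sbol.owl#BBa_I715022")

# Everything the part points at (the first query above)...
for p, v in g.predicate_objects(part):
    print(p, v)

# ...and everything that points back at it (the second query).
for x, p in g.subject_predicates(part):
    print(x, p)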

Here’s what the HTML paper says right now,

We designed and built all the basic parts used in our experiments as BioBrick compatible parts and submitted them to the Registry of Standard Biological Parts [17]. Key basic parts and their Registry numbers are: 5′ RFP (BBa_I715022), 3′ RFP (BBa_I715023), 5′ GFP (BBa_I715019), and 3′ GFP (BBa_I715020). All basic parts were DNA sequence verified. The basic parts hixC (BBa_J44000) and Hin LVA (BBa_J31001) were used from our previous experiments [8]. The parts were assembled by the BioBrick standard assembly method [18] yielding intermediates and devices that were also submitted to the Registry. Important intermediates and devices constructed are: Edge A (BBa_S03755), Edge B (BBa_S03783), Edge C (BBa_S03784), ABC HPP construct (BBa_I715042), ACB HPP construct (BBa_I715043), and BAC HPP construct (BBa_I715044). We previously built the Hin-LVA expression cassette (BBa_S03536) [8].

How nice to have a scholarly publication in HTML format, open-access, published under a Creative Commons license, and backed by machine-processable RDF data. Never mind undergrads getting bacteria to solve NP-hard graph problems; it’s the modern publishing and collaboration machinery described here that makes me feel I’m living in the future…

(World Wide Web – Let’s Share What We Know…)

ps. thanks to Dan Connolly for nudging me to get this shared with the planetrdf.com-reading community. Maybe it’ll nudge Kendall into posting something too.

Syndicating trust? Mediawiki, WordPress and OpenID

Fancy title but simple code. A periodic update script is setting user/group membership rules on the FOAF wiki based on a list of trusted (for this purpose) OpenIDs exported from a nearby blog. If you’ve commented on the blog using OpenID and it was accepted, this means you can also perform some admin actions (page deletes, moves, blocking spammers etc.) on the FOAF wiki without any additional fuss.

Both WordPress blogs and Mediawiki wikis have some support for OpenID logins.

The FOAF wiki until recently only had one Sysop and Bureaucrat account (a bureaucrat has the same privileges as a Sysop, plus the ability to create new bureaucrat accounts). So I’ve begun an experiment exploring the idea of pre-approving certain OpenIDs for bureaucrat activities. For now, I take a list of OpenIDs from my own blog; these appear to be just the good guys, but that might be because only real humans have commented on my blog via OpenID. With a bit of tweaking I’m sure I could write SQL to select out only OpenIDs associated with posts or comments I’ve accepted as non-spammy, though.

So now there’s a script I can run (thanks tobyink and others in #swig IRC for help) which compares an externally supplied list of OpenID URIs with those OpenIDs known to the wiki, and upgrades the status of any overlaps to be bureaucrats. Currently the ‘syndication’ is trivial since the sites are on the same machine, and the UI is minimal; I haven’t figured out how best to convey this notion of ‘pre-approved upgrade’ to the people I’m putting in an admin group. Quite reasonably they might object to being misrepresented as contributors; who knows.
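
The heart of it is nothing fancier than a set intersection. A sketch in Python (the file names and the upgrade step are made up for illustration; the real script talks to the blog and wiki databases):

# Sketch of the syndication logic: intersect the blog's trusted OpenIDs
# with the wiki's known OpenIDs, then upgrade the overlap.
def read_openids(path):
    """One OpenID URI per line; skip blanks and comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

trusted = read_openids("blog_trusted_openids.txt")  # exported from the blog
known = read_openids("wiki_known_openids.txt")      # users the wiki has seen

for openid in sorted(trusted & known):
    # The real script updates Mediawiki's user groups at this point.
    print("upgrade to bureaucrat:", openid)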

But all that aside, take a look and have a think. This kind of approach has a lot going for it. We will have all kinds of lists of people, groups of people, and in many cases we’ll know their OpenIDs. So why not pool what we know? If a blog or wiki has information about an OpenID that shows it is somehow trustworthy, or at least not obviously a spammer, there’s every reason to make notations (eg. FOAF/RDFa) that allow other such sites to harvest and integrate that data…
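
To give the flavour, here is one plausible shape for such notations, using FOAF’s foaf:Group, foaf:member and foaf:openid terms (a sketch; the OpenIDs are invented, and this is just one vocabulary choice among several):

# Sketch: publish a group of "not obviously spammers" identified by OpenID.
from rdflib import Graph, BNode, URIRef, Namespace
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
group = BNode()
g.add((group, RDF.type, FOAF.Group))

for openid in ["http://alice.example.org/", "http://bob.example.net/"]:
    person = BNode()
    g.add((group, FOAF.member, person))
    g.add((person, FOAF.openid, URIRef(openid)))

print(g.serialize(format="turtle"))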

See also Dan Connolly’s DIG blog post on this, and the current list of Bureaucrats on the FOAF Wiki (and associated documentation). If your name’s on the list, it just means your OpenID was on a pre-approved list of folk I trust based on their interactions with my own blog. I’d love to add more sources here and make it genuinely communal.

This is all part of the process of getting FOAF moving again. The brains of FOAF are in the IssueTracker page, and since the site was damaged by spammers and hackers recently I’m trying to make sure we have a happy / wholesome environment for maintaining shared documents. And that’s more than I can do as a solo admin, hence this design for opening things up…

OpenID – a clash of expectations?

Via Dan Connolly, this from the mod_auth_openid FAQ:

Q: Is it possible to limit login to some users, like htaccess/htpasswd does?

A: No. It is possible to limit authentication to certain identity providers (by using AuthOpenIDDistrusted and AuthOpenIDTrusted, see the main page for more info). If you want to restrict to specific users that span multiple identity providers, then OpenID probably isn’t the authentication method you want. Note that you can always do whatever vetting you want using the REMOTE_USER CGI environment variable after a user authenticates.

Funny, this is just what I thought was most interesting about OpenID: it lets you build sites that offer varying experiences (including letting them in or not) to different users, based on what you know about them. OpenID itself doesn’t do everything out of the box, but by associating public URIs with people, it’s a very useful step.
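
For instance, the REMOTE_USER vetting the FAQ mentions needn’t be more than a lookup. A sketch as a Python CGI script (the allowlist file name is my invention):

#!/usr/bin/env python3
# Sketch: mod_auth_openid has authenticated *someone*; here we decide
# whether this particular OpenID gets the members-only experience.
import os

def allowed(openid, path="approved_openids.txt"):  # hypothetical allowlist
    with open(path) as f:
        return openid in {line.strip() for line in f}

openid = os.environ.get("REMOTE_USER", "")

print("Content-Type: text/plain\n")
if allowed(openid):
    print("Welcome,", openid)
else:
    print("Authenticated, but not on the list:", openid)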

A year ago I sketched a scenario in this vein (and it seems to have survived a sanity check from Simon Willison, or at least he quotes it). It seems perhaps that OpenID is all things to all people…?

Miracle of binary

The ultimate absurdity is now staring us in the face: a universal library of two volumes, one containing a single dot and the other a dash. Persistent repetition and alternation of the two is sufficient, we well know, for spelling out any and every truth. The miracle of the finite but universal library is a mere inflation of the miracle of binary notation: everything worth saying, and everything else as well, can be said with two characters. It is a letdown befitting the Wizard of Oz, but it has been a boon to computers.

— Willard van Orman Quine on the Universal Library

(via Borges’ Library of Babel indirectly found via Dan Connolly’s RDFization of the animals quote)

This somehow reminded me of a couple of other links I found earlier on Turing Machines built in Conway’s Game of Life: one from Paul Rendell, another from Paul Chapman. These machines also have a kind of strange beauty.

Who, what, where, when?

A “Who? what? where? when?” of the Semantic Web is taking shape nicely.

Danny Ayers shows some work with FOAF and the hCard microformat, picking up a theme first explored by Dan Connolly back in 2000: inter-conversion between RDF and HTML person descriptions. Danny generates hCards from SPARQL queries of FOAF, an approach which would pair nicely with GRDDL for going in the other direction.
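
Roughly the flavour of it, sketched with Python’s rdflib (not Danny’s actual code; the FOAF file URL and the minimal hCard markup are illustrative):

# Sketch: SELECT people from a FOAF file, print each as a minimal hCard.
from rdflib import Graph

g = Graph()
g.parse("http://example.org/foaf.rdf", format="xml")

q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?mbox WHERE {
  ?person a foaf:Person ; foaf:name ?name .
  OPTIONAL { ?person foaf:mbox ?mbox }
}
"""

for name, mbox in g.query(q):
    print('<div class="vcard">')
    print('  <span class="fn">%s</span>' % name)
    if mbox:
        print('  <a class="email" href="%s">email</a>' % mbox)
    print('</div>')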

Meanwhile at W3C, the closing days of the SW Best Practices Group have recently produced a draft of an RDF/OWL Representation of Wordnet. Wordnet is a fantastic resource, containing descriptions of pretty much every word in the English language. Anyone who has spent time in committees, deciding which terms to include in some schema/vocabulary, must surely feel the appeal of a schema which simply contains all possible words. There are essentially two approaches to putting Wordnet into the Semantic Web. A lexically-oriented approach, such as the one published at W3C for Wordnet 2.0, presents a description of words. It mirrors the structure of Wordnet itself (verbs, nouns, synsets etc.). Consequently it can be a complete and unjudgemental reflection into RDF of all the work done by the Wordnet team.

The alternate, and complementary, approach is to explore ways of projecting the contents of Wordnet into an ontology, so that category terms (from the noun hierarchy) in Wordnet become classes in RDF. I made a simplistic attempt at this some time back (see overview). It has appeal (alongside the linguistic version) because it allows RDF to be used to describe instances of classes for each concept in Wordnet, with other properties of those instances. See WhyWordnetIsCool in the FOAF wiki for an example of Wordnet’s coverage of everyday objects.
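
To make the difference concrete, the class-based projection lets you type everyday things directly (a sketch; the namespace below is a stand-in, since the real class URIs depend on which Wordnet mapping you use):

# Sketch: using a Wordnet noun concept as an RDF class. The WN namespace
# is a stand-in, not a real published Wordnet mapping.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS

WN = Namespace("http://example.org/wordnet/")

g = Graph()
fido = URIRef("http://example.org/fido")
g.add((fido, RDF.type, WN.Dog))
g.add((fido, RDFS.label, Literal("Fido")))

print(g.serialize(format="turtle"))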

So, getting Wordnet moving into the SW is a great step. It gives us URIs to identify a huge number of everyday concepts. Its coverage isn’t complete, and its ontology is kinda quirky. Aldo Gangemi and others have worked on tidying up the hierarchy, though I believe only for version 1.6 of Wordnet so far. I hope that work will eventually get published at W3C or elsewhere as stable URIs we can all use.

In addition to Wordnet there are various other efforts that give us types for the “what” of “who/what/where/when”. I’ve been talking with Rob McCool about re-publishing a version of the old TAP knowledge base. The TAP project is now closed, with Rob working for Yahoo and Guha at Google. Stanford maintain the site but aren’t working on it. So I’ve been working on a quick cleanup (well-formed RDF/XML etc.) of TAP that could get it into more mainstream use. TAP, unlike Wordnet, has more modern everyday commercial concepts (have a look), as well as a lot of specific named instances of these classes.

Which brings me to (Semantic) Wikipedia; another approach to identifying things and their types on the Semantic Web. A while back we added isPrimaryTopicOf to FOAF, to make it easier to piggyback on Wikipedia for RDF-identifying things that have Wiki (and other) pages about them. The Semantic Mediawiki project goes much much further in this direction, providing a rich mapping (classes etc.) into RDF for much of Wikipedia’s more data-oriented content. Very exciting, especially if it gets into the main codebase.
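
The isPrimaryTopicOf idiom itself is pleasantly simple. A sketch (the thing’s URI is invented; the Wikipedia page is real):

# Sketch: connect a thing to the Wikipedia page whose primary topic it is.
from rdflib import Graph, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
thing = URIRef("http://example.org/semweb")
g.add((thing, FOAF.isPrimaryTopicOf,
       URIRef("http://en.wikipedia.org/wiki/Semantic_Web")))

print(g.serialize(format="turtle"))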

So I think the combination of things like Wordnet, TAP, Wikipedia, and instance-identifying strategies such as “isPrimaryTopicOf”, will give us a solid base for identifying what the things are that we’re describing in the Semantic Web.

And regarding “where?” and “when?”… on the UI front, we saw a couple of announcements recently: OpenLayers v1.0, which provides Google-Maps-like UI functionality, but open source and standards-friendly. And for ‘when’, a similar offering: the timeline widget. This should allow fancy UIs to be wired up with RDF calendar or RDF-geo-tagged data.

Talking of which… good news of the week: W3C has just announced a Geo incubator group (see detailed charter), whose mission includes updates for the basic Geo (ie. lat/long etc) vocabulary we created in the SW Interest Group.
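
For reference, the basic Geo vocabulary in action; data in this shape is what those map and timeline widgets could consume (a sketch; the coordinates are arbitrary):

# Sketch: a point described with the W3C Basic Geo (WGS84) vocabulary.
from rdflib import Graph, Namespace, BNode, Literal

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

g = Graph()
point = BNode()
g.add((point, GEO.lat, Literal("51.4545")))
g.add((point, GEO.long, Literal("-2.5879")))

print(g.serialize(format="turtle"))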

Ok, I’ve gone on at enough length already, so I’ll talk about SKOS another time. In brief – it fits in here in a few places. When extended with more lexical stuff (for describing terms, eg. multi-lingual thesauri) it could be used as a base for representing the lexically-oriented version of Wordnet. And it also fits in nicely with Wikipedia, I believe.

Last thing, don’t get me wrong — I’m not claiming these vocabs and datasets and bits of UI are “the” way to do ‘who/what/where/when’ on the Semantic Web. They’re each one of several options. But it is great to have a whole pile of viable options at last :)

Dublin Core “RDF idiom conversion rules” in SPARQL

This note is both an early-bird (and partial) technical review of the new SPARQL RDF query spec and a proposal to the DC-Architecture Working Group for a possible representation language that captures some rules explaining how different forms of DC relate to each other. As a review, I have to say that I now find CONSTRUCT more useful than I originally expected. Maybe a full W3C RDF Rules language would be a useful thing to do after all… :)

The Dublin Core metadata vocabulary is used sometimes as a set of relationships between documents (etc.) and other resources (entities, things); and sometimes as a set of relationships to textual strings associated with those things. The DCMI Abstract Model document gives some more detail on this, and proposes a model to explain DC thinking on this issue. Appendix B of that document describes the mapping of this abstraction into RDF.

One option (todo: find url of Andy’s doc listing options) being explored by the DC-Architecture WG is for Dublin Core to publish explicit guidelines explaining how the two representational idioms can be interconverted. What I try to do below is capture DC community consensus about how the two styles of using properties from the http://purl.org/dc/elements/1.1/ vocabulary compare.

The rules in prose:

Rule 1: Whenever we see a DC property from http://purl.org/dc/elements/1.1/ applied to some resource ‘x’, and the value of that property is something that is not a literal (ie. a string or markup), then any values of the rdfs:label property which apply to that value are also values of our DC property. eg. if a document has a dc:creator whose value is a thing that has an rdfs:label of “Andy Powell”, then it is also true that the document has a dc:creator property whose value is the literal string “Andy Powell”.

Rule 2: As above, but reversed. Whenever some string that is an rdfs:Literal is the value of a DC property for some resource ‘x’, then it will also be true that there exists some resource that is not an rdfs:Literal, that has an rdfs:label whose value is that string, and that ‘x’ stands in the same DC property relationship to that resource.

Problem statement: this sort of thing is (see above) hard to express in prose. We’d like a clean, machine-readable way of expressing these rules, so that test cases can be formulated, mailing list discussions can be made more precise, and (perhaps) so that software tools can directly exploit these rules when working with RDF.

The SPARQL query language is a W3C work-in-progress for querying RDF data. It mainly provides mechanisms for specifying bits of RDF to match in some query, but also provides a basic mechanism for CONSTRUCTing further RDF statements by combining matched data with a template structure. This is a preliminary investigation into the possibility of using SPARQL’s CONSTRUCT mechanism to express the rules being discussed in the DC community. Readers should note that SPARQL and the DC “idiom conversion rules” are both works in progress, and that I might not have a full grasp of either.

Anyway, here goes:

Rule 1 in SPARQL: generate simple form from explicit form

PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
# x:NonLiteralResource is a placeholder class for "any non-literal resource"
CONSTRUCT   ( ?x ?relation ?string )
WHERE       ( ?x ?relation ?thing )
            ( ?thing rdf:type x:NonLiteralResource )
            ( ?thing rdfs:label ?string )
            ( ?relation rdfs:isDefinedBy <http://purl.org/dc/elements/1.1/> )

Rule 2 in SPARQL: explicit form from string form (harder)

PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
CONSTRUCT   ( ?x ?relation ?thing )
            ( ?thing rdfs:label ?string )
WHERE       ( ?x ?relation ?string )
            ( ?string rdf:type rdfs:Literal )
            ( ?relation rdfs:isDefinedBy <http://purl.org/dc/elements/1.1/> )
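
For anyone reading along with a current SPARQL implementation: the parenthesised triple-pattern syntax above is from the draft, and the syntax SPARQL eventually settled on uses braces. Rule 1 can be run today, for example with Python’s rdflib (a sketch; FILTER(!isLiteral(?thing)) stands in for the x:NonLiteralResource placeholder, and data.rdf is a stand-in input file):

# Sketch: Rule 1 in final SPARQL syntax, executed with rdflib.
from rdflib import Graph

g = Graph()
g.parse("data.rdf", format="xml")  # stand-in input file

rule1 = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
CONSTRUCT { ?x ?relation ?string }
WHERE {
  ?x ?relation ?thing .
  ?thing rdfs:label ?string .
  ?relation rdfs:isDefinedBy <http://purl.org/dc/elements/1.1/> .
  FILTER (!isLiteral(?thing))
}
"""

# Merge the inferred 'simple form' triples back into the graph.
for triple in g.query(rule1).graph:
    g.add(triple)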

See also: comments in SPARQL draft on “what if construct graph includes unbound variables”…

followup: Dan Connolly and others have noted that I should be careful not to advocate the use of SPARQL as a Rule language. I lost his specific comments due to problems with WordPress, but agree entirely. What I was trying to do here was see how it might look if one did try to express rules using these queries, to better understand how the two technologies relate. Please don’t consider SPARQL a rule language (but please do share your experiences with having RDF Rule, Query and OWL languages work together…).