planetplanet foafrolls

The PlanetPlanet feed reader (and its Venus variant) exposes its blogroll via RDF/FOAF, typically at “/foafroll.xml” URIs. I ran through the list of Planet installations from the main site and found the following, which might be interesting for experimentation, crawling, whitelist work and so on (a query sketch for digging into these files follows the list). Or you could just make a giant feedlist and install Venus yourself, composing your own meta-selection from the feeds described in these files.


http://widgetarians.org/foafroll.xml
http://www.planetapache.org/foafroll.xml
http://www.beclan.org/aggregator/foafroll.xml
http://planet.classpath.org/foafroll.xml
http://www.debian.org.hk/planet/foafroll.xml
http://planet.hellug.gr/foafroll.xml
http://planet.freedesktop.org/foafroll.xml
http://planet.humbug.org.au/foafroll.xml
http://planet.gnome-ev.de/foafroll.xml
http://gstreamer.freedesktop.org/planet/foafroll.xml
http://planet.jabber.org/foafroll.xml
http://planet.mozilla.org/foafroll.xml
http://planet.foss.org.my/foafroll.xml
http://planet.go-oo.org/foafroll.xml
http://planet.perl.org/foafroll.xml
http://www.planetpython.org/foafroll.xml
http://planet.slug.org.au/foafroll.xml
http://planetsun.org/foafroll.xml
http://www.planetsuse.org/foafroll.xml
http://planet.twistedmatrix.com/foafroll.xml
http://advocacydev.org/blogs/foafroll.xml
http://planet.arslinux.com/foafroll.xml
http://fossplanet.com/foafroll.xml
http://indyblogs.protest.net/foafroll.xml
http://www.cs.princeton.edu/~mp/malayalam/blogs/foafroll.xml
http://planet.mozillazine.org/foafroll.xml
http://planetjava.org/foafroll.xml # bad xml
http://planetkde.org/foafroll.xml
http://www.planet-im.com/foafroll.xml # no feed urls
http://planet.linux.net.mk/foafroll.xml
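
As a taster, here's roughly the sort of SPARQL query you could point at any of these files to pull out a feed list. It's only a sketch, assuming the usual Planet foafroll structure (member agents whose foaf:weblog documents carry rdfs:seeAlso links to their feeds); check it against the actual markup before relying on it.

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?name ?blog ?feed
WHERE {
  # each aggregated member is an agent with a weblog...
  ?agent foaf:name ?name ;
         foaf:weblog ?blog .
  # ...and the weblog document points at its feed via rdfs:seeAlso
  ?blog rdfs:seeAlso ?feed .
}
ORDER BY ?name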

FOAF diagram (day 2)


FOAF diagram (day 2), originally uploaded by danbri

Another revision, after feedback from Ivan.

The original had “Thing” in italics (a convention I tried before adding in the doap:, dc: and sioc: references), to indicate it was from another namespace. I’ve now made that heritage explicit; although I suspect it might confuse some readers, the idea is central enough to be worth explaining.

Originally I had both “tipjar” and “mbox” drawn as if they were literal-valued properties, when both are relational. The newer layout allows tipjar to be drawn without crossovers, so now only “mbox” is an oddity. I’ve used italics as a hint of that, but without explicit explanation.

A lot of the redundancy in the diagram comes from the property inverses, but I want to leave them in since they’re important to explain. The expanded key in the bottom-left now has a nerdy little explanation of “inverse property”. I also use the word “subClass” explicitly on the fat arrow, to tap into an OO heritage that might be floating around in people’s minds. In other words, I want people to realise that “a Person is an Agent is a Spatial thing is a thing” is what we’re getting at with those fat little arrows.

On Ivan’s suggestion I trimmed the property lists a little, removing the non-Jabber IM properties and first/last name. The space freed by the latter was immediately spent on adding bio:olb, since it’s useful and I have the impression it’s widely used.

I think that’s about it for changes. Ivan suggested removing the DOAP and SIOC parts, but to my mind they are important because those vocabularies (and projects) elaborate on aspects of FOAF which are important but otherwise neglected: the description of Projects, and of Users.

OK, one more version. This has coloured blobs for the core FOAF classes. I think I made a mistake choosing light blue, since that is nearly the colour code I’ve assigned to mean “inverse functional property”. Perhaps a light yellow?

Anyway here it is for now. Guess I should stop using Flickr for this really!

with colours

Update: I’ve put the files in svn (the original OmniGraffle XML is the .graffle file; there’s also some SVG output, though I’m unsure of its quality), and made another slight revision, this time focussing on the layout of the arrow ends for better readability; previously they were cluttered and the property directions were hard to read.


widgetarians.org

I’ve just made Widgetarians.org, a Planet-based aggregator of Widgety stuff.

The contents are somewhat ad hoc, and will evolve in unpredictable ways as people publish and fix feeds, and as I hear about new sites. For example, at the time of writing I couldn’t find any working public feeds for the great things the Opera folks are doing with widgets. The exact focus of the site might evolve, but the audience I have in mind are developers who are building widgets (and “gadgets”, “applications” etc.) using standard Web technologies. So I’ll gather feeds from blogs, galleries and developer discussion lists that might interest such people. The site also aggregates anything tagged “widgets” on delicious, so that’s a quick way to share things into this site.

Since I’ve been working at Joost this year, I ought to take care to be clear that widgetarians.org is done in my own time, using my own resources, and suchlike. Of course I’d be happy if people checked out the evolving Joost widgets platform, but this particular site is not Joost-specific. The more developers building widgets (of all kinds) using standard Web technology, the better! I’m also not going to make a big fuss about what exactly counts as a “widget”; it’s a pretty intuitive, family-resemblance concept. In many ways the old Java applet dream of the Web, from 1995 or so, and modular computing architectures like OpenDoc were in the same space. But the focus I have here is on desktop and Web page add-ons that bundle up javascript, css, xml markup etc., and on the use of emerging standards for interop and skill-sharing around such platforms.

ps. the logo features venezuelan ants… in orbit around planet sugar :)

Querying Facebook in SPARQL

A fair few people have been asking about FOAF exporters from Facebook. I’m not entirely sure what else is out there, but Matthew Rowe has just announced a Facebook FOAF generator. It doesn’t dump all 35 million records into your Web browser, thankfully. But it will export a minimal description of you and your Facebook associates. At the moment, you get name, a photo URL, and (in this revision of the tool) a Facebook account name using FOAF’s OnlineAccount construct.

As an aside, this part of the FOAF design provides a way for identifiers from arbitrary services to be described in FOAF without special-purpose support. Some services have shortcut property names, eg. msnChatID, and we may add more, but it is also important to allow this kind of freeform, decentralised identification. People shouldn’t have to petition the FOAF spec editors before any given social network site’s IDs can be supported; they can always use their own vocabulary alongside FOAF, or use the OnlineAccount construct as shown here.

I’ve saved my Facebook export on my Web site, working on the assumption that Facebook IDs are not private data. If people think otherwise, let me know and I’ll change the setup. We might also discuss whether even sharing the names and connectivity graph will upset people’s privacy expectations, but that’s for another day. Let me know if you’re annoyed!

Here is a quick SPARQL query, which simply asks for details of each person mentioned in the file who has an account on Facebook.


PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?name ?pic ?id
WHERE {
  # anyone in the data who is a Person...
  [ a :Person ;
    :name ?name ;
    :depiction ?pic ;
    # ...and who holds an account whose service homepage is Facebook
    :holdsAccount [ :accountServiceHomepage <http://www.facebook.com/> ;
                    :accountName ?id ] ]
}
ORDER BY ?name

I tested this online using Dave Beckett’s Rasqal-based Web service. It should return a big list of the first 200 people matched by the query, ordered alphabetically by name.

For “Web 2.0” fans, SPARQL’s result sets are essentially tabular (just like SQL), and have encodings in both simple XML and JSON. So whatever you might have heard about RDF’s syntactic complexity, you can forget it when dealing with a SPARQL engine.

Here’s a fragment of the JSON results from the above query:


{
  "name" : { "type": "literal", "value": "Dan Brickley" },
  "pic"  : { "type": "uri", "value": "http://danbri.org/yasns/facebook/danbri-fb.rdf" },
  "id"   : { "type": "literal", "value": "624168" }
},
{
  "name" : { "type": "literal", "value": "Dan Brickley" },
  "pic"  : { "type": "uri", "value": "http://profile.ak.facebook.com/profile5/575/66/s501730978_7421.jpg" },
  "id"   : { "type": "literal", "value": "501730978" }
}, ...

What’s going on here? (a) Why are there two of me? (b) And why does it think that one of us has my Facebook FOAF file’s URL as a mugshot picture?

There’s no big mystery here. Firstly, there’s another guy who has the cheek to be called Dan Brickley. We’re friends on Facebook, even though we should probably be mortal enemies or something. Secondly, why does it give him the wrong URL for his photo? This is also straightforward, if a little technical. Basically, it’s an easily-fixed bug in this version of the FOAF exporter I used. When an image URL is not available, the converter still generates markup like “<foaf:depiction rdf:resource=""/>”. This empty URL is treated in RDF as the extreme case of a relative link, ie. the same kind of thing as writing “../../images/me.jpg” in a normal Web page. And since RDF is all about de-contextualising information, your RDF parser will try to resolve the relative link before passing the data on to storage or query systems (fiddly details are available to those that care). If the foaf:depiction property were simply omitted when no photo was present, this problem wouldn’t arise. We’d then have to make the query a little more flexible, so that it still matches people even when there is no depiction, but that’s easy; a sketch follows.
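
Here’s roughly what the flexible version looks like; a sketch only, untested against the service above. The one change from the earlier query is that the depiction pattern moves into an OPTIONAL block, so people match whether or not they have a photo.

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?name ?pic ?id
WHERE {
  ?person a :Person ;
          :name ?name ;
          :holdsAccount [ :accountServiceHomepage <http://www.facebook.com/> ;
                          :accountName ?id ] .
  # the photo is now optional; ?pic is simply left unbound when absent
  OPTIONAL { ?person :depiction ?pic }
}
ORDER BY ?name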

I mentioned a couple of days ago that SPARQL is a query language with built-in support for asking questions about data provenance, ie. we can mix in “according to Facebook”, “according to Jabber” right into the WHERE clause of queries such as the one I show here. I’m not going to get into that today, but I will close with a visual observation about why that is important.
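
Just to give the rough shape of it, a hedged sketch: SPARQL’s GRAPH keyword lets the WHERE clause bind the source of each claim, assuming the various exports have been loaded into a store as named graphs (the graph URIs would be whatever the store was told when loading, eg. my Facebook export’s URL).

PREFIX : <http://xmlns.com/foaf/0.1/>
SELECT ?name ?src
WHERE {
  # ?src is the named graph, ie. the document each claim came from
  GRAPH ?src {
    [ a :Person ; :name ?name ]
  }
}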

yasn map, borrowed from data junk, valleywag blog
To state the obvious, there’ll always be multiple Web sites where people hang out and socialise. A friend sent me this link the other day; a world map of social networks (thumbnail version copied here). I can’t vouch for the science behind it, but it makes the point that we risk fragmenting Web communities on geographic boundaries if we don’t bridge the various IM and YASN networks. There are lots of ways this can be done, each with different implications for user experience, business model, cost and practicality. But it has to happen. And when it does, we’ll be wanting ways of asking questions against aggregations from across these sites…

“The World is now closed”

Facebook in many ways is pretty open for a ‘social networking’ site. It gives extension apps a good amount of access to both data and UI. But the closed world language employed in their UI betrays the immodest assumption “Facebook knows all”.

  • Eric Childress and Stuart Weibel are now friends with Charles McCathienevile.
  • John Doe is now in a relationship.
  • You have 210 friends.

To state the obvious: maybe Eric, Stu and Chaals were already friends. Maybe Facebook was the last to know about John’s relationship; maybe friendship isn’t countable. As the walls between social networking sites slowly melt (I put Jabber/XMPP first here, with OpenID, FOAF, SPARQL and XFN as helper apps), me and my 210 closest friends will share fragments of our lives with a wide variety of sites. If we choose to make those descriptions linkable, the linked sites will increasingly need to refine their UI text to be a little more modest: even the biggest site doesn’t get the full story.

Closed World Assumption (Abort/Retry/Fail)
Facebook are far from alone in this (see this Xbox screenshot too: “You do not have any friends!”); but even with 35M users, the mistake is jarring, and not just to Semantic Web geeks of the “missing isn’t broken” school. It’s simply a mistake to fail to distinguish the world from its description, or the territory from the map.

A description of me and my friends hosted by a big Web site isn’t “my social network”. Those sites are just databases containing claims made by different people, some verified, some not, and with, inevitably, lots missing. My “social network” is an abstractification of a set of interlinked real-world histories. You could make the case that there has only ever been one “social network” since the distant beginnings of human society; certainly those who try to do genealogy with Web data formats run into this in a weaker form, including the need to balance competing and partial information. We can do better than categorised “buddylists” when describing people, their inter-connections and relationships. And in many ways Facebook is doing just great here. Aside from the Pirates-vs-Ninjas noise, many extension applications on Facebook allow arbitrary events from elsewhere in the Web to bubble up through their service and be seen (or filtered) by others who are linked to me in their database.

Facebook is good at reporting events, generally, especially those sourced outside the system. Where it isn’t so great is in reporting internal events, eg. someone telling it about a relationship. Event descriptions are nice things to syndicate, btw, since they never go out of date. Syndicating descriptions of the changeable properties of the world, on the other hand, is more slippery, since you need all the other relevant facts to be able to say how the world is right now (or implicitly, how it used to be, before). “Dan has painted his car red” versus “Dan’s car is now red”. “Dan has bookmarked the Jabber user profile spec” versus “Dan now has 1621 bookmarks”. “Dan has added Charles to his Facebook profile” versus “Dan is now friends with Charles”.

We need better UI that reflects what’s really going on. There will be users who choose to live much of their lives in public view, spread across sites, sharing enough information for these accounts to be linked. Hopefully they’ll be as privacy-smart and selective as Pew suggests. Personas and ‘characters’ can be spread across sites without either site necessarily revealing a real-world identity; secrets are keepable, at least in theory. But we will see people’s behaviour and claims from one site leak into another, and with approval. I don’t think this will be just through some giant “social graph” of strictly enumerated relationships, but through a haze of vaguer data.

What we’re most missing is a style of end-user UI here that educates users about this world that spans websites, couching things in terms of claims hosted in sites, rather than in absolutist terms. I suppose I probably don’t have 210 “friends” (whatever that means) in real life, although I know a lot of great people and am happy to be linked to them online. But I have 210 entries in a Facebook-hosted database. My email whitelist file has 8785 email addresses in it currently; email accounts that I’m prepared to assume aren’t sending me spam. I’m sure I can’t have 8785 friends. My Google Mail (and hence GTalk Jabber) account claims 682 contacts, and has some mysterious relationship to my Orkut account where I have 200+ (more randomly selected) friends. And now the OpenID roster on my blog gives another list (as of today, 19 OpenIDs that made it past the WordPress spam filter). Modern social websites shouldn’t try to tell me how many friends I have; that’s just silly. And they shouldn’t assume their database knows it all. What they can do is try to tell me things that are interesting to me, with some emphasis on things that touch my immediate world and the extended world of those I’m variously connected to.

So what am I getting at here? I guess it’s just that we need these big social sites to move away from making teen-talk claims about how the world is – “Sally (now) loves John” – and instead become reflectors for the things people are saying: “Sally announces that she’s in love with John”; “John says that he used to work for Microsoft” versus “John worked for Microsoft 2004-2006”; “Stanford University says Sally was awarded a PhD in 2008”. Today’s young internet users are growing up fast, and the Web around them needs also to mature.

One of the most puzzling criticisms you’ll often hear about the Semantic Web initiative is that it requires a single universal truth, a monolithic ontology to model all of human knowledge. Those of us in the SW community know that this isn’t so; we’ve been saying for a long time that our (meta)data architecture is designed to allow people to publish claims “in which statements can draw upon multiple vocabularies that are managed in a decentralised fashion by various communities of expertise.” As the SemWeb technology stack now has a much better approach to representing data provenance (SPARQL named graphs replacing RDF’99 statement reification), I believe we should now be putting more emphasis on a related theme: Semantic Web data can represent disputes, competing claims, and contradictions. And we can query it in an SQL-like language (SPARQL) that allows us to ask questions not just of some all-knowing database, but about what different databases are telling us.

The closed world approach to data gives us a lot, don’t get me wrong. I’m not the only one with a love-hate relationship with SQL. There are many optimisations we can do in a traditional SQL or XML Schema environment which become hard in an RDF context. In particular, going “open world” makes for a harder job when hosting and managing data rather than merely aggregating and integrating it. Nevertheless, if you’re looking for a modern Web data environment for aggregating claims of the “Stanford University says Sally was awarded a PhD in 1995” form, SPARQL has a lot to offer.

When we’re querying a single, all-knowing, all-trusted database, SQL will do the job (eg. Facebook’s FQL). When we need to take a bit more care with the “who said what” and “according to whom?” aspects, coupled with schema extensibility and frequently missing data, SQL starts to hurt. If we’re aggregating (and building UI for) ‘social web’ claims about the world rather than simple buddylists (which XMPP/Jabber gives us out of the box), I suspect aggregators will get burned unless they keep careful track of who said what, whether using SPARQL or some home-grown database system in the same spirit. And I think they’ll find that doing so will be peculiarly rewarding, giving us a foundation for applications that do substantially more than merely listing your buddies…

Who, what, where, when?

A “Who? what? where? when?” of the Semantic Web is taking shape nicely.

Danny Ayers shows some work with FOAF and the hCard microformat, picking up a theme first explored by Dan Connolly back in 2000: inter-conversion between RDF and HTML person descriptions. Danny generates hCards from SPARQL queries of FOAF, an approach which would pair nicely with GRDDL for going in the other direction.

Meanwhile at W3C, the closing days of the SW Best Practices Group have recently produced a draft of an RDF/OWL Representation of Wordnet. Wordnet is a fantastic resource, containing descriptions of pretty much every word in the English language. Anyone who has spent time in committees, deciding which terms to include in some schema/vocabulary, must surely feel the appeal of a schema which simply contains all possible words. There are essentially two approaches to putting Wordnet into the Semantic Web. A lexically-oriented approach, such as the one published at W3C for Wordnet 2.0, presents a description of words. It mirrors the structure of wordnet itself (verbs, nouns, synsets etc.). Consequently it can be a complete and unjudgemental reflection into RDF of all the work done by the Wordnet team.

The alternate, and complementary, approach is to explore ways of projecting the contents of Wordnet into an ontology, so that category terms (from the noun hierarchy) in Wordnet become classes in RDF. I made a simplistic attempt at this some time back (see overview). It has appeal (alongside the linguistic version) because it allows RDF to be used to describe instances of classes for each concept in Wordnet, with other properties of those instances. See WhyWordnetIsCool in the FOAF wiki for an example of Wordnet’s coverage of everyday objects.

So, getting Wordnet moving into the SW is a great step. It gives us URIs to identify a huge number of everyday concepts. Its coverage isn’t complete, and its ontology is kinda quirky. Aldo Gangemi and others have worked on tidying up the hierarchy; I believe only for version 1.6 of Wordnet so far. I hope that work will eventually get published at W3C or elsewhere as stable URIs we can all use.

In addition to Wordnet there are various other efforts that give types that can be used for the “what” of “who/what/where/when”. I’ve been talking with Rob McCool about re-publishing a version of the old TAP knowledge base. The TAP project is now closed, with Rob working for Yahoo and Guha at Google. Stanford maintain the site but aren’t working on it. So I’ve been working on a quick cleanup (well-formed RDF/XML etc.) of TAP that could get it into more mainstream use. TAP, unlike Wordnet, has more modern everyday commercial concepts (have a look), as well as a lot of specific named instances of these classes.

Which brings me to (Semantic) Wikipedia; another approach to identifying things and their types on the Semantic Web. A while back we added isPrimaryTopicOf to FOAF, to make it easier to piggyback on Wikipedia for RDF-identifying things that have Wiki (and other) pages about them. The Semantic Mediawiki project goes much much further in this direction, providing a rich mapping (classes etc.) into RDF for much of Wikipedia’s more data-oriented content. Very exciting, especially if it gets into the main codebase.

So I think the combination of things like Wordnet, TAP, Wikipedia, and instance-identifying strategies such as “isPrimaryTopicOf”, will give us a solid base for identifying what the things are that we’re describing in the Semantic Web.

And regarding “Where?” and “when?”… on the UI front, we saw a couple of announcements recently: OpenLayers v1.0, which provides Google-Maps-like UI functionality, but opensource and standards-friendly. And for “when”, a similar offering: the Timeline widget. These should allow fancy UIs to be wired up with RDF calendar or RDF-geo-tagged data.

Talking of which… good news of the week: W3C has just announced a Geo incubator group (see detailed charter), whose mission includes updates for the basic Geo (ie. lat/long etc) vocabulary we created in the SW Interest Group.

Ok, I’ve gone on at enough length already, so I’ll talk about SKOS another time. In brief – it fits in here in a few places. When extended with more lexical stuff (for describing terms, eg. multi-lingual thesauri) it could be used as a base for representing the lexically-oriented version of Wordnet. And it also fits in nicely with Wikipedia, I believe.

Last thing, don’t get me wrong — I’m not claiming these vocabs and datasets and bits of UI are “the” way to do ‘who/what/where/when’ on the Semantic Web. They’re each one of several options. But it is great to have a whole pile of viable options at last :)

FOAF vocabulary management details

I have an action from the Vocabulary Management task force of the W3C SW Best Practices Working Group to document the way the FOAF namespace currently works. Here goes.

The FOAF RDF vocabulary is named by the URI http://xmlns.com/foaf/0.1/ and has information available at that URI in both machine-friendly and human-friendly form. When that URI is dereferenced by HTTP, the default representation is currently HTML-based.

Aside: I say “based” because the markup, while basically being XHTML, also includes inline RDF/XML describing the FOAF vocabulary. The HTML TF of the SWBPD WG is looking at ways of making such documents validatable; I hope a future version will be valid in some way, as well as presentable in mainstream browsers. This might be via RDF/A in XHTML2, RDF/A back-ported to an earlier XHTML variant, a GRDDL-based transform, or perhaps a CDF document if they’re deployable in legacy browsers.

Backing up to the big picture: we want to make it easy for both people and machines to find out what they need. So the basic idea is that there’s an HTML document describing FOAF for humans, and a set of RDF statements describing FOAF for machines. The RDF statements are available in several ways: as embedded RDF/XML in the HTML page; as a separate index.rdf document (which should be LINK REL’d from the former); and as a content-negotiated representation of the main URI, available to clients that send an “Accept: application/rdf+xml” header.

In addition to this, each FOAF term URI (assuming the Apache .htaccess is up to date; this might not currently be true) is redirected by the Web server to the main FOAF namespace URI. For example, http://xmlns.com/foaf/0.1/Person. This should in future be done with a 303 HTTP redirection code, to be consistent with the recent W3C TAG decision on http-range-14 (apologies for the jargon and lack of links to explanation). Because FOAF term URIs don’t contain a # [edit: I wrote / previously; thanks mortenf], the redirection can’t point down into a sub-section of the HTML document. To make the HTML document more usable, we should probably put a better table of contents for terms nearer the top of the page, so someone can navigate quickly to the description of any given term.

One last point: each term defined in FOAF is accompanied not only by a label and comment (standard from RDFS); it also has a chunk of HTML markup, with cross-references to other terms. At the moment, this is not made available in any machine-readable form. There might be some scope here for common practice with SKOS and other vocabularies?
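
For the label-and-comment part, something like this sketch, run over the namespace’s RDF (eg. the index.rdf mentioned above), should pull out the machine-readable documentation for each term; the extra HTML chunks, as noted, aren’t yet available this way.

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label ?comment
WHERE {
  # every term in the schema that carries RDFS documentation
  ?term rdfs:label ?label ;
        rdfs:comment ?comment .
}
ORDER BY ?term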

Sorry for the hasty writeup, I just wanted to get this recorded as a starting point…

Profiling GML for RSS/Atom, RDF and Web developers

I spent some time yesterday talking with Ron Lake about GML, RDF, RSS and other acronyms. GML was originally an RDF application, and various RDFisms can still be seen in the design. I learned a fair bit about GML, and about its extensibility and profiling mechanisms.

We discussed some possibilities for sharing data between GML, RSS/Atom and RDF environments. In particular, two options: RDF inside GML; and RDFized GML.

The possibility of embedding islands of RDF inside GML (eg. the GML for a restaurant might use RDF for restaurant-review or menu markup) is interesting, as it would allow GML documents to use any RDF vocabulary to describe the features on a map. Currently, such extension data typically requires the creation of a custom XML Schema. The other option, “RDFized GML”, is to explore the creation of an RDF vocabulary that allows some useful subset of GML data to be used in RDF. I’ll come back to this in a minute.

While GML comes from the world of professional GIS, its influence is being felt more widely: Google Earth (formerly Keyhole) uses something called KML, which bears a great many similarities to GML. Meanwhile in the RDF and RSS/Atom world, the very basic addition of “geo:lat” and “geo:long” tagging (sometimes using the W3C SemWeb IG WGS_84 namespace) has got a number of toolmakers interested. This year has seen the release of Yahoo! Maps, Google Maps, Google Earth and most recently Microsoft Virtual Earth. We’ve also seen the release of the excellent Mapping Hacks book, and increasing interest in this area from Web developers.

Although the experimental SWIG RDF vocabulary only deals with points described in WGS_84, there have been various discussions on possible extensions (eg. RDFGeom-2d from Chris Goad). These are intriguing, but we should be careful to avoid re-inventing wheels. Basically, I think we have all the ingredients for a hybrid approach: an RDFized GML subset designed for use by Web developers alongside RSS/Atom, FOAF and other public-facing XML formats. GML serves well as a data format in the GIS community, but some work is needed to find a subset that will find adoption in the wider Web.
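
To make that concrete, the point-shaped end of this already works today. Here’s a sketch of a query mixing the SWIG geo vocabulary with FOAF; the vocabulary URIs are real, but the data it would run over is hypothetical. An RDFized GML profile would extend the same pattern to LineStrings, Polygons and friends.

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?name ?lat ?long
WHERE {
  # anything with a name and a WGS_84 point location
  ?thing foaf:name ?name ;
         geo:lat ?lat ;
         geo:long ?long .
}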

The tiny W3C SWIG vocab, and related geo:lat/long tagging of “geo”-RSS feeds has shown that there is real interest in a lightweight XML-based mechanism for sharing map-related markup. GML shows us (via a 600 page specification, for GML 3.1) quite how rich and complex a problem space we’re facing, and KML demonstrates that a medium-sized “GML lite” subset can get traction with webmasters and developers, when backed by useful tools and services.

There are two pieces of work to do here (setting aside for now the topic of RDF islands within GML documents). First, let’s find a strawman profile of GML. From my limited knowledge and discussion with others, something “GML 2-ish” but profiled against GML 3.1 is the area to explore. Then we try getting those data structures into RDF, so the data can mix freely with other information.

I understand from Ron Lake that profiling is something that is actively encouraged for GML, and there are even tools to support it that come with the spec: have a look at subsetutility.zip. These files (thanks Ron!) show a pretty easy path for experimentation with profiles. In addition to the schema subsetting utilities, the .zip also includes (just as an example to help me understand GML) an example application schema CommonObjects.xsd, showing how to define things like ‘Building’, ‘River’, and a sample instance .xml file that uses it.

To use the profiling tool, just put the unzipped files directly in the base/ directory of .xsd files that ships with GML 3.1, then run an XSLT processor to generate a GML subset.

xsltproc depends.xsl gml.xsd > _gml.dep
xsltproc GML3.1.1Subset.xsl _gml.dep > _gmlSubset.xsd

…and that’s your profile. The scripts take care of all the dependencies (ie. they’ll read the 29 XML Schemas, so you don’t have to :)

The bits of GML you want are specified as parameters in GML3.1.1Subset.xsl. The default in this .zip is: gml:Point, gml:LineString, gml:Polygon, gml:LinearRing, gml:Observation, gml:TimeInstant, gml:TimePeriod

I’m no GML expert, but if someone can help get some instance data matching such a profile, I’ll have a go at RDFizing it. Also, of course, it will be useful to debate how many facilities from full GML would find use in the Webmaster (RSS, KML etc) scene.

Disclaimer: for now this is purely an informal collaboration. If we make something interesting, it might be worth investigation of something more formal between W3C (home of RDF, and where I work) and OGC (home of GML). For now, let’s just try out some ideas…