Learning WebGL on your iPhone: Radial Blur in GLSL

A misleading title perhaps, since WebGL isn’t generally available to iOS platform developers. Hacks aside, if you’re learning WebGL and have an iPhone it is still a very educational environment. WebGL essentially wraps OpenGL ES in a modern Web browser environment. You can feed data in and out as textures associated with browser canvas areas, manipulating data objects either per-vertex or per-pixel by writing ‘vertex’ and ‘fragment’ shaders in the GLSL language. Although there are fantastic tools out there like Three.js to hide some of these details, sooner or later you’ll encounter GLSL. The iPhone, thanks to tools like GLSL Studio and Paragraf, is a great environment for playing with GLSL. And playing is a great way of learning.

Radial Blur GLSL

GLSL fragment shaders are all about thinking about visuals “per-pixel”. You can get a quick feel for what’s possible by exploring the GLSL Sandbox site. The sandbox lets you live-edit GLSL shaders, which are then applied to a display area with trivial geometry – the viewing area is just two big triangles. See Iñigo Quilez’s livecoding videos or ‘rendering worlds with two triangles‘ for more inspiration.

All of which is still rocket science to me, but I was surprised at how accessible some of these ideas and effects can be. Back to the iPhone: using Paragraf, you can write GLSL fragment shaders, whose inputs include multi-touch events and textures from device cameras and photo galleries. This is more than enough to learn the basics of GLSL, even with realtime streaming video. Meanwhile, back in your Web browser, the new WebRTC video standards work is making such streams accessible to WebGL.

Here is a quick example based on Thibaut Despoulain‘s recent three.js-based tutorials showing techniques for compositing, animation and glow effects in WebGL. His Volumetric Light Approximation post provides a fragment shader for computing radial blur; see his live demo for a control panel showing all the parameters that can be tweaked. Thanks to Paragraf, we can also adapt that shader to run on a phone, blurring the camera input around the location of the last on-screen touch (‘t1’). Here is the original, embedded within a .js library. And here is a cut-down version adapted to use the pre-declared structures from Paragraf (or see the gist for a cleaner copy):

vec3 draw() {
  // Paragraf conventions: p is the current pixel position, t1 the last
  // touch position, and cam() samples the camera texture.
  vec2 vUv = p;
  float fX = t1.x, fY = t1.y, illuminationDecay = 1.0,
        fExposure = 0.2, fDecay = 0.93,
        fDensity = 0.3, fWeight = 0.4, fClamp = 1.0;
  const int iSamples = 8;
  // step vector along the line from the touch point out to this pixel
  vec2 delta = vec2(vUv - vec2(fX, fY)) / float(iSamples) * fDensity,
       coord = vUv;
  vec4 FragColor = vec4(0.0);
  // march back towards the touch point, accumulating progressively
  // dimmer samples of the camera image
  for (int i = 0; i < iSamples; i++) {
    coord -= delta;
    vec4 texel = vec4(cam(coord), 0.0);
    texel *= illuminationDecay * fWeight;
    FragColor += texel;
    illuminationDecay *= fDecay;
  }
  FragColor *= fExposure;
  FragColor = clamp(FragColor, 0.0, fClamp);
  return vec3(FragColor);
}

[image: cat photo from the phone camera]

[image: the same photo, radially blurred around the last touch point]

As I write this, I realise I’m blurring the lines between ‘radial blur’ and its application to create ‘god-rays’ in a richer setting. As I say, I’m not an expert here (I’m just posting a quick example and two hasty screenshots). My main purpose was rather to communicate that tools for learning more about such things are now quite literally in many people’s hands; and also that using GLSL for real-time per-pixel processing of smartphone camera input is a really fun way to dig deeper.

At this point I should emphasise that draw() and the other conventions here are from Paragraf; see any GLSL or WebGL docs, or the original example here, for details.

Everything Still Looks Like A Graph (but graphs look like maps)

Last October I posted a writeup of some experiments that illustrate item-to-item similarities from Apache Mahout using Gephi for visualization. This was under a heading that quotes Ben Fry, “Everything looks like a graph” (but almost nothing should ever be drawn as one). There was also some followup discussion on the Gephi project blog.

I’ve just seen a cluster of related Gephi experiments, which are reinforcing some of my prejudices from last year’s investigations:

These are all well worth a read, both for showing the potential and the limitations of Gephi. It’s not hard to find critiques of the intelligibility or utility of squiggly-but-inspiring network diagrams; Ben Fry’s point was well made. However I think each of the examples I link here (and my earlier experiments) show there is some potential in such layouts for showing ‘similarity neighbourhoods’ in a fairly appealing and intuitive form.

In the case of the history of philosophy it feels a little odd to use a network diagram, since the chronological / timeline aspect is quite important to the notion of a history. But still, it manages to group ‘like with like’, to the extent that the inter-node connections probably needn’t even be shown.

I’m a lot more comfortable with taking the ‘everything looks like a graph’ route if we’re essentially generating a similarity landscape. Whether these ‘landscapes’ can be made stable in the face of dataset changes or re-generation of the visualization is a longer story. Gephi is currently a desktop tool, and as such has memory issues with really large graphs, but I think it shows the potential for landscape-oriented graph visualization. Longer term I expect we’ll see more of a split between something like Hadoop+Mahout for big data crunching (e.g. see Mahout’s spectral clustering component, which takes node-to-node affinities as input) and something WebGL- and browser-based for the front-end UI. It’s a shame the Gephi efforts in this direction (GraphGL) seem to have gone quiet, but for those of you with modern graphics cards and browsers, take a look at alteredqualia’s ‘dynamic terrain’ WebGL demo to get a feel for how landscape-shaped datasets could be presented…

Also, btw, take a look at the griffsgraphs landscape of literature, built solely from ‘influences’ relationships in Wikipedia… then compare it with the landscapes I was generating last year from Harvard bibliographic data, which were built solely using subject classification data. Now imagine if we could mutate the resulting ‘map’ by choosing our own weighting composited across these two sources. Perhaps for the music or movies or TV areas of the map we might composite in other sources, based on activity data analysed by a recommendation engine, or just on different factual relationships.

There’s no single ‘correct’ view of the bibliographic landscape; what makes sense for a PhD researcher, a job seeker or a schoolkid will naturally vary. This is true of similarity measures in general, i.e. for see-also lists in plain HTML as well as for fancy graph or landscape-based visualizations. There are more than metaphorical comparisons to be drawn with the kind of compositing tools we see in systems like Blender, and plenty of opportunities for putting control into end-user rather than engineering hands.

In just the last year, Harvard (and most recently OCLC) have released their bibliographic datasets for public re-use, the Wikidata project has launched, and browser support for WebGL has been improving with every release. Despite all the reasonable concerns out there about visualizing graphs as graphs, there’s a lot to be said for treating graphs as maps…

MAMP / MySQL config notes for ‘Repair with keycache’ and table metadata lock

Problem: MySQL taking forever to load some large data dumps. Forever or longer.

“mysql> show processlist;” shows it wedged at “Repair with keycache” and “Waiting for table metadata lock”.

According to a handy Stack Overflow article, this is a known and dreaded condition, which can be addressed by making sure the tmp dir has plenty of space and by increasing myisam_max_sort_file_size from 2G (2146435072) to 30G (32212254720). Using MAMP 1.9.6 it took some more digging to find out how to add a local my.cnf settings file for MySQL. This now lives in /Applications/MAMP/conf/my.cnf; I added a line to the [mysqld] section saying ‘myisam_max_sort_file_size = 30G’ (or thereabouts). Shut down the MySQL server, create that my.cnf, restart, and then confirm it read your config using ‘show variables’.

Does this work? Well, I don’t know yet. But I’ve searched around before and found my own notes often enough that I thought I should at least write this much down for my future self to find :)

Update: it worked. A data import that took 2+ weeks (before I gave up) now runs in a few hours. After the bulk of the data was imported, we see ‘Repair by sorting’ in ‘show processlist’ for a while (couple of hours for 15 million records, in my case). This is, as promised, faster than ‘Repair with keycache’. I’ve done this on two machines now (with the same data); on one of them I did notice some ‘Waiting for table metadata lock’ processes in the list, but it still successfully completed overnight.

Building R’s RGL library for OSX Snow Leopard

RGL is needed for nice interactive 3D plots in R, but it’s a pain to work out how to build it on a modern OSX machine.

“The rgl package is a visualization device system for R, using OpenGL as the rendering backend. An rgl device at its core is a real-time 3D engine written in C++. It provides an interactive viewpoint navigation facility (mouse + wheel support) and an R programming interface.”

The following commands worked for me in OSX Snow Leopard:

  • svn checkout svn://svn.r-forge.r-project.org/svnroot/rgl
  • R CMD INSTALL ./rgl/pkg/rgl --configure-args="--disable-carbon"

Here’s a test that should give an interactive 3D display if all went well, using a built-in dataset:

library(rgl)
# centre the mtcars variables (dropping the first column, mpg)
cars.data <- as.matrix(sweep(mtcars[, -1], 2, colMeans(mtcars[, -1]))) # cargo cult'd
# principal-coordinate style projection via SVD of the Gram matrix
xx <- svd(cars.data %*% t(cars.data))
xxd <- xx$v %*% sqrt(diag(xx$d))
# the first three coordinates give a 3-D layout of the cars
x1 <- xxd[, 1]
y1 <- xxd[, 2]
z1 <- xxd[, 3]
plot3d(x1,y1,z1,col="green", size=4)
text3d(x1,y1,z1, row.names(mtcars))

Linked Literature, Linked TV – Everything Looks like a Graph

[image: cloud]

Ben Fry in ‘Visualizing Data‘:

Graphs can be a powerful way to represent relationships between data, but they are also a very abstract concept, which means that they run the danger of meaning something only to the creator of the graph. Often, simply showing the structure of the data says very little about what it actually means, even though it’s a perfectly accurate means of representing the data. Everything looks like a graph, but almost nothing should ever be drawn as one.

There is a tendency when using graphs to become smitten with one’s own data. Even though a graph of a few hundred nodes quickly becomes unreadable, it is often satisfying for the creator because the resulting figure is elegant and complex and may be subjectively beautiful, and the notion that the creator’s data is “complex” fits just fine with the creator’s own interpretation of it. Graphs have a tendency of making a data set look sophisticated and important, without having solved the problem of enlightening the viewer.

[image: markets]

Ben Fry is entirely correct.

I suggest two excuses for this indulgence: if the visuals are meaningful only to the creator of the graph, then let’s make everyone a graph curator. And if the things the data attempts to describe — for example, 14 million books and the world they in turn describe — are complex and beautiful and under-appreciated in their complexity and interconnectedness, … then perhaps it is ok to indulge ourselves. When do graphs become maps?

I report here on some experiments that stem from two collaborations around Linked Data. All the visuals in the post are views of bibliographic data, based on similarity measures derived from book / subject keyword associations, with visualization and a little additional analysis using Gephi. Click through to Flickr to see larger versions of any image. You can’t always see the inter-node links, but the presentation is based on graph layout tools.

Firstly, in my ongoing work in the NoTube project, we have been working with TV-related data, ranging from ‘social Web’ activity streams and user profiles to TV archive catalogues and classification systems like Lonclass. Secondly, over the summer I have been working with the Library Innovation Lab at Harvard, looking at ways of opening up bibliographic catalogues to the Web as Linked Data, and at ways of cross-linking Web materials (e.g. video materials) to a Webbified notion of ‘bookshelf‘.

In NoTube we have been making use of the Apache Mahout toolkit, which provided us with software for collaborative filtering recommendations, clustering and automatic classification. We’ve barely scratched the surface of what it can do, but here I show some initial results from applying Mahout to a 100,000-record subset of Harvard’s 14-million-entry catalogue. Mahout is built to scale, and the experiments here use datasets that are tiny from Mahout’s perspective.

[image: gothic_idol]

In NoTube, we used Mahout to compute similarity measures between each pair of items in a catalogue of BBC TV programmes for which we had privileged access to subjective viewer ratings. This was a sparse matrix of around 20,000 viewers, 12,500 broadcast items, with around 1.2 million ratings linking viewer to item. From these, after a few rather-too-casual tests using Mahout’s evaluation measure system, we picked its most promising similarity measure for our data (LogLikelihoodSimilarity or Tanimoto), and then for the most similar items, simply dumped out a huge data file that contained pairs of item numbers, plus a weight.

There are many, many smarter things we could’ve tried, but in the spirit of ‘minimal viable product‘, we didn’t try them yet. These include making use of additional metadata published by the BBC in RDF, so we could help Mahout out by letting it know that when Alice loves item_62 and Bob loves item_82127, we also know (via RDF) that both items are in the same TV series and brand. Why use fancy machine learning to rediscover things we already know, and that have been shared in the Web as data? We could make smarter use of metadata here. Secondly, we could have used data-derived or publisher-supplied metadata to explore whether different Mahout techniques work better for different segments of the content (factual vs fiction) or even, since we also have some demographic data, for different groups of users.

[image: markets]

Anyway, Mahout gave us item-to-item similarity measures for TV. Libby has written already about how we used these in ‘second screen’ (or ‘N-th’ screen, aka N-Screen) prototypes showing the impact that new Web standards might make on tired and outdated notions of “TV remote control”.

What if your remote control could personalise a view of some content collection? What if it could show you similar things based on your viewing behavior, and that of others? What if you could explore the ever-growing space of TV content using simple drag-and-drop metaphors, sending items to your TV or to your friends with simple tablet-based interfaces?

[image: medieval_society]

So that’s what we’ve been up to in NoTube. There are prototypes using BBC content (sadly not viewable by everyone due to rights restrictions), but also some experiments with TV materials from the Internet Archive, and some explorations that look at TED’s video collection as an example of Web-based content that (via ted.com and YouTube) is more generally viewable. Since every item in the BBC’s Archive is catalogued using a library-based classification system (Lonclass, itself based on UDC), the topic of cross-referencing books and TV has cropped up a few times.

[image: new_colonialism]

Meanwhile, in (the digital Public Library of) America, … the Harvard Library Innovation Lab team have a huge and fantastic dataset describing 14 million bibliographic records. I’m not sure exactly how many are ‘books’; libraries hold all kinds of objects these days. With the Harvard folk I’ve been trying to help figure out how we could cross-reference their records with other “Webby” sources, such as online video materials, again using TED as an example, because it is high quality but has very different metadata from the library records. So we’ve been looking at various tricks and techniques that could help us associate book records with those videos. For example, we can find tags for the videos on the TED site, but also on Delicious and on YouTube. However, taggers and librarians tend to describe things quite differently. Tags like “todo”, “inspirational”, “design”, “development” or “science” don’t help us pinpoint the exact library shelf where a viewer might go to read more on the topic. Or, conversely, they don’t help the library sites understand where within their online catalogues they could embed useful and engaging “related link” pointers off to TED.com or YouTube.

So we turned to other sources. Matching TED speaker names against Wikipedia allows us to find more information about many TED speakers. For example the Tim Berners-Lee entry, which in its Linked Data form helpfully tells us that this TED speaker is in the categories ‘Japan_Prize_laureates’, ‘English_inventors’, ‘1955_births’, ‘Internet_pioneers’. All good to know, but it’s hard to tell which categories tell us most about our speaker or video. At least now we’re in the Linked Data space, we can navigate around to Freebase, VIAF and a growing Web of data-sources. It should be possible at least to associate TimBL’s TED talks with library records for his book (so we annotate one bibliographic entry, from 14 million! …can’t we map areas, not items?).

[image: tv]

Can we do better? What if we also associated Tim’s two TED talk videos with other things in the library that had the same subject classifications or keywords as his book? What if we could build links between the two collections based not only on published authorship, but on topical information (tags, full-text analysis of TED talk transcripts)? Can we plan for a world where libraries have access not only to MARC records, but also to the full text of millions of books?

I’ve been exploring some of these ideas with David Weinberger, Paul Deschner and Matt Phillips at Harvard, and in NoTube with Libby Miller, Vicky Buser and others.

[image: edu]

Yesterday I took the time to do a visual sanity check of the bibliographic data as processed into a ‘similarity space’ in some Mahout experiments. This is a messy first pass at everything, but I figured it is better to blog something and look for collaborations and feedback than to chase perfection. For me, the big story is in linking TV materials to the gigantic back-story of context, discussion and debate curated by the world’s libraries. If we can imagine a view of our TV content catalogues, and of our libraries, as visual maps, with items clustered by similarity, then NoTube has shown that we can build these into the smartphones and tablets that are increasingly being used as TV remote controls.

And if the device you’re using to pause/play/stop or rewind your TV also has access to these vast archives as they open up as Linked Data (as well as GPS location data and your Facebook password), all kinds of possibilities arise for linked, annotated and fact-checked TV, as well as for showing a path for libraries to continue to serve as maps of the entertainment, intellectual and scientific terrain around us.

A brief technical description. Everything you see here was made with Gephi, Mahout and experimental data from the Library Innovation Lab at Harvard, plus a few scripts to glue it all together.

Mahout was given 100,000 extracts from the Harvard collection. Just main and sub-title, a local ID, and a list of topical phrases (mostly drawn from Library of Congress Subject Headings, with some local extensions). I don’t do anything clever with these or their sub-structure or their library-documented inter-relationships. They are treated as atomic codes, and flattened into long pseudo-words such as ‘occupational_diseases_prevention_control’, ‘french_literature_16th_century_history_and_criticism’, ‘motion_pictures_political_aspects’, ‘songs_high_voice_with_lute’, ‘dance_music_czechoslovakia’ or ‘communism_and_culture_soviet_union’. All of human life is there.
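
As a rough illustration of that flattening step (the heading string and regular expression here are purely illustrative, not the actual pipeline code), something along these lines produces the pseudo-words above:

import re

def flatten(phrase):
    # collapse a topical phrase into a single pseudo-word token
    words = re.findall(r"[a-z0-9]+", phrase.lower())
    return "_".join(words)

flatten("Occupational diseases -- Prevention & control")
# -> 'occupational_diseases_prevention_control'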

David Weinberger has been calling this gigantic scope our problem of the ‘Taxonomy of Everything’, and the label fits. By mushing phrases into fake words, I get to re-use some Mahout tools and avoid writing code. The result is a matrix of 100,000 bibliographic entities by 27,684 unique topical codes. Initially I made the simple test of feeding this as input to Mahout’s K-Means clustering implementation. Manually inspecting the most popular topical codes for each cluster (both with k=12, putting all books into 12 clusters, and with k=1000 for more fine-grained groupings), I was impressed by the initial results.
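
For a feel of the shape of that computation, here is a hypothetical small-scale equivalent in Python using scikit-learn; the real runs used Mahout’s K-Means over the full 100,000 records, so treat this as a sketch of the idea rather than the actual setup:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# one "document" per book: its flattened topical codes, space-separated
docs = [
    "health_policy_united_states medical_care_united_states",
    "brain_physiology biological_rhythms oscillations",
    # ... 100,000 of these in the real experiment
]
vec = CountVectorizer(token_pattern=r"\S+")
X = vec.fit_transform(docs)                  # books x topical-codes matrix
km = KMeans(n_clusters=2, n_init=10).fit(X)  # k=12 or k=1000 in the Mahout runs

# inspect the most popular topical codes in each cluster
terms = np.array(vec.get_feature_names_out())
for c in range(km.n_clusters):
    counts = np.asarray(X[km.labels_ == c].sum(axis=0)).ravel()
    print(c, terms[counts.argsort()[::-1][:5]])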

I only have these in crude text-file form. See hv/_k1000.txt and hv/_twelve.txt (plus the dictionary, in the big file _harv_dict.txt).

For example, in the 1000-cluster version, we get: ‘medical_policy_united_states’, ‘health_care_reform_united_states’, ‘health_policy_united_states’, ‘medical_care_united_states’, ‘delivery_of_health_care_united_states’, ‘medical_economics_united_states’, ‘politics_united_states’, ‘health_services_accessibility_united_states’, ‘insurance_health_united_states’, ‘economics_medical_united_states’.

Or another cluster: ‘brain_physiology’, ‘biological_rhythms’, ‘oscillations’.

How about: ‘museums_collection_management’, ‘museums_history’, ‘archives’, ‘museums_acquisitions’, ‘collectors_and_collecting_history’?

Another, conceptually nearby (but that proximity isn’t visible through this simple clustering approach), ‘art_thefts’, ‘theft_from_museums’, ‘archaeological_thefts’, ‘art_museums’, ‘cultural_property_protection_law_and_legislation’, …

Ok, I am cherry-picking. There is some nonsense in there too, but surprisingly little. And probably some associations that might cause offense. But it shows that the tooling is capable (by looking at book/topic associations) of picking out similarities that are significant. Maybe all of this is also available in LCSH SKOS form already, but I doubt it. (A side-goal here is to publish these clusters for re-use elsewhere…).

So, what if we take this, and instead compute (a bit like we did in NoTube from ratings data) similarity measures between books?

I tried that, without using much of Mahout’s sophistication. I used its ‘rowsimilarityjob’ facility and generated similarity measures for each book, then threw out most of the similarities except the top 5, later the top 3, from each book. From this point, I moved things over into the Gephi toolkit (“photoshop for graphs”), as I wanted to see how things looked.
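
Roughly this kind of glue script (hypothetical; the names and file format are illustrative rather than the actual scripts used) keeps the strongest few similarities per book and writes an edge list that Gephi can import:

import csv
from collections import defaultdict

TOP_N = 3  # kept the top 5 at first, later the top 3, per book

def top_edges(pairs, n=TOP_N):
    # pairs: iterable of (book_a, book_b, weight) similarity rows
    by_book = defaultdict(list)
    for a, b, w in pairs:
        by_book[a].append((w, b))
    for a, neighbours in by_book.items():
        for w, b in sorted(neighbours, reverse=True)[:n]:
            yield a, b, w

def write_gephi_edges(pairs, path="edges.csv"):
    with open(path, "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["Source", "Target", "Weight"])  # columns Gephi's CSV import recognises
        for a, b, w in top_edges(pairs):
            out.writerow([a, b, w])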

First results are shown here. Nodes are books, links are strong similarity measures. Node labels are titles, or sometimes title + subtitle. Some views (the black-background ones) use Gephi’s “modularity detection” analysis of the link graph; for others (white background) I imported the 1000 clusters from the earlier Mahout experiments. I tried various metrics in Gephi and mapped these to node size. This might fairly be called ‘playing around’ at this stage, but there is at least a pipeline from raw data (eventually Linked Data, I hope) through Mahout to Gephi and some visual maps of literature.

[image: 1k_overview]

What does all this show?

That if we can find a way to open up bibliographic datasets, there are solid open source tools out there that can give us new ways of exploring the items described in the data. That those tools (e.g. Mahout, Gephi) provide many different ways of computing similarity, clustering, and presenting. There is no single ‘right answer’ for how to present literature or TV archive content as a visual map, clustering “like with like”, or arranging neighbourhoods. And there is no restriction that we must work dataset-by-dataset, either. Why not use what we know from movie/TV recommendations to arrange the similarity space for books? Or vice-versa?

I must emphasise (to return to Ben Fry’s opening remark) that this is a proof-of-concept. It shows some potential, but it is neither a user interface, nor particularly informative. Gephi as a tool for making such visualizations is powerful, but it too is not a viable interface for navigating TV content. However these tools do give us a glimpse of what is hidden in giant and dull-sounding databases, and some hints for how patterns extracted from these collections could help guide us through literature, TV or more.

Next steps? There are many things that could be tried; more than I could attempt. I’d like to get some variant of these 2D maps onto iPad/Android tablets, loaded with TV content. I’d like to continue exploring the bridges between content (e.g. TED) and library materials, on tablets and PCs. I’d like to look at Mahout’s “collocated terms” extraction tools in more detail. These allow us to pull out recurring phrases (e.g. “Zero Sum”, “climate change”, “golden rule”, “high school”, “black holes” were found in TED transcripts). I’ve also tried extracting bi-gram phrases from book titles using the same utility; a naive version of the idea is sketched below. Such tools offer some prospect of bulk-creating links not just between single items in collections, but between neighbourhood regions in maps such as those shown here. The cross-links will never be perfect, but then what’s a little serendipity between friends?
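
As a toy illustration of the bi-gram idea (Mahout’s collocation tooling does this statistically and at scale; this sketch and its example title are purely illustrative):

def title_bigrams(title):
    # naive bi-gram phrases from a book title
    words = [w.lower().strip(",.:;") for w in title.split()]
    return [" ".join(pair) for pair in zip(words, words[1:])]

title_bigrams("Black holes and time warps: Einstein's outrageous legacy")
# -> ['black holes', 'holes and', 'and time', 'time warps', ...]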

As full text access to book data looms, and TV archives are finding their way online, we’ll need to find ways of combining user interface, bibliographic and data science skills if we’re really going to make the most of the treasures that are being shared in the Web. Since I’ve only fragments of each, I’m always drawn back to think of this in terms of collaborative work.

A few years ago, Netflix had the vision and cash to pretty much buy the attention of the entire machine learning community for a measly million dollars. Researchers love to have substantive datasets to work with, and the (now retracted) Netflix dataset is still widely sought after. Without a budget to match Netflix’s, could we still somehow offer prizes to help get such attention directed towards analysis and exploitation of linked TV and library data? We could offer free access to the world’s literature via a global network of libraries? Except everyone gets that for free already. Maybe we don’t need prizes.

Nearby in the Web: NoTube N-Screen, Flickr slideshow

K-means test in Octave

Matlab comes with K-means clustering ‘out of the box’. The GNU Octave work-alike system doesn’t, and there seem to be quite a few implementations floating around. I picked the first from Google, pretty carelessly, saving it as myKmeans.m. These are notes from trying to reproduce this Matlab demo with Octave. Not rocket science, but worth writing down so I can find it again.

% cluster geometry: scale M, horizontal offset W, vertical offset H, S points per cluster
M=4
W=2
H=4
S=500
% four offset 2-D Gaussian clusters plus one centred on the origin
a = M * [randn(S,1)+W, randn(S,1)+H];
b = M * [randn(S,1)+W, randn(S,1)-H];
c = M * [randn(S,1)-W, randn(S,1)+H];
d = M * [randn(S,1)-W, randn(S,1)-H];
e = M * [randn(S,1), randn(S,1)];
all_data = [a;b;c;d;e];
% plot each synthetic cluster in its own colour
plot(a(:,1), a(:,2),'.');
hold on;
plot(b(:,1), b(:,2),'r.');
plot(c(:,1), c(:,2),'g.');
plot(d(:,1), d(:,2),'k.');
plot(e(:,1), e(:,2),'c.');
% using http://www.christianherta.de/kmeans.html as myKmeans.m
[centroid,pointsInCluster,assignment] = myKmeans(all_data,5)
% overlay the five recovered centroids
scatter(centroid(:,1),centroid(:,2),'x');

Video Linking: Archives and Encyclopedias

This is a quick visual teaser for some archive.org-related work I’m doing with NoTube colleagues, and a collaboration with Kingsley Idehen on navigating it.

In NoTube we are trying to match people and TV content by using rich linked data representations of both. I love Archive.org, and with their help have crawled an experimental subset of the video-related metadata for the Archive. I’ve also used a couple of other sources: Sean P. Aune’s list of 40 great movies, and the Wikipedia page listing US public domain films. I fixed, merged and scraped until I had a reasonable sample dataset for testing.

I wanted to test the Microsoft Pivot Viewer (a Silverlight control), and since OpenLink’s Virtuoso package now has built-in support, I got talking with Kingsley and we ended up with the following demo. Since not everyone has Silverlight, and this is just a rough prototype that may be offline, I’ve made a few screenshots. The real thing is very visual, with animated zooms and transitions, but screenshots give the basic idea.

Notes: the core dataset for now is just links between archive.org entries and Wikipedia/dbpedia pages. In NoTube we’ll also try Lupedia, Zemanta and Reuters’ OpenCalais on the Archive.org descriptions to see if they suggest other useful links and categories, as well as any other enrichment sources (delicious tags, machine learning) we can find. There is also more metadata from the Archive that we should be using.

This preview simply shows how one extra fact per archived item creates new opportunities for navigation, discovery and understanding. Note that the UI is in no way tuned to be TV-, video- or archive-specific; rather, it just lets you explore a group of items by their ‘facets’ or common properties. It also reveals that wiki data is rather chaotic; however, some fields (release date, runtime, director, star etc.) are reliably present. And of course, since the data is from Wikipedia, users can always fix the data.

You often hear Linked Data enthusiasts talk about data “silos”, and the need to interconnect them. All that means here is that when collections are linked, improvements to information on one side of the link automatically bring improvements to the other. When a Wikipedia page about a director, actor or movie is improved, it now also improves our means of navigating Archive.org’s wonderful collection. And when someone contributes new video or new HTML5-powered players to the Archive, they’re also enriching the Encyclopedia.

Archive.org films on a timeline by release date according to Wikipedia.

One thing to mention is that everything here comes from the Wikipedia data that is automatically extracted by DBpedia, and the extractors don’t currently work perfectly on all films, so it should get better in the future. I also added a lot of the image links myself, semi-automatically. For now, this navigation is much more fact-based than topic-based; however, we do have Wikipedia categories for each film, director, studio etc., and these have been mapped to other category systems (formal and informal), so there are a lot of other directions to explore.

What else can we do? How about flipping the tiled bar chart to organize by the film’s distributor, and constraining the ‘release date‘ facet to the 1940s:

That’s nice. But remember that with Linked Data, you’re always dealing with a subset of data. It’s hard to know (and it’s hard for the interface designers to show us) when you have all the relevant data in hand. In this case, we can see what this is telling us about the videos currently available within the demo. But does it tell us anything interesting about all the films in the Archive? All the films in the world? Maybe a little, but interpretation is difficult.

Next: zoom in to a specific item. The legendary Plan 9 from Outer Space (wikipedia / dbpedia).

Note the HTML-based info panel on the right hand side. In this case it’s automatically generated by Virtuoso from properties of the item. A TV-oriented version would be less generic.

Finally, we can explore the collection by constraining the timeline to show us items organized according to release date, for some facet. Here we show it picking out the career of one Edward J. Kay, at least as far as he shows up as composer of items in this collection:

Now turning back to Wikipedia to learn about ‘Edward J. Kay’, I find he has no entry (beyond these passing mentions of his name) in the English Wikipedia, despite his work on The Ape Man, The Fatal Hour, and other films.  While the German Wikipedia does honour him with an entry, I wonder whether this kind of Linked Data navigation will change the dynamics of the ‘deletionism‘ debates at Wikipedia.  Firstly by showing that structured data managed elsewhere can enrich the Wikipedia (and vice-versa), removing some pressure for a single Wiki to cover everything. Secondly it provides a tool to stand further back from the data and view things in a larger context; a context where for example Edward J. Kay’s achievements become clearer.

Much like Freebase Parallax, the Pivot viewer hints at a future in which we explore data by navigating from sets of things to other sets of things, like the set of films Edward J. Kay contributed to. Pivot doesn’t yet cover this, but it does very vividly present the potential for this kind of navigation, showing that navigation of films, TV shows and actors may be richer when it embraces more general mechanisms.

Easier in RDFa: multiple types and the influence of syntax on semantics

RDF is defined as an abstract data model, plus a collection of practical notations for exchanging RDF descriptions (eg. RDF/XML, RDFa, Turtle/N3). In theory, your data modelling activities are conducted in splendid isolation from the sleazy details of each syntax. RDF vocabularies define classes of thing, and various types of property/relationship that link those things. And then instance data uses arbitrary combinations of those vocabularies to make claims about stuff. Nothing in your vocabulary design says anything about XML or text formats or HTML or other syntactic details.

All that said, syntactic considerations can mess with your modelling. I’ve just written this up for the Linked Library Data group, but since the point isn’t often made, I thought I’d do so here too.

RDF instance data, ie. descriptions of stuff, is peculiar in that it lets you use multiple independent schemas at the same time. So I might use SKOS, FOAF, Bio, Dublin Core and DOAP all jumbled up together in one document. But there are some considerations when you want to mention that something is in multiple classes. While you can do this in any RDF notation, it is rather ugly in RDF/XML, historically RDF’s most official, standard notation. Furthermore, if you want to mention that two things are related by two or more specified properties, this can be super ugly in RDF/XML. Or at least rather verbose. These practical facts have tended to guide the descriptive idioms used in real world RDF data. RDFa changes the landscape significantly, so let me give some examples.

Backstory – decentralised extensibility

RDF classes from one vocabulary can be linked to more general or specific classes in another; we use rdfs:subClassOf for this. Similarly, RDF properties can be linked with rdfs:subPropertyOf claims. So for example in FOAF we might define a class foaf:Organization, and leave it at that. Meanwhile over in the Org vocabulary, they care enough to distinguish a subclass, org:FormalOrganization. This is great! Incremental, decentralised extensibility. Similarly, FOAF has foaf:knows as a basic link between people who know each other, but over in the relationship vocabulary, that has been specialized, and we see relationships like ‘livesWith’, ‘collaboratesWith’. These carry more specific meaning, but they also imply a foaf:knows link.

This kind of machine-readable (RDFS/OWL) documentation of the patterns of meaning amongst properties (and classes) has many uses. It could be used to infer missing information: if Ian writes RDF saying “Alice collaboratesWith Bob” but doesn’t explicitly say that Alice also knows Bob, a schema-aware processor can add this in. Or it can be used at query time, if someone asks “who does Alice know?”. But using this information is not mandatory, and this creates a problem for publishers. Should they publish redundant information to make it easier for simple data consumers to understand the data without knowing about the more detailed (and often more recent) vocabulary used?
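
A toy sketch of that ‘schema-aware processor’ idea, in plain Python rather than a real RDF toolkit (the property names follow FOAF and the relationship vocabulary, but the code is purely illustrative):

# map of specialised properties to the more general ones they imply
SUBPROPERTY_OF = {
    "rel:collaboratesWith": "foaf:knows",
    "rel:livesWith": "foaf:knows",
}

def materialise(triples):
    # one-step materialisation; real RDFS reasoning also follows chains
    inferred = set(triples)
    for s, p, o in triples:
        if p in SUBPROPERTY_OF:
            inferred.add((s, SUBPROPERTY_OF[p], o))
    return inferred

data = {("ex:Alice", "rel:collaboratesWith", "ex:Bob")}
print(materialise(data))
# now also contains ('ex:Alice', 'foaf:knows', 'ex:Bob')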

Historically, adding redundant triples to capture the more general claims has been rather expensive – both in terms of markup beauty, and also file size. RDFa changes this.

Here’s a simple RDF/XML description of something.

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:foaf="http://xmlns.com/foaf/0.1/">
<foaf:Person rdf:about="#fred">
 <foaf:name>Fred Flintstone</foaf:name>
</foaf:Person>
</rdf:RDF>

…and here is how it would have to look if we wanted to add a 2nd type:

<foaf:Person rdf:about="#fred"
 rdf:type="http://example.com/vocab2#BiblioPerson">
  <foaf:name>Fred Flintstone</foaf:name>
</foaf:Person>
</rdf:RDF>

To add a 3rd or 4th type, we’d need to add extra sub-elements, e.g.

<rdf:type rdf:resource="http://example.com/vocab2#BiblioPerson"/>

Note that the full URI for the vocabulary needs to be used at every occurrence of the type. Here’s the same thing, with multiple types, in RDFa.

<html>
<head><title>a page about Fred</title></head>
<body>
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:vocab2="http://example.com/vocab2#"
 about="#fred" typeof="foaf:Person vocab2:BiblioPerson" >
<span property="foaf:name">Fred Flintstone</span>
</div>
</body>
</html>

RDFa 1.0 requires the second vocabulary’s namespace to be declared, but after that it is pretty concise if you want to throw in a 2nd or a 3rd type for whatever you’re describing. If you’re talking about a relationship between people, instead of rel="foaf:knows" you could put rel="foaf:knows rel:livesWith"; if you wanted to mention that something was in the class not just of organizations, but of formal organizations, you could write typeof="foaf:Organization org:FormalOrganization".

Properties and classes serve quite different social roles in RDF. The classes tend towards being dull, boring, because they are the point of connection between different datasets and applications. The detail, personality and real information content in RDF lives in the properties. But both classes and properties fall into specialisation hierarchies that cross independent vocabularies. It is quite a common experience to feel stuck, not sure whether to use a widely known but vague term, or a more precise but ‘niche’, new or specialised vocabulary. As RDF syntaxes improve, this tension can melt away somewhat. In RDFa it is significantly easier to simply publish both, allowing smart clients to understand your full detail, and simple clients to find the patterns they expect without having to do schema-based processing.