Learning WebGL on your iPhone: Radial Blur in GLSL

A misleading title perhaps, since WebGL isn’t generally available to iOS platform developers. Hacks aside, if you’re learning WebGL and have an iPhone, the phone itself is still a very educational environment. WebGL essentially wraps OpenGL ES in a modern Web browser: you feed data in and out as textures associated with browser canvas areas, and manipulate it either per-vertex or per-pixel by writing ‘vertex’ and ‘fragment’ shaders in the GLSL language. Although there are fantastic tools like Three.js that hide some of these details, sooner or later you’ll encounter GLSL. The iPhone, thanks to tools like GLSL Studio and Paragraf, is a great environment for playing with GLSL. And playing is a great way of learning.
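To make the vertex/fragment split concrete, here is a minimal, purely illustrative WebGL-style shader pair (the attribute and varying names are my own choices, not from any particular tutorial): the vertex shader runs once per vertex and just passes the geometry through, while the fragment shader runs once per pixel and derives a colour from its position.

// Vertex shader: one run per vertex; pass positions through unchanged.
attribute vec3 position;
varying vec2 vUv;
void main() {
  vUv = position.xy * 0.5 + 0.5;        // map clip space (-1..1) to (0..1)
  gl_Position = vec4(position, 1.0);
}

// Fragment shader: one run per pixel; colour follows screen position.
precision mediump float;
varying vec2 vUv;
void main() {
  gl_FragColor = vec4(vUv, 0.5, 1.0);   // a simple red/green gradient
}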

Radial Blur GLSL

GLSL fragment shaders are all about thinking of visuals “per-pixel”. You can get a quick feel for what’s possible by exploring the GLSL Sandbox site. The sandbox lets you live-edit GLSL shaders, which are then applied to a display area with trivial geometry – the viewing area is just two big triangles. See Iñigo Quilez’s livecoding videos or ‘rendering worlds with two triangles’ for more inspiration.
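To give a flavour of the sandbox style, here is a small illustrative fragment shader of my own (assuming the sandbox’s usual time, mouse and resolution uniforms): it draws animated rings centred on the pointer, with one calculation per pixel.

precision mediump float;
uniform float time;        // seconds since the shader started
uniform vec2  mouse;       // pointer position, roughly in 0..1
uniform vec2  resolution;  // canvas size in pixels

void main() {
  vec2 uv = gl_FragCoord.xy / resolution;               // this pixel, in 0..1
  float d = distance(uv, mouse);                         // distance to the pointer
  float ring = 0.5 + 0.5 * sin(40.0 * d - 4.0 * time);   // animated concentric rings
  gl_FragColor = vec4(vec3(ring) * (1.0 - d), 1.0);      // fade with distance
}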

All of which is still rocket science to me, but I was surprised at how accessible some of these ideas and effects can be. Back to the iPhone: using Paragraf, you can write GLSL fragment shaders, whose inputs include multi-touch events and textures from device cameras and photo galleries. This is more than enough to learn the basics of GLSL, even with realtime streaming video. Meanwhile, back in your Web browser, the new WebRTC video standards work is making such streams accessible to WebGL.

Here is a quick example based on Thibaut Despoulain‘s recent three.js-based tutorials showing techniques for compositing, animation and glow effects in WebGL. His Volumetric Light Approximation post provides a fragment shader for computing radial blur; see his live demo for a control panel showing all the parameters that can be tweaked. Thanks to Paragraf, we can also adapt that shader to run on a phone, blurring the camera input around the location of the last on-screen touch (‘t1’). Here is the original, embedded within a .js library. And here is a cut-down version adapted to use the pre-declared structures from Paragraf (or see the gist for a cleaner copy):

vec3 draw() {
  // p is the current pixel position and t1 the last touch (Paragraf built-ins)
  vec2 vUv = p;
  float fX = t1.x, fY = t1.y, illuminationDecay = 1.0,
        fExposure = 0.2, fDecay = 0.93,
        fDensity = .3, fWeight = 0.4, fClamp = 1.0;
  const int iSamples = 8;
  // step vector from the touch point towards this pixel
  vec2 delta = vec2(vUv - vec2(fX, fY)) / float(iSamples) * fDensity,
       coord = vUv;
  vec4 FragColor = vec4(0.0);
  // walk back towards the touch point, accumulating progressively dimmer samples
  for (int i = 0; i < iSamples; i++) {
    coord -= delta;
    vec4 texel = vec4(cam(coord), 0.0);  // cam() samples the camera texture
    texel *= illuminationDecay * fWeight;
    FragColor += texel;
    illuminationDecay *= fDecay;
  }
  FragColor *= fExposure;
  FragColor = clamp(FragColor, 0.0, fClamp);
  return vec3(FragColor);
}

Cat photo

Blur

As I write this, I realise I’m blurring the lines between ‘radial blur’ and its application to create ‘god-rays’ in a richer setting. As I say, I’m not an expert here (and I’ve just posted a quick example and two hasty screenshots). My main purpose was rather to communicate that tools for learning more about such things are now quite literally in many people’s hands, and that using GLSL for real-time per-pixel processing of smartphone camera input is a really fun way to dig deeper.
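For what it’s worth, here is one illustrative way the radial-blur/god-rays distinction could show up in the Paragraf version above. This is my own tweak rather than anything from Thibaut’s code: instead of returning the blur on its own, add it back on top of the unblurred camera image, so the accumulated samples behave more like a glow radiating from the touch point.

vec3 draw() {
  vec2 vUv = p;
  float fX = t1.x, fY = t1.y, illuminationDecay = 1.0,
        fExposure = 0.2, fDecay = 0.93,
        fDensity = .3, fWeight = 0.4, fClamp = 1.0;
  const int iSamples = 8;
  vec2 delta = vec2(vUv - vec2(fX, fY)) / float(iSamples) * fDensity,
       coord = vUv;
  vec3 blurred = vec3(0.0);
  // same accumulation loop as before, kept as a vec3 this time
  for (int i = 0; i < iSamples; i++) {
    coord -= delta;
    blurred += cam(coord) * illuminationDecay * fWeight;
    illuminationDecay *= fDecay;
  }
  blurred = clamp(blurred * fExposure, 0.0, fClamp);
  // composite: the untouched camera image plus the radial glow
  return clamp(cam(vUv) + blurred, 0.0, 1.0);
}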

At this point I should emphasise that draw() here, and other conventions, are from Paragraf; see any GLSL or WebGL docs, or the original example here, for details.

Schema.org and One Hundred Years of Search

A talk from the London SemWeb meetup hosted by the BBC Academy in London, Mar 30 2012…

Slides and video are already on the Web, but I wanted to post this as an excuse to plug the new Web History Community Group that Max and I have just started at W3C. The talk was part of the Libraries, Media and the Semantic Web meetup hosted by the BBC in March. It gave an opportunity to run through some forgotten history, linking Paul Otlet, the Universal Decimal Classification, schema.org and some 100-year-old search logs from Otlet’s Mundaneum. Having worked with the BBC Lonclass system (a descendant of Otlet’s UDC), and collaborated with Aida Slavic of the UDC on their publication of Linked Data, I was happy to be given the chance to try to spell out these hidden connections. It also turned out that Google colleagues have been working to support the Mundaneum and the memory of this early work, and I’m happy that the talk led to discussions with both the Mundaneum and the Computer History Museum about the new Web History group at W3C.

So, everything’s connected. Many thanks to W. Boyd Rayward (Otlet’s biographer) for sharing the ancient logs that inspired the talk (see slides/video for a few more details). I hope we can find more such things to share in the Web History group, because the history of the Web didn’t begin with the Web…

InMaps

From LinkedIn’s InMaps network graphing service; see also my map

I’ve been digging around in graph-mining and visualization tools lately, and this use at LinkedIn is one of the few cases where such things actually break through into mainstream usefulness. Well, perhaps not exactly useful, but it’s nice to see how groups overlap.

In my chart here, the big tight-knit, self-referential cluster on the left is Joost, the TV startup I joined in 2006/7. At the top there is another tightly-linked community: the W3C team, where I worked 1999-2005. In between is a fuzzier cluster that I can only label ‘Web 2’, ‘Social Web’, … lots of Web technology standards sort of people. Then there are the linkers, like Max Froumentin and Robin Berjon between the W3C and Joost worlds, or Libby Miller and folk from the Asemantics and Apache scene (Alberto Reggiori, Stefano Mazzocchi) who link Joost through to the Semantic Web scene in the lower right.

The LinkedIn analysis finds distinct clusters that are fairly easy to identify as “Digital Libraries (Museums, Archives…)” and “Linked Data / RDF / Semantic Web”, even while being richly interconnected. I’m not surprised there’s a cluster for the VU University Amsterdam (even though it is well-linked to SW and digital libraries). However, the presence of a BBC cluster was a surprise; either it shows how closely-knit the BBC community is, or just how much I’ve been hanging around with them. And that’s the intriguing thing; each individual map is just a per-person view, a thin slice through the bigger picture. It must be fun to see the whole dataset…

For more on all this, see LinkedIn or the InMaps site.

Talis

Most of us around RDF and the Semantic Web have by now probably heard the news about Talis; if not, see Leigh Dodds’ blog post. Talis are shutting down their general activities around Semantic Web and Linked Data, including the Kasabi data marketplace. Failures are usually complex and Twitter is already abuzz with punditry, speculation and ill-judged extrapolation. I just wanted to take a minute aside from all that to say something that I’ve not got around to before: “thanks!”.

Regardless of the business story, we ought to appreciate on a personal level all the hard work that the team (past and present) at Talis have put into popularising the ideas and technology around Linked Data. Talis had an extraordinarily bright, energetic and committed team, who put great passion into their work – and into supporting the work of others. All of us in the community around Linked Data have benefitted enormously from this, and will continue to benefit from the various projects and initiatives that Talis have supported.  Perhaps in a nearby parallel universe, there is a thriving alternate Talis whose efforts benefited the business more, and the commons less. We can only speculate. In this universe, the most appropriate word at this point is just “thanks”…

Everything Still Looks Like A Graph (but graphs look like maps)

Last October I posted a writeup of some experiments that illustrate item-to-item similarities from Apache Mahout using Gephi for visualization. This was under a heading that quotes Ben Fry, “Everything looks like a graph” (but almost nothing should ever be drawn as one). There was also some followup discussion on the Gephi project blog.

I’ve just seen a cluster of related Gephi experiments which reinforce some of my prejudices from last year’s investigations.

These are all well worth a read, for showing both the potential and the limitations of Gephi. It’s not hard to find critiques of the intelligibility or utility of squiggly-but-inspiring network diagrams; Ben Fry’s point was well made. However, I think each of the examples I link here (and my earlier experiments) shows there is some potential in such layouts for presenting ‘similarity neighbourhoods’ in a fairly appealing and intuitive form.

In the case of the history of Philosophy, it feels a little odd to use a network diagram, since the chronological / timeline aspect is quite important to the notion of a history. But it still manages to group ‘like with like’, to the extent that the inter-node connections probably needn’t even be shown.

I’m a lot more comfortable taking the ‘everything looks like a graph’ route if we’re essentially generating a similarity landscape. Whether these ‘landscapes’ can be made stable in the face of dataset changes, or of re-generation of the visualization, is a longer story. Gephi is currently a desktop tool, and as such has memory issues with really large graphs, but I think it shows the potential for landscape-oriented graph visualization. Longer term, I expect we’ll see more of a split between something like Hadoop+Mahout for big-data crunching (e.g. see Mahout’s spectral clustering component, which takes node-to-node affinities as input) and a WebGL, browser-based UI for the front-end. It’s a shame the Gephi efforts in this direction (GraphGL) seem to have gone quiet, but for those of you with modern graphics cards and browsers, take a look at alteredqualia’s ‘dynamic terrain’ WebGL demo to get a feel for how landscape-shaped datasets could be presented…

Also, btw, look at the griffsgraphs landscape of literature; this was built solely from ‘influences’ relationships in Wikipedia… then compare it with the landscapes I was generating last year from Harvard bibliographic data, which were built solely from Harvard’s subject classification data. Now imagine if we could mutate the resulting ‘map’ by choosing our own weighting, composited across these two sources. Perhaps for the music or movies or TV areas of the map we might composite in other sources, based on activity data analysed by a recommendation engine, or just on different factual relationships.

There’s no single ‘correct’ view of the bibliographic landscape; what makes sense for a PhD researcher, a job seeker or a schoolkid will naturally vary. This is also true of similarity measures in general, i.e. for see-also lists in plain HTML as well as for fancy graph- or landscape-based visualizations. There are more than metaphorical comparisons to be drawn with the kind of compositing tools we see in systems like Blender, and plenty of opportunities for putting control into end-user rather than engineering hands.

In just the last year, Harvard (and most recently OCLC) have released bibliographic datasets for public re-use, the Wikidata project has launched, and browser support for WebGL has been improving with every release. Despite all the reasonable concerns out there about visualizing graphs as graphs, there’s a lot to be said for treating graphs as maps…