Learning WebGL on your iPhone: Radial Blur in GLSL

A misleading title perhaps, since WebGL isn’t generally available to iOS platform developers. Hacks aside, if you’re learning WebGL and have an iPhone it is still a very educational environment. WebGL essentially wraps OpenGL ES in a modern Web browser environment. You can feed data in and out as textures associated with browser canvas areas, manipulating data objects either per-vertex or per-pixel by writing ‘vertex’ and ‘fragment’ shaders in the GLSL language. Although there are fantastic tools out there like Three.js to hide some of these details, sooner or later you’ll encounter GLSL. The iPhone, thanks to tools like GLSL Studio and Paragraf, is a great environment for playing with GLSL. And playing is a great way of learning.

Radial Blur GLSL

GLSL fragment shaders are all about thinking about visuals “per-pixel”. You can get a quick feel for what’s possible by exploring the GLSL Sandbox site. The sandbox lets you live-edit GLSL shaders, which are then applied to a display area with trivial geometry – the viewing area is just two big triangles. See Iñigo Quilez’s livecoding videos or ‘rendering worlds with two triangles‘ for more inspiration.

All of which is still rocket science to me, but I was surprised at how accessible some of these ideas and effects can be. Back to the iPhone: using Paragraf, you can write GLSL fragment shaders, whose inputs include multi-touch events and textures from device cameras and photo galleries. This is more than enough to learn the basics of GLSL, even with realtime streaming video. Meanwhile, back in your Web browser, the new WebRTC video standards work is making such streams accessible to WebGL.

Here is a quick example based on Thibaut Despoulain‘s recent three.js-based tutorials showing techniques for compositing, animation and glow effects in WebGL. His Volumetric Light Approximation post provides a fragment shader for computing radial blur; see his live demo for a control panel showing all the parameters that can be tweaked. Thanks to Paragraf, we can also adapt that shader to run on a phone, blurring the camera input around the location of the last on-screen touch (‘t1’). Here is the original, embedded within a .js library. And here is a cut-down version adapted to use the pre-declared structures from Paragraf (or see the gist for a cleaner copy):

vec3 draw() {
  // Texture coordinate of the current pixel (Paragraf supplies this as 'p').
  vec2 vUv = p;
  // Blur centre: the last on-screen touch ('t1'), plus the tuning parameters.
  float fX = t1.x, fY = t1.y, illuminationDecay = 1.0,
        fExposure = 0.2, fDecay = 0.93,
        fDensity = 0.3, fWeight = 0.4, fClamp = 1.0;
  const int iSamples = 8;
  // Step vector from this pixel towards the touch point, spread over the samples.
  vec2 delta = (vUv - vec2(fX, fY)) / float(iSamples) * fDensity;
  vec2 coord = vUv;
  vec4 FragColor = vec4(0.0);
  // Accumulate samples along the line towards the touch point,
  // each one weighted a little less than the last (the 'decay').
  for (int i = 0; i < iSamples; i++) {
    coord -= delta;
    vec4 texel = vec4(cam(coord), 0.0);
    texel *= illuminationDecay * fWeight;
    FragColor += texel;
    illuminationDecay *= fDecay;
  }
  // Scale and clamp the accumulated colour.
  FragColor *= fExposure;
  FragColor = clamp(FragColor, 0.0, fClamp);
  return vec3(FragColor);
}

Cat photo

Blur

As I write this, I realise I’m blurring the lines between ‘radial blur’ and its application to create ‘god-rays’ in a richer setting. As I say, I’m not an expert here (and I’m just posting a quick example and two hasty screenshots). My main purpose was rather to communicate that tools for learning more about such things are now quite literally in many people’s hands. And also that using GLSL for real-time per-pixel processing of smartphone camera input is a really fun way to dig deeper.

At this point I should emphasise that draw() here and other conventions are from Paragraf; see any GLSL or WebGL docs, or the original example here, for details.

Video Linking: Archives and Encyclopedias

This is a quick visual teaser for some archive.org-related work I’m doing with NoTube colleagues, and a collaboration with Kingsley Idehen on navigating it.

In NoTube we are trying to match people and TV content by using rich linked data representations of both. I love Archive.org and with their help have crawled an experimental subset of the video-related metadata for the Archive. I’ve also used a couple of other sources: Sean P. Aune’s list of 40 great movies, and the Wikipedia page listing US public domain films. I fixed, merged and scraped until I had a reasonable sample dataset for testing.

I wanted to test the Microsoft Pivot Viewer (a Silverlight control), and since OpenLink’s Virtuoso package now has built-in support, I got talking with Kingsley and we ended up with the following demo. Since not everyone has Silverlight, and this is just a rough prototype that may be offline, I’ve made a few screenshots. The real thing is very visual, with animated zooms and transitions, but screenshots give the basic idea.

Notes: the core dataset for now is just links between archive.org entries and Wikipedia/dbpedia pages. In NoTube we’ll also try the Lupedia, Zemanta and Reuters OpenCalais services on the Archive.org descriptions to see if they suggest other useful links and categories, along with any other enrichment sources (delicious tags, machine learning) we can find. There is also more metadata from the Archive that we should be using.

This preview simply shows how one extra fact per archived item creates new opportunities for navigation, discovery and understanding. Note that the UI is in no way tuned to be TV, video or archive specific; rather it just lets you explore a group of items by their ‘facets’ or common properties. It also reveals that wiki data is rather chaotic; however, some fields (release date, runtime, director, star etc.) are reliably present. And of course, since the data is from Wikipedia, users can always fix the data.

You often hear Linked Data enthusiasts talk about data “silos”, and the need to interconnect them. All that means here is that when collections are linked, improvements to information on one side of the link automatically bring improvements to the other. When a Wikipedia page about a director, actor or movie is improved, it now also improves our means of navigating Archive.org’s wonderful collection. And when someone contributes new video or new HTML5-powered players to the Archive, they’re enriching the Encyclopedia too.

Archive.org films on a timeline by release date according to Wikipedia.

One thing to mention is that everything here comes from the Wikipedia data that is automatically extracted by DBpedia, and that currently the extractors are not working perfectly on all films, so it should get better in the future. I also added a lot of the image links myself, semi-automatically. For now, this navigation is much more fact-based than topic-based; however, we do have Wikipedia categories for each film, director, studio etc., and these have been mapped to other category systems (formal and informal), so there are plenty of other directions to explore.

What else can we do? How about flipping the tiled barchart to organize by the film’s distributor, and constraining the ‘release date’ facet to the 1940s:

That’s nice. But remember that with Linked Data, you’re always dealing with a subset of data. It’s hard to know (and it’s hard for the interface designers to show us) when you have all the relevant data in hand. In this case, we can see what this is telling us about the videos currently available within the demo. But does it tell us anything interesting about all the films in the Archive? All the films in the world? Maybe a little, but interpretation is difficult.
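For readers without Silverlight, that distributor-by-decade view could be roughly approximated as a query over the underlying links (with the same caveat about subsets applying). The property names below are guesses at the usual DBpedia ontology terms (dbo:Film, dbo:distributor, dbo:releaseDate), not the demo’s actual internals, so treat this purely as a sketch of the idea:

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
# Sketch only: roughly the screenshot's view, 1940s films counted per distributor.
SELECT ?distributor (COUNT(?film) AS ?films)
WHERE
{
 ?film a dbo:Film ;
       dbo:distributor ?distributor ;
       dbo:releaseDate ?date .
 FILTER (?date >= "1940-01-01"^^xsd:date && ?date < "1950-01-01"^^xsd:date)
}
GROUP BY ?distributor
ORDER BY DESC(?films)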

Next: zoom in to a specific item. The legendary Plan 9 from Outer Space (wikipedia / dbpedia).

Note the HTML-based info panel on the right hand side. In this case it’s automatically generated by Virtuoso from properties of the item. A TV-oriented version would be less generic.

Finally, we can explore the collection by constraining the timeline to show us items organized according to release date, for some facet. Here we show it picking out the career of one Edward J. Kay, at least as far as he shows up as composer of items in this collection:

Now turning back to Wikipedia to learn about ‘Edward J. Kay’, I find he has no entry (beyond passing mentions of his name) in the English Wikipedia, despite his work on The Ape Man, The Fatal Hour, and other films. The German Wikipedia does honour him with an entry, and I wonder whether this kind of Linked Data navigation will change the dynamics of the ‘deletionism’ debates at Wikipedia. Firstly, by showing that structured data managed elsewhere can enrich Wikipedia (and vice-versa), removing some of the pressure for a single wiki to cover everything. Secondly, by providing a tool to stand further back from the data and view things in a larger context; a context where, for example, Edward J. Kay’s achievements become clearer.

Much like Freebase Parallax, the Pivot viewer hints at a future in which we explore data by navigating from sets of things to other sets of things – like the set of films Edward J. Kay contributed to. Pivot doesn’t yet cover this, but it does very vividly present the potential for this kind of navigation, showing that navigation of films, TV shows and actors may be richer when it embraces more general mechanisms.

Lonclass and RDF

Lonclass is one of the BBC’s in-house classification systems – the “London classification”. I’ve had the privilege of investigating lonclass within the NoTube project. It’s not currently public, but much of what I say here is also applicable to the Universal Decimal Classification (UDC) system upon which it was based. UDC is also not fully public yet; I’ve made a case elsewhere that it should be, and I hope we’ll see that within my lifetime. UDC and Lonclass have a fascinating history and are rich cultural heritage artifacts in their own right, but I’m concerned here only with their role as the keys to many of our digital and real-world archives.

Why would we want to map Lonclass or UDC subject classification codes into RDF?

Lonclass codes can be thought of as compact but potentially complex sentences, built from the thousands of base ‘words’ in the Lonclass dictionary. By mapping the basic pieces, the words, to other data sources, we also enrich the compound sentences. We can’t map all of the sentences as there can be infinitely many of them – it would be an expensive and never-ending task.

For example, we might have a lonclass code for “Report on the environmental impact of the decline of tin mining in Sweden in the 20th century”. This would be a jumble of numbers and punctuation which I won’t trouble you with here. But if we parse out that structure, we can see the complex code as built from primitives such as ‘tin mining’ (itself e.g. ‘Tin’ and ‘Mining’), ‘Sweden’, etc. By linking those identifiable parts to shared Web data, we also learn more about the complex composite codes that use them. Wikipedia’s Sweden entry tells us in English, “Sweden has land borders with Norway to the west and Finland to the northeast, and water borders with Denmark, Germany, and Poland to the south, and Estonia, Latvia, Lithuania, and Russia to the east.”. Increasingly this additional information is available in machine-friendly form. Right now we can’t learn about Sweden’s borders from the bits of Wikipedia reflected into DBpedia’s Sweden entry, but the UN FAO’s geopolitical ontology does have this information, and more, in RDF form.

There is more, much more, to know about Sweden than can possibly be represented directly within Lonclass or UDC. Yet those facts may also be very useful for the retrieval of information tagged with Sweden-related Lonclass codes. If we map the Lonclass notion of ‘Sweden’ to identified concepts described elsewhere, then whenever we learn more about the latter, we also learn more about the former, and indirectly, about anything tagged with complex lonclass codes using that concept. Suddenly an archived TV documentary tagged as covering a ‘report on the environmental impact of the decline of tin mining in Sweden’ is accessible also to people or machines looking under Scandinavia + metal mining. Environmental matters, after all, often don’t respect geo-political borders; someone searching for coverage of environmental trends in a neighbouring country might well be happy to find this documentary. But should Lonclass or UDC maintain an index of which countries border which others? Surely not!
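To make that a little more concrete, here is a very rough sketch of the kind of query that becomes possible once the primitive codes are mapped. Everything with the lc: prefix is invented here purely for illustration – there is no such public namespace or decomposition property today – and the SKOS modelling is just one of several options:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX lc:   <http://example.org/lonclass/>
# Hypothetical: find archived items whose complex classification code contains
# a component concept that the shared Web data places within Scandinavia.
SELECT DISTINCT ?item
WHERE
{
 ?item dct:subject ?code .                  # item tagged with a complex code
 ?code lc:component ?concept .              # decomposition into primitive 'words'
 ?concept skos:exactMatch ?shared .         # mapping out to a shared identifier
 ?shared skos:broader+ ?region .            # 'Sweden is in Scandinavia' comes from
 ?region skos:prefLabel "Scandinavia"@en .  # the shared data, not from Lonclass
}

The point is not this particular query, but that the geographic knowledge it leans on would be maintained elsewhere, by people who care about geography.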

Lonclass and UDC codes have a rich hidden structure that is rarely exploited with modern tools. Lonclass, by virtue of its UDC heritage, does a lot of work itself towards representing complex conceptual inter-relationships. It embodies a conceptual map of our world, with mysterious codes (well known in the library world) for topics such as ‘622 – mining’, but also specifics e.g. ‘622.3 Mining of specific minerals, ores, rocks’, and combinations (‘622.3:553.9 Extraction of carbonaceous minerals, hydrocarbons’). By joining a code for ‘mining a specific mineral…’ to a code for ‘553.9 Deposits of carbonaceous rocks. Hydrocarbon deposits’ we get a compound term. So Lonclass/UDC “knows” about the relationship between “Tin Mining” and “Mining”, “metals” etc., and quite likely between “Sweden” and “Scandinavia”. But it can’t know everything! Sooner or later, we have to say, “Sorry, it’s not reasonable to expect the classification system to model the entire world; that’s a bigger problem”.

Even within the closed, self-supporting universe of UDC/Lonclass, this compositional semantics system is a very powerful tool for describing obscure topics in terms of well-known simpler concepts. But it’s too much for any single organisation (whether the BBC, the UDC Consortium, or anyone else) to maintain and extend such a system to cover all of modern life, from social, legal and business developments to new scientific innovations. The work needs to be shared, and RDF is currently our best bet for building such work-sharing, meaning-sharing, information-linking systems in the Web. The hierarchies in UDC and Lonclass don’t attempt to represent all of objective reality; they instead show paths through information.

If the metaphor of a ‘conceptual map’ holds up, then it’s clear that at some point it’s useful to have our maps made by different parties, with different specialised knowledge. The Web now contains a smaller but growing Web of machine readable descriptions. Over at MusicBrainz is a community who take care of describing the entities and relationships that cover much of music, or at least popular music. Others describe countries, species, genetics, languages, historical events, economics, and countless other topics. The data is sometimes messy or an imperfect fit for some task-in-hand, but it is actively growing, curated and connected.

I’m not arguing that Lonclass or UDC should be thrown out and replaced by some vague ‘linked cloud’. Rather, that there are some simple steps that can be taken towards making sure each of these linked datasets contributes to modernising our paths into the archives. We need to document and share open-source tools for an agreed data model for the arcane numeric codes of UDC and Lonclass. We need at least the raw pieces, the simplest codes, to be described for humans and machines in public, stable Web pages, and for their re-use, mapping, data mining and re-combination to be actively encouraged and celebrated. Currently, it is only possible to get your hands on this data if you work with the BBC (Lonclass), pay license fees (UDC) or exchange USB sticks with the right party in some shady backstreet. Whether the metaphor of choice is ‘key to the archives’ or ‘conceptual map of…’, this is a deeply unfortunate situation, both for the intrinsic public value of these datasets and for the collections they index. There’s a wealth of meaning hidden inside Lonclass and UDC and the collections they index, a lot that could be added by linking them to other RDF datasets, and, more importantly, there are huge communities out there who’ll do much of the work when the data is finally opened up…

I wrote too much. What I meant to say is simple. Classification systems with compositional semantics can be enriched when we map their basic terms using identifiers from other shared data sets. And those in the UDC/Lonclass tradition, while in some ways showing their age (weird numeric codes; huge, monolithic, hard-to-maintain databases), are also amongst the most interesting systems we have today for navigating information, especially when combined with Linked Data techniques and companion datasets.

WOT in RDFa?

(This post is written in RDFa…)

To the best of my knowledge, Ludovic Hirlimann‘s PGP fingerprint is 6EFBD26FC7A212B2E093 B9E868F358F6C139647C. You might also be interested in his photos on flickr, or his workplace, Mozilla Messaging. The GPG key details were checked over a Skype video call with me, Ludo and Kaare A. Larsen.
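Assuming the RDFa behind this post uses the FOAF and Web-of-Trust (wot:) vocabularies in the obvious way, and that some aggregator has crawled it alongside other such scruffy claims, checking a key could be as simple as an ASK query. This is only a sketch of the idea, not a description of any deployed system:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX wot:  <http://xmlns.com/wot/0.1/>
# Sketch: does any claim we have gathered associate this fingerprint with this person?
ASK
{
 ?person foaf:name "Ludovic Hirlimann" ;
         wot:hasKey ?key .
 # fingerprint as given above, with whitespace removed
 ?key wot:fingerprint "6EFBD26FC7A212B2E093B9E868F358F6C139647C" .
}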

This blog post isn’t signed, the URIs it references don’t use SSL, and the image could be switched by evildoers at any time! But the question’s worth asking: is this kind of scruffy key info useful, if there’s enough of it? If I wrote it somehow in Thunderbird’s editor instead, would it be easier to sign? Will 99.9% of humans ever know enough of what’s going on to understand what signing a bunch of complex markup means?

For earlier discussion of this kind of thing, see Joseph Reagle’s Key-free Trust piece (“Does Google Show How the Semantic Web Could Replace Public Key Infrastructure?”). It’s more PKI-free trust than PK-free.

My ’70s Schoolin’ (in RDFa)

I went to Hamsey Green school in the 1970s.

Looking in the UK Govt datasets, I see it is listed there with a homepage of ‘http://www.hamsey-green-infant.surrey.sch.uk’ (which doesn’t seem to work).

Some queries I’m trying via the SPARQL dataset (I’ll update this post if I make them work…)

First a general query, from which I found the URL manually, …

select distinct ?x ?y where { ?x <http://education.data.gov.uk/def/school/websiteAddress> ?y . }

Then I can go back into the data, and find other properties of the school:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
select DISTINCT ?x ?p ?z WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

Results in json

How to make this presentable? I can’t get output=html to work, but if I run this ‘construct’ query it creates a simple flat RDF document:

PREFIX sch-ont:  <http://education.data.gov.uk/def/school/>
CONSTRUCT {
 ?x ?p ?z .
}
WHERE
{
?x sch-ont:websiteAddress "http://www.hamsey-green-infant.surrey.sch.uk" .
?x ?p ?z .
}

So, where are we here? We see two RDF datasets about the same school. One is the simple claim that I attended the school at some time in the past (1976-1978, in fact). The other describes many of its current attributes, most of which may be different now from in the past. In my sample RDFa, I used the most popular Web link for the school to represent it; in the Edubase government data, it has a Web site address, but that seems not to be current.

Assuming we’d used the same URIs for the school’s homepage (or indeed for the school itself) then these bits of data could be joined.

Perhaps a more compelling example of data linking would be to show this schools data mixed in with something like MySociety’s excellent interactive travel maps? Still, the example above shows that basic “find people who went to my school” queries should be very possible…
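As a sketch of what such a query might look like (purely illustrative: it assumes people publish FOAF data, e.g. in RDFa, pointing at the school’s site via foaf:schoolHomepage, and that those claims have been aggregated somewhere queryable):

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
# Sketch: find others whose published FOAF data points at the same school.
SELECT DISTINCT ?person ?name
WHERE
{
 ?person foaf:schoolHomepage <http://www.hamsey-green-infant.surrey.sch.uk> .
 OPTIONAL { ?person foaf:name ?name . }
}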

Apple Remote events – a quick howto

Anyone who has recently bought an Apple computer probably has one or more Apple Remotes.

I have been learning how to access them. Conclusion: iremoted does 95% of what you probably need, and the discussion over on cocoadev.com tells you more than you probably wanted to know. My experiments are written up in a corner of the FOAF wiki, and I have a large collection of related links stored under the ‘buttons’ tag at delicious.com.

Summary: If you run iremoted, it calls OSX APIs and gets notified whenever button presses on the Apple Remote are detected.

There are 6 buttons: a menu button, a play/pause main button, and four others surrounding the main button. To the left and right there are ‘back’ and ‘fwd’; and above/below they are labelled ‘+’ and ‘-’.

Interestingly, from a protocol and UI design point of view, those last two buttons behave differently. When you press and release the other 4 buttons, the two events are not reported until after you release; but with ‘+’ and ‘-’, the initial press is reported immediately. Furthermore, only ‘+’ and ‘-’ do anything interesting if you press for longer than a second or so; with the others your action is silently dropped, even when you release the button.

I am looking at this since I am revisiting my old Jqbus FOAF experiments with XMPP and SPARQL for data access; but this time building in some notion of behaviour too, hence the new name: Buttons. The idea is to use XMPP as a universal communication environment between remote controls and remote-controllable things, initially in the world around TV. So I am exploring adaptors for existing media centres and media players (Boxee, XBMC, iTunes etc).

There are various levels of abstraction at which XMPP could be used as a communications pipe: it could stream clicks directly, by simply reflecting the stream of Infra Red events and translating them into IQ messages sent to JID-identified XMPP accounts. Or we could try to interpret them closer to their source, and send more meaningful events. For example, if the “+” is held for 3 seconds, do we wait until it is released before sending a “Maximize volume” message out across the ‘net? Other approaches (eg. as suggested on the xmpp list, and in some accessibility work) include exporting bits of UI to the remote. Or, in the case of really dumb remotes with no screen, at least some sort of invisible proxy/wrapper that tries to avoid spraying every user click out across XMPP.

Anyway, I thought this worth a writeup. Here’s a link to my copy of iremoted, iremoted_buttons.c. The main change (apart from trivial syntax edits for its output) is the use of kIOHIDOptionsTypeSeizeDevice. This is pretty important, as it stops other processes on your Mac receiving the same events; eg. Front Row taking over the laptop UI whenever ‘menu’ is pressed, or the local volume being changed by ‘+’ and ‘-’:

    ioReturnValue = (*hidDeviceInterface)->open(hidDeviceInterface, kIOHIDOptionsTypeSeizeDevice);

What kind of Semantic Web researcher are you?

It’s hard to keep secrets in today’s increasingly interconnected, networked world. Social network megasites, mobile phones, webcams and  inter-site syndication can broadcast and amplify the slightest fragment of information. Data linking and interpretation tools can put these fragments together, to paint a detailed picture of your life, both online and off.

This online richness creates offline risk. For example, if you’re going away on holiday, there are hundreds of ways in which potential thieves could learn that your home is vacant and therefore a target for crime: shared calendars, twittered comments from friends or family, flickr’d photographs. Any of these could reveal that your home and possessions sit unwatched, unguarded, presenting an easy target for criminals.

Q: What research challenge does this present to the Semantic Web community? How can we address the concern that Semantic and Social Web technology has more to offer Burglar Bill than to his victims?

A1: We need better technology for limiting the flow of data, for proving a right of legitimate access to information, and for calculating trust metrics for parties requesting data access, plus cross-site protocols for deleting leaked or retracted data as it flows between sites.

A2: We need to find ways to reconnect people with their neighbours and neighbourhoods, so that homes don’t sit unwatched when their occupants are away.

ps. Dear Bill, I have my iphone, laptop, piggy bank and camera with me…

Twitter Iran RT chaos

From Twitter in the last few minutes, a chaos of echo’d posts about army moves. Just a few excerpts here by copy/paste, mostly without the all-important timestamps. Without tools to trace reports to their source, to claims about their source from credible intermediaries, or to evidence, this isn’t directly useful. Even grassroots journalists need evidence. I wonder how Witness and Identi.ca fit into all this. I was thinking today about an “(person) X claims (person) Y knows about (topic) Z” notation, perhaps built from FOAF+SKOS. But looking at this “Army moving in…” claim, I think something couched in terms of positive claims (along the lines of the old OpenID showcase site Jyte) might be more appropriate.
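If such a notation existed, the consuming side might be a query like the one below. The claim: vocabulary is entirely made up here to illustrate the shape of the idea; only the foaf: and skos: namespaces are real:

PREFIX foaf:  <http://xmlns.com/foaf/0.1/>
PREFIX skos:  <http://www.w3.org/2004/02/skos/core#>
PREFIX claim: <http://example.org/claims#>
# Hypothetical: which sources does anyone claim to know about this topic, and who says so?
SELECT DISTINCT ?claimant ?source
WHERE
{
 ?assertion claim:claimant ?claimant ;   # X ...
            claim:source   ?source ;     # ... claims Y ...
            claim:topic    ?topic .      # ... knows about topic Z
 ?topic skos:prefLabel "Army movements in Tehran"@en .
}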

The following is from my copy/paste from Twitter a few minutes ago. It gives a flavour of the chaos. Note also that observations from very popular users (such as stephenfry) can echo around for hours, often chased by attempts at clarification from others.

(“RT” is Twitter notation for re-tweet, meaning that the following content is redistributed, often in abbreviated or summarised form)

plotbunnytiff: RT @suffolkinace: RT From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
r0ckH0pp3r: RT .@AliAkbar: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection
jax3417: RT @ktyladie: RT @GennX: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection #iran
ktladie: RT @GennX: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection #iran
MellissaTweets: RT @AliAkbar: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection
GennX: RT @MelissaTweets: RT @AliAkbar: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection

The above all arrived at around the same time, and cite two prior “sources”:

suffolkinnace: RT From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection   18 minutes ago from web

Who is this? Nobody knows of course, but there’s a twitter bio:

http://twitter.com/suffolkinace # Bio Some-to-be Royal Military Policeman in the British Army. Also a massive Xbox geek and part-time comedian

The other “source” seems to be http://twitter.com/AliAkbar
AliAkbar: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection
about 1 hour ago from web
url http://republicmodern.com

This leads us to   http://republicmodern.com/about where we’re told
“Ali Akbar is the founder and president of Republic Modern Media. A conservative blogger, he is a contributor to Right Wing News, Hip Hop Republican, and co-host of The American Resolve online radio show. He was also the editor-in-chief of Blogs for McCain.”

I should also mention that a convention emerged in the last day or two of replacing the names of specific local Twitter users in Tehran with a generic “from Iran”, to avoid getting anyone into trouble. This makes plenty of sense, but without anyone in the middle vouching for sources it makes it even harder to know which reports to take seriously.
More… back to Twitter search: what’s happened since I started this post?

http://twitter.com/#search?q=iranelection%20army

badmsm: RT @dpbkmb @judyrey: RT From Iran: CONFIRMED!! Army moving into Tehran against protesters! PLZ RT! URGENT! #IranElection #gr88
SimaoC: RT @parizot: CONFIRMÉ! L’armée se dirige vers Téhéran contre les manifestants! #IranElection #gr88
SpanishClash: RT @mytweetnickname: RT From Iran:ARMY movement NOT confirmed in last 2:15, plz RT this until confrmed #IranElection #gr88
artzoom: RT @matyasgabor @humberto2210: RT CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! #IranElection #iranrevolution
sjohnson301: RT @RonnyPohl From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection #iran9
dauni: RT @withoutfield: RT: @tspe: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
interdigi: RT @ivanpinozas From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
PersianJustice: Once again, stop RT army movements until source INSIDE Iran verifies! Paramilitary is the threat anyway. #iranelection #gr88
Klungtveit Anyone: What’s the origin of reports of “army moving in” on protesters? #iranelection
Eruethemar: RT @brianlltdhq: RT @lumpuckaroo: Only IRG moving, not national ARMY… this is confirmed for real #IranElection #gr88
SAbbasRaza: RT @bymelissa: RT @alexlobov: RT From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
timnilsson: RT @Iridium24: CONFIRMED!! Army moving into Tehran against protesters! PLEASE RT! URGENT! #IranElection
edmontalvo: RT @jasona: RT @Marble68: RT From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
stevelabate: RT army moving into Tehran against protesters. Please RT. #iranelection
ivanpinozas: From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection
bschh: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection (via @dlayphoto)
dlayphoto: RT From Iran: CONFIRMED!! Army moving into Tehran against protestors! PLEASE RT! URGENT! #IranElection

In short … chaos!

Is this just a social / information problem, or can different tooling and technology help filter out what on earth is happening?