Linked Literature, Linked TV – Everything Looks like a Graph

[Image: cloud]

Ben Fry in ‘Visualizing Data’:

Graphs can be a powerful way to represent relationships between data, but they are also a very abstract concept, which means that they run the danger of meaning something only to the creator of the graph. Often, simply showing the structure of the data says very little about what it actually means, even though it’s a perfectly accurate means of representing the data. Everything looks like a graph, but almost nothing should ever be drawn as one.

There is a tendency when using graphs to become smitten with one’s own data. Even though a graph of a few hundred nodes quickly becomes unreadable, it is often satisfying for the creator because the resulting figure is elegant and complex and may be subjectively beautiful, and the notion that the creator’s data is “complex” fits just fine with the creator’s own interpretation of it. Graphs have a tendency of making a data set look sophisticated and important, without having solved the problem of enlightening the viewer.

[Image: markets]

Ben Fry is entirely correct.

I suggest two excuses for this indulgence: if the visuals are meaningful only to the creator of the graph, then let’s make everyone a graph curator. And if the things the data attempts to describe — for example, 14 million books and the world they in turn describe — are complex and beautiful and under-appreciated in their complexity and interconnectedness, … then perhaps it is ok to indulge ourselves. When do graphs become maps?

I report here on some experiments that stem from two collaborations around Linked Data. All the visuals in the post are views of bibliographic data, based on similarity measures derived from book / subject-keyword associations, with visualization and a little additional analysis using Gephi. Click through to Flickr to see larger versions of any image. You can’t always see the inter-node links, but the presentation is based on graph layout tools.

Firstly, in my ongoing work in the NoTube project, we have been working with TV-related data, ranging from ‘social Web’ activity streams and user profiles to TV archive catalogues and classification systems like Lonclass. Secondly, over the summer I have been working with the Library Innovation Lab at Harvard, looking at ways of opening up bibliographic catalogues to the Web as Linked Data, and at ways of cross-linking Web materials (e.g. video materials) to a Webbified notion of ‘bookshelf’.

In NoTube we have been making use of the Apache Mahout toolkit, which provided us with software for collaborative filtering recommendations, clustering and automatic classification. We’ve barely scratched the surface of what it can do, but here I show some initial results from applying Mahout to a 100,000-record subset of Harvard’s 14-million-entry catalogue. Mahout is built to scale, and the experiments here use datasets that are tiny from Mahout’s perspective.

[Image: gothic_idol]

In NoTube, we used Mahout to compute similarity measures between each pair of items in a catalogue of BBC TV programmes for which we had privileged access to subjective viewer ratings. This was a sparse matrix of around 20,000 viewers and 12,500 broadcast items, with around 1.2 million ratings linking viewer to item. From these, after a few rather-too-casual tests using Mahout’s evaluation measure system, we picked its most promising similarity measure for our data (LogLikelihoodSimilarity or Tanimoto), and then, for the most similar items, simply dumped out a huge data file containing pairs of item numbers plus a weight.
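(Not the Mahout code itself, but a minimal Python sketch of the shape of that computation: Tanimoto similarity between items, based purely on which viewers rated each item, dumped as item-pair-plus-weight lines. The viewer and item IDs below are made up.)

  # Sketch only: Tanimoto similarity between items, from (viewer, item) pairs.
  from collections import defaultdict
  from itertools import combinations

  ratings = [                              # toy data; ours was ~1.2 million ratings
      ("alice", "item_62"), ("alice", "item_82127"),
      ("bob", "item_62"), ("bob", "item_82127"), ("bob", "item_99"),
      ("carol", "item_99"),
  ]

  viewers_by_item = defaultdict(set)
  for viewer, item in ratings:
      viewers_by_item[item].add(viewer)

  def tanimoto(a, b):
      """Size of intersection over size of union, for two items' viewer sets."""
      shared = len(a & b)
      return shared / float(len(a | b)) if shared else 0.0

  for item_a, item_b in combinations(sorted(viewers_by_item), 2):
      weight = tanimoto(viewers_by_item[item_a], viewers_by_item[item_b])
      if weight > 0:
          print(item_a, item_b, weight)    # pairs of item IDs plus a weight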

There are many, many smarter things we could’ve tried, but in the spirit of ‘minimum viable product’, we didn’t try them yet. These include making use of additional metadata published by the BBC in RDF, so we could help out Mahout by letting it know that when Alice loves item_62 and Bob loves item_82127, we also know (via RDF) that both items belong to the same TV series and brand. Why use fancy machine learning to rediscover things we already know, and that have been shared in the Web as data? We could make smarter use of metadata here. Secondly, we could have used data-derived or publisher-supplied metadata to explore whether different Mahout techniques work better for different segments of the content (factual vs. fiction) or even, since we also have some demographic data, for different groups of users.

[Image: markets]

Anyway, Mahout gave us item-to-item similarity measures for TV. Libby has already written about how we used these in ‘second screen’ (or ‘N-th screen’, aka N-Screen) prototypes showing the impact that new Web standards might make on tired and outdated notions of “TV remote control”.

What if your remote control could personalise a view of some content collection? What if it could show you similar things based on your viewing behavior, and that of others? What if you could explore the ever-growing space of TV content using simple drag-and-drop metaphors, sending items to your TV or to your friends with simple tablet-based interfaces?

[Image: medieval_society]

So that’s what we’ve been up to in NoTube. There are prototypes using BBC content (sadly not viewable by everyone due to rights restrictions), but also some experiments with TV materials from the Internet Archive, and some explorations that look at TED’s video collection as an example of Web-based content that (via ted.com and YouTube) is more generally viewable. Since every item in the BBC’s Archive is catalogued using a library-based classification system (Lonclass, itself based on UDC), the topic of cross-referencing books and TV has cropped up a few times.

[Image: new_colonialism]

Meanwhile, in (the digital Public Library of) America, … the Harvard Library Innovation Lab team have a huge and fantastic dataset describing 14 million bibliographic records. I’m not sure exactly how many are ‘books’; libraries hold all kinds of objects these days. With the Harvard folk I’ve been trying to help figure out how we could cross-reference their records with other “Webby” sources, such as online video materials. Again we use TED as an example, because it is high quality but has very different metadata from the library records. So we’ve been looking at various tricks and techniques that could help us associate book records with those videos. For example, we can find tags for TED’s videos on the TED site, but also on Delicious and on YouTube. However, taggers and librarians tend to describe things quite differently. Tags like “todo”, “inspirational”, “design”, “development” or “science” don’t help us pinpoint the exact library shelf where a viewer might go to read more on the topic. Conversely, they don’t help the library sites understand where within their online catalogues they could embed useful and engaging “related link” pointers off to TED.com or YouTube.

So we turned to other sources. Matching TED speaker names against Wikipedia allows us to find more information about many TED speakers. For example, the Tim Berners-Lee entry, which in its Linked Data form helpfully tells us that this TED speaker is in the categories ‘Japan_Prize_laureates’, ‘English_inventors’, ‘1955_births’ and ‘Internet_pioneers’. All good to know, but it’s hard to tell which categories tell us most about our speaker or video. At least now that we’re in the Linked Data space, we can navigate around to Freebase, VIAF and a growing Web of data sources. It should be possible at least to associate TimBL’s TED talks with library records for his book (so we annotate one bibliographic entry, from 14 million! …can’t we map areas, not items?).
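(As a sketch of that hop into the Linked Data space: DBpedia publishes a JSON rendering of each resource, and pulling out the category memberships for a speaker looks roughly like this. The resource name is assumed to have come from the name-matching step above.)

  # Sketch: list the Wikipedia categories DBpedia records for a resource.
  # dcterms:subject is the predicate DBpedia uses for category membership.
  import json
  import urllib.request

  def dbpedia_categories(resource_name):
      url = "http://dbpedia.org/data/%s.json" % resource_name
      with urllib.request.urlopen(url) as resp:
          data = json.load(resp)
      about = data.get("http://dbpedia.org/resource/" + resource_name, {})
      cats = about.get("http://purl.org/dc/terms/subject", [])
      return [c["value"].rsplit("Category:", 1)[-1] for c in cats]

  print(dbpedia_categories("Tim_Berners-Lee"))
  # e.g. ['Japan_Prize_laureates', 'English_inventors', '1955_births', ...]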

[Image: tv]

Can we do better? What if we also associated Tim’s two TED talk videos with other things in the library that had the same subject classifications or keywords as his book? What if we could build links between the two collections based not only on published authorship, but on topical information (tags, full-text analysis of TED talk transcripts)? Can we plan for a world where libraries have access not only to MARC records, but also to the full text of each of millions of books?

[Image: screenshot]

I’ve been exploring some of these ideas with David Weinberger, Paul Deschner and Matt Phillips at Harvard, and in NoTube with Libby Miller, Vicky Buser and others.

[Image: edu]

Yesterday I took the time to do a visual sanity check of the bibliographic data as processed into a ‘similarity space’ in some Mahout experiments. This is a messy first pass at everything, but I figured it is better to blog something and look for collaborations and feedback than to chase perfection. For me, the big story is in linking TV materials to the gigantic back-story of context, discussion and debate curated by the world’s libraries. If we can imagine a view of our TV content catalogues, and our libraries, as visual maps, with items clustered by similarity, then NoTube has shown that we can build these into the smartphones and tablets that are increasingly being used as TV remote controls.

[Image: screenshot]

And if the device you’re using to pause/play/stop or rewind your TV also has access to these vast archives as they open up as Linked Data (as well as GPS location data and your Facebook password), all kinds of possibilities arise for linked, annotated and fact-checked TV, as well as for showing a path for libraries to continue to serve as maps of the entertainment, intellectual and scientific terrain around us.

[Image: screenshot]

A brief technical description. Everything you see here was made with Gephi, Mahout and experimental data from the Library Innovation Lab at Harvard, plus a few scripts to glue it all together.

Mahout was given 100,000 extracts from the Harvard collection: just main title and subtitle, a local ID, and a list of topical phrases (mostly drawn from Library of Congress Subject Headings, with some local extensions). I don’t do anything clever with these or their sub-structure or their library-documented inter-relationships. They are treated as atomic codes, and flattened into long pseudo-words such as ‘occupational_diseases_prevention_control’, ‘french_literature_16th_century_history_and_criticism’, ‘motion_pictures_political_aspects’, ‘songs_high_voice_with_lute’, ‘dance_music_czechoslovakia’ or ‘communism_and_culture_soviet_union’. All of human life is there.
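(The flattening itself is unglamorous; something like this sketch, though not my exact script:)

  # Sketch: flatten a subject heading and its subdivisions into one pseudo-word.
  import re

  def flatten_heading(heading):
      # "French literature -- 16th century -- History and criticism"
      #   -> "french_literature_16th_century_history_and_criticism"
      return "_".join(re.findall(r"[a-z0-9]+", heading.lower()))

  print(flatten_heading("Occupational diseases -- Prevention & control"))
  # occupational_diseases_prevention_control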

David Weinberger has been calling this gigantic scope our problem of the ‘Taxonomy of Everything’, and the label fits. By mushing phrases into fake words, I get to re-use some Mahout tools and avoid writing code. The result is a matrix of 100,000 bibliographic entities by 27,684 unique topical codes. Initially I made the simple test of feeding this as input to Mahout’s K-Means clustering implementation. Manually inspecting the most popular topical codes for each cluster (both with k=12, putting all books into 12 clusters, and with k=1000 for more fine-grained groupings), I was impressed by the initial results.
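(The ‘manual inspection’ amounts to little more than counting; Mahout’s clusterdump utility can report top terms per cluster directly, but the idea is roughly this, with the two input mappings shown here as hypothetical in-memory structures:)

  # Sketch: most popular topical pseudo-words per cluster.
  # book_topics: {book_id: [pseudo_word, ...]}; assignments: {book_id: cluster_id}
  from collections import Counter, defaultdict

  def top_codes_per_cluster(book_topics, assignments, n=10):
      counts = defaultdict(Counter)
      for book_id, cluster_id in assignments.items():
          counts[cluster_id].update(book_topics.get(book_id, []))
      return {cluster: c.most_common(n) for cluster, c in counts.items()}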

[Image: screenshot]

I only have these in crude text-file form. See hv/_k1000.txt and hv/_twelve.txt (plus the dictionary; see the big file _harv_dict.txt).

For example, in the 1000-cluster version, we get: ‘medical_policy_united_states’, ‘health_care_reform_united_states’, ‘health_policy_united_states’, ‘medical_care_united_states’, ‘delivery_of_health_care_united_states’, ‘medical_economics_united_states’, ‘politics_united_states’, ‘health_services_accessibility_united_states’, ‘insurance_health_united_states’, ‘economics_medical_united_states’.

Or another cluster: ‘brain_physiology’, ‘biological_rhythms’, ‘oscillations’.

How about: ‘museums_collection_management’, ‘museums_history’, ‘archives’, ‘museums_acquisitions’, ‘collectors_and_collecting_history’?

Another, conceptually nearby (though that proximity isn’t visible through this simple clustering approach): ‘art_thefts’, ‘theft_from_museums’, ‘archaeological_thefts’, ‘art_museums’, ‘cultural_property_protection_law_and_legislation’, …

OK, I am cherry-picking. There is some nonsense in there too, but surprisingly little. And probably some associations that might cause offense. But it shows that the tooling is capable (by looking at book/topic associations) of picking out similarities that are significant. Maybe all of this is also available in LCSH SKOS form already, but I doubt it. (A side-goal here is to publish these clusters for re-use elsewhere…).

[Image: screenshot]

So, what if we take this, and instead compute (a bit like we did in NoTube from ratings data) similarity measures between books?

[Image: screenshot]

I tried that, without using much of Mahout’s sophistication. I used its ‘rowsimilarityjob’ facility to generate similarity measures for each book, then threw out all but the top 5 (later the top 3) similarities for each book. From this point, I moved things over into the Gephi toolkit (“photoshop for graphs”), as I wanted to see how things looked.
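(The glue between the two is mundane. A sketch of the kind of script involved — the input format here, one ‘book_a book_b weight’ triple per line, is an assumption about how the similarity dump was written out:)

  # Sketch: keep the top-k similarities per book and write a Gephi-importable
  # edge list (CSV with Source, Target, Weight columns).
  import csv
  from collections import defaultdict

  def similarities_to_gephi(in_path, out_path, k=5):
      neighbours = defaultdict(list)           # book -> [(weight, other_book), ...]
      with open(in_path) as f:
          for line in f:
              a, b, w = line.split()
              neighbours[a].append((float(w), b))
      with open(out_path, "w", newline="") as out:
          writer = csv.writer(out)
          writer.writerow(["Source", "Target", "Weight"])
          for a, pairs in neighbours.items():
              for w, b in sorted(pairs, reverse=True)[:k]:
                  writer.writerow([a, b, w])

  # similarities_to_gephi("book_similarities.txt", "edges.csv", k=5)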

[Image: screenshot]

First results shown here. Nodes are books, links are strong similarity measures. Node labels are titles, or sometimes title + subtitle. Some views (the black-background ones) use Gephi’s “modularity detection” analysis of the link graph. For others (white background) I imported the 1000 clusters from the earlier Mahout experiments. I tried various of Gephi’s metrics and mapped them to node size. This might fairly be called ‘playing around’ at this stage, but there is at least a pipeline from raw data (eventually Linked Data, I hope) through Mahout to Gephi and some visual maps of literature.

[Image: 1k_overview]

What does all this show?

That if we can find a way to open up bibliographic datasets, there are solid open-source tools out there that can give us new ways of exploring the items described in the data. That those tools (e.g. Mahout, Gephi) provide many different ways of computing similarity, clustering, and presenting. There is no single ‘right answer’ for how to present literature or TV archive content as a visual map, clustering “like with like”, or arranging neighbourhoods. And there is no restriction that we must work dataset-by-dataset, either. Why not use what we know from movie/TV recommendations to arrange the similarity space for books? Or vice versa?

I must emphasise (to return to Ben Fry’s opening remark) that this is a proof-of-concept. It shows some potential, but it is neither a user interface, nor particularly informative. Gephi as a tool for making such visualizations is powerful, but it too is not a viable interface for navigating TV content. However these tools do give us a glimpse of what is hidden in giant and dull-sounding databases, and some hints for how patterns extracted from these collections could help guide us through literature, TV or more.

Next steps? There are many things that could be tried; more than I could attempt. I’d like to get some variant of these 2D maps onto iPad/Android tablets, loaded with TV content. I’d like to continue exploring the bridges between content (e.g. TED) and library materials, on tablets and PCs. I’d like to look at Mahout’s “collocated terms” extraction tools in more detail. These allow us to pull out recurring phrases (e.g. “Zero Sum”, “climate change”, “golden rule”, “high school”, “black holes” were found in TED transcripts). I’ve also tried extracting bi-gram phrases from book titles using the same utility. Such tools offer some prospect of bulk-creating links not just between single items in collections, but between neighbourhood regions in maps such as those shown here. The cross-links will never be perfect, but then what’s a little serendipity between friends?
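(That bi-gram extraction amounts to something like the sketch below. Mahout’s collocation tooling is smarter — it scores candidate phrases by log-likelihood ratio rather than just counting adjacent word pairs — but the idea is the same.)

  # Sketch: naive bi-gram counting over book titles.
  import re
  from collections import Counter

  def count_bigrams(titles, min_count=2):
      counts = Counter()
      for title in titles:
          words = re.findall(r"[a-z]+", title.lower())
          counts.update(zip(words, words[1:]))
      return [(" ".join(bg), n) for bg, n in counts.most_common() if n >= min_count]

  print(count_bigrams(["Climate change and you",
                       "Climate change policy",
                       "High school days"]))
  # [('climate change', 2)]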

As full-text access to book data looms, and TV archives are finding their way online, we’ll need to find ways of combining user interface, bibliographic and data science skills if we’re really going to make the most of the treasures that are being shared in the Web. Since I have only fragments of each, I’m always drawn back to thinking of this in terms of collaborative work.

A few years ago, Netflix had the vision and cash to pretty much buy the attention of the entire machine learning community for a measly million dollars. Researchers love to have substantive datasets to work with, and the (now retracted) Netflix dataset is still widely sought after. Without a budget to match Netflix’s, could we still somehow offer prizes to help get such attention directed towards analysis and exploitation of linked TV and library data? We could offer free access to the world’s literature via a global network of libraries? Except everyone gets that for free already. Maybe we don’t need prizes.

Nearby in the Web: NoTube N-Screen, Flickr slideshow

A Penny for your thoughts: New Year wishes from mechanical turkers

I wanted to learn more about Amazon’s Mechanical Turk service (wikipedia), and perhaps also figure out how I feel about it.

Named after a historical faked chess-playing machine, it uses the Web to allow people around the world to work on short, low-pay ‘micro-tasks’. It’s a disturbing capitalist fantasy come true, echoing Frederick Taylor’s ‘Scientific Management’ of the 1880s. Workers can be assigned tasks at the touch of a button (or through software automation), and rewarded or punished at the touch of other buttons.

Mechanical Turk has become popular for outsourcing large-scale data cleanup, image annotation, and other tasks where human judgement outperforms brainless software. It’s also popular with spammers. For more background see ‘try a week as a turker’ or this Salon article from 2006. Turk is not alone; other sites either build on it or offer similar facilities. See for example crowdflower, txteagle, or Panos Ipeirotis’ list of micro-crowdsourcing services.

Crowdflower describe themselves as offering “multiple labor channels…  [using] crowdsourcing to harness a round-the-clock workforce that spans more than 70 countries, multiple languages, and can access up to half-a-million workers to dispatch diverse tasks and provide near-real time answers.”

Txteagle focuses on the explosion of mobile access in the developing world, claiming that “txteagle’s GroundSwell mobile engagement platform provides clients with the ability to communicate and incentivize over 2.1 billion people“.

Something is clearly happening here. As someone who works with data and the Web, it’s hard to ignore the potential. As someone who doesn’t like treating others as interchangeable, replaceable and disposable software components, it’s hard to feel very comfortable. Classic liberal guilt territory. So I made an account, both as a worker and as a ‘requester’ (an awkward term, but it’s clear why ‘employer’ is not being used).

I tried a few tasks. I wrote 25-30 words for a blog on some medieval prophecies. I wrote 50 words as fast as I could on “things I would change in my apartment”. I tagged some images with keywords. I failed to pass a ‘qualification’ test sorting scanned photos into scratched, blurred and OK. I ‘like’d some hopeless Web site on Facebook for 2 cents. In all I made 18 US cents. As a way of passing the time, I can see the appeal. This can compete with daytime TV or Farmville or playing Solitaire or Sudoku. I quite enjoyed the mini creative-writing tasks. As a source of income, it’s quite another story, and the awful word ‘incentivize’ doesn’t do justice to the human reality.

Then I tried the other role: requester. After a little more liberal-guilt navelgazing (“would it be inappropriate to offer to buy people’s immortal souls? etc.”), I decided to offer a penny (well, 2 cents) for up to 100 people’s new year wish thoughts, or whatever of those they felt like sharing for the price.

I copy the results below, stripped of what little detail (e.g. time in seconds taken) each result came with. I don’t present this as any deep insight or sociological analysis or arty meditation. It’s just what 100 people somewhere else in the Web responded with, when asked what they wish for 2011. If you want arty, check out the sheep market. If you want more from ‘turkers’ in their own voice, do visit the ‘Turker Nation’ forum. Also, Turkopticon is essential reading, “watching out for the crowd in crowdsourcing because nobody else seems to be.”

The exact text used was “Make a wish for 2011. Anything you like, just describe it briefly. Answers will be made public.”, and the question was asked with a simple Web form, “Make a wish for 2011, … any thought you care to share”.


Here’s what they said:

When you’re lonely, I wish you Love! When you’re down, I wish you Joy! When you’re troubled, I wish you Peace! When things seem empty, I wish you Hope! Have a Happy New Year!

wish u a happy new year…………

happy new year 2011. may this year bring joy and peace in your life

My wish for 2011 is i want to mary my Girlfriend this year.

I wish I will get pregnant in 2011!

i wish juhi becomes close to me

wish you a wonderful happy new year

wish you happy new year

for new year 2011 I wish Love of God must fill each human heart
Food inflation must be wiped off quickly
corruption must be rooted out smartly
Terrorism must be curtailed quickly
All People must get love, care, clothes, shelter & food
Love of God must fill each human heart…

Happy life.All desires to be fulfilled.

wish to be best entrepreneur of the year 2011

dont work hard if it is possible to do the same smarter way..
Be happy!

New year is the time to unfold new horizons,realise new dreams,rejoice in simple pleasures and gear up for new challenges.wishing a fulfilling 2011.

Remember that the best relationship is one where your love for each other is greater than your need for each other. Happy New Year

To get a newer car, and have less car problems. and have more income

I wish that my son’s health problems will be answered

Be it Success & Prosperity, Be it Fun and Frolic…

A new year is waiting for you. Go and enjoy the New Year on New Thought,”Rebirth of My Life”.

Let us wish for a world as one family, then we can overcome all the problems man made and otherwise.

My wish is to gain/learn more knowledge than in 2010

My new years wish for 2011 is to be happier and healthier.

I wish that I would be cured of heartache.

I am really very happy to wish you all very happy new year…..I wish you all the things to be success in your life and career…….. Just try to quit any bad habit within you. Just forgot all the bad incidents happen within your friends and try to enjoy this new year with pleasant……

Wish you a happy and prosperous new year.

I wish for a job.

I would hope that people will end the wars in the world.

Discontinue smoking and restrict intake of alcohol

I wish that my retail store would get a bigger client base so I can expand.

I Wish a wish for You Dear.Sending you Big bunch of Wishes from the Heart close to where.Wish you a Very Very Happy New Year

I wish for 2011 to be filled with more love and happiness than 2010.

Everything has the solution Even IMPOSSIBLE Makes I aM POSSIBLE. Happy Journey for New Year.

May each day of the coming year be vibrant and new bringing along many reasons for celebrations & rejoices. Happy New year

I have just moved and want to make some great new friends! Would love to meet a special senior (man!!) to share some wonderful times with!!!

My wish is that i wanna to live with my “Pretty girl” forever and also wanna to meet her as well,please god please, finish my this wish, no more aspire from me only once.

that people treat each other more nicely and with greater civility, in both their private and public lives.

that we would get our financial house in order

Year’s end is neither an end nor a beginning but a going on, with all the wisdom that experience can instill in us. Wish u very happy new year and take care

Wish you a very happy And prosperous new year 2011

Tom Cruise
Angelina Jolie
Aishwarya Rai
Arnold
Jennifer Lopez
Amitabh Bachhan
& me..
All the Stars wish u a Very Happy New Year.

Oh my Dear, Forget ur Fear,
Let all ur Dreams be Clear,
Never put Tear, Please Hear,
I want to tell one thing in ur Ear
Wishing u a very Happy “NEW YEAR”!

May The Year 2011 Bring for You…. Happiness,Success and filled with Peace,Hope n Togetherness of your Family n Friends….

i want to be happy

Good health for my family and friends

I wish my husband’s children would stop being so mean and violent and act like normal children. I want to love my husband just as much as before we got full custody.

to get wonderful loving girl for me.. :))

Keep some good try. Wish u happy new year

happy new year to all

My wish is to find a good job.

i wish i get a big outsourcing contract this year that i can re-set up my business and get back on track.

I wish that I be firm in whatever I do. That I can do justice to all my endeavors. That I give my 100%, my wholehearted efforts to each and every minutest work I do.

My wish for 2011, is a little patience and understanding for everyone, empathy always helps.

To be able to afford a new house

“NEW YEAR 2011″
+NEW AIM + NEW ACHIEVEMENT + NEW DREAM +NEW IDEA + NEW THINKING +NEW AMBITION =NEW LIFE+SUCCESS HAPPY NEW YEAR!

let this year be terrorist free world

Wish the world walk forward in time with all its innocence and beauty where prevails only love, and hatred no longer found in the dictionary.

no

Wish u a very happy New Year Friends and make this year as a pleasant days…

I wish the economy would get better, so people can afford to pay their bills and live more comfortably again.

i wish, god makes life beautiful and very simple to all of us. and happy new year to world.

Be always at war with your vices, at peace with your neighbors, and let each new year find you a better man and I wish a very very prosperous new year.

i wish i would buy a house and car for my mom

I wish to have a new car.
This new year will be full of expectation in the field of investment.We concerned about US dollar. Hope this year will be a good for US dollar.

this year is very enjoyment life

Cheers to a New Year and another chance for us to get it right

to get married

Wishing all a meaningful,purposeful,healthier and prosperous New Year 2011.

WISH YOU A HAPPY NEW YEAR 2011 MAY BRING ALL HAPPINESS TO YOU

RAKKIMUTHU

In 2011 I wish for my family to get in a better spot financially and world peace.

Wish that economic conditions improve to the extent that the whole spectrum of society can benefit and improve themselves.

I want my divorce to be final and for my children to be happy.

This 2011 year is very good year for All with Health & Wealth.

I wish that things for my family would get better. We have had a terrible year and I am wishing that we can look forward to a better and brighter 2011.

This year bring peace and prosperity to all. Everyone attain the greatest goal of life. May god gives us meaning of life to all.

This new year will bring happy in everyone’s life and peace among countries.

I hope for bipartisanship and for people to realize blowing up other people isn’t the best way to get their point across. It just makes everyone else angry.

A better economy would be nice too

I wish that in 2011 the government will work together as a TEAM for the betterment of all. Peace in the world.

i wish you all happy new year. may god bless all……

no i wish for you

I wish that my family will move into our own house and we can be successful in getting good jobs for our future.

I wish my girl comes back to me

Wish You Happy New Year for All, especially to the workers and requester’s of Mturk.

Greetings!!!

Wishing you and your family a very happy and prosperous NEW YEAR – 2011

May this New Year bring many opportunities your way, to explore every joy of life and may your resolutions for the days ahead stay firm, turning all your dreams into reality and all your efforts into great achievements.

Wish u a Happy and Prosperous New Year 2011….

Wishing u lots of happiness..Success..and Love

and Good Health…….

Wish you a very very happy new year

WISHING YOU ALL A VERY HAPPY & PROSPEROUS NEW YEAR…….

I wish in this 2011 is to be happy,have a good health and also my family.

I pray that the coming year should bring peace, happiness and good health.

I wish for my family to continue to be healthy, for my cars to continue running, and for no 10th Anniversary attacks this upcoming September.

be a good and help full for my family .

Happy and Prosperous New Year

New day new morning new hope new efforts new success and new feeling,a new year a new begening, but old friends are never forgotten, i think all who touched my life and made life meaningful with their support, i pray god to give u a verry “HAPPY AND SUCCESSFUL NEW YEAR”.

Be a good person,as good as no one

wish this new year brings cheers and happiness to one and all.

For the year 2011 I simply wish for the ability to support my family properly and have a healthier year.

I wish I have luck with getting a better job.

Greater awareness of climate change, and a recovering US economy.

this new year 2011 brings you all prosperous and happiness in your life…….

happy newyear wishes to all the beautiful hearts in the world in the world.god bless you all.

wishing every happy new year to all my pals and relatives and to all my lovely countrymen

NoTube scenario: Facebook groups and TV recommendation

Short version: If the Web knows I like a TV show, why can’t my TV be more useful?

So I have just joined a Facebook group, “Spaced Appreciation Society”:

Basic Info
Type: Common Interest – Pets & Animals
Description: If you’ve ever watched (and therefore loved) the TV series Spaced, then come and pay homage to the great Simon Pegg and Jess Stevenson. “You f’ing plum”
Contact Details
Website: http://www.spaced-out.org.uk/
Location: Meteor Street

That URL is (as with many of these groups) from a site whose primary topic is the thing the group’s about. In this case, about a TV show. It’s even in the public page for that group:

<tr><td class="label">Website:</td>
<td class="data"><div class="datawrap"><a href="http://www.spaced-out.org.uk/" onmousedown="return wait_for_load(this, event, function() { UntrustedLink.bootstrap($(this), &quot;&quot;, event) });" target="_blank" rel="nofollow">http://www.spaced-out.org.uk/</a></div></td></tr>

If I search Google (Yahoo BOSS might be wiser, they have APIs) with:

link:http://www.spaced-out.org.uk/ site:wikipedia.org

It finds me:

http://en.wikipedia.org/wiki/Spaced

Although “link:http://www.spaced-out.org.uk/ site:dbpedia.org” doesn’t find anything, some URL rewriting gets me to:

http://dbpedia.org/page/Spaced

“Spaced is a British television situation comedy written by and starring Simon Pegg and Jessica Stevenson, and directed by Edgar Wright. It is noted for its rapid-fire editing, frequent dropping of pop-culture references, and occasional displays of surrealism. Two series of seven episodes were broadcast in 1999 and 2001 on Channel 4.”

dbpedia-owl:author
* dbpedia:Jessica_Hynes
* dbpedia:Simon_Pegg

dbpedia-owl:completionDate
* 2001-04-13 (xsd:date)

dbpedia-owl:director
* dbpedia:Edgar_Wright

dbpedia-owl:episodenumber
* 14

dbpedia-owl:executiveproducer
* dbpedia:Humphrey_Barclay

dbpedia-owl:genre
* dbpedia:Situation_comedy

dbpedia-owl:language
* dbpedia:English_language

dbpedia-owl:network
* dbpedia:Channel_4

dbpedia-owl:producer
* dbpedia:Gareth_Edwards
* dbpedia:Nira_Park

dbpedia-owl:releaseDate
* 1999-09-24 (xsd:date)

dbpedia-owl:runtime
* 24

dbpedia-owl:starring
* dbpedia:Jessica_Hynes
* dbpedia:Simon_Pegg

There are also links from here to Cyc (but an incorrect match) and to Freebase (to http://www.freebase.com/view/en/spaced).

Unfortunately, the Wikipedia “external links” section, with the URL for http://www.spaced-out.org.uk/ (marked “official, fan-operated site”), is not part of the DBpedia RDF export. I guess because it is not in an infobox. Extracting these external-link URLs, at least for the TV, actor and movie related sections of Wikipedia, might be worthwhile. And DBpedia would be useful for identifying the relevant subset to re-extract.

This idea of using such URLs as keys into Wikipedia/dbpedia data would also work with Identi.ca groups and others. In fact the matching might be easier in Identi.ca – I’m not sure how the Facebook APIs expose this stuff.
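(A sketch of that key-rewriting step, assuming the fan-site URL has already been traced back to a Wikipedia page via the link: search above; the property URI follows the dbpedia-owl:starring listing shown earlier:)

  # Sketch: rewrite a Wikipedia page URL to its DBpedia resource, then read
  # dbpedia-owl:starring from DBpedia's JSON rendering of that resource.
  import json
  import urllib.request

  def wikipedia_to_dbpedia(wikipedia_url):
      name = wikipedia_url.rstrip("/").rsplit("/wiki/", 1)[-1]
      return "http://dbpedia.org/resource/" + name

  def starring(dbpedia_uri):
      name = dbpedia_uri.rsplit("/", 1)[-1]
      url = "http://dbpedia.org/data/%s.json" % name
      with urllib.request.urlopen(url) as resp:
          data = json.load(resp)
      props = data.get(dbpedia_uri, {})
      return [v["value"] for v in props.get("http://dbpedia.org/ontology/starring", [])]

  uri = wikipedia_to_dbpedia("http://en.wikipedia.org/wiki/Spaced")
  print(uri, starring(uri))
  # e.g. http://dbpedia.org/resource/Spaced ['http://dbpedia.org/resource/Jessica_Hynes', ...]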

Anyway, if a show is about to be broadcast that includes, e.g., an interview with dbpedia:Jessica_Hynes or dbpedia:Simon_Pegg, I’d like to hear about it.

So… is there any way I can use BBC’s /programmes to get upcoming information about who will be on the radio or telly, in a way that could be matched against dbpedia URIs?

Edit: I should’ve mentioned that Facebook in particular also has a more explicit “is a fan of” construct, with Products, Celebs, TV shows and Stores as types of thing you can be a fan of. Furthermore, these show up on your public page, e.g. here’s mine. I’m certainly interested in using that data, but also in a model that uses general groups, since it is applicable to other sites that allow a group to indicate itself with a topical URL.

On the internet, no-one knows.

“Because most of the targeted employees were male between the ages of 20 and 40 we decided that it would be best to become a very attractive 28 year old female. We found a fitting photograph by searching google images and used that photograph for our fake Facebook profile. We also populated the profile with information about our experiences at work by using combined stories that we collected from real employee facebook profiles.” [...]

Facebook problem statement

People want full ownership and control of their information so they can turn off access to it at any time. At the same time, people also want to be able to bring the information others have shared with them—like email addresses, phone numbers, photos and so on—to other services and grant those services access to those people’s information. These two positions are at odds with each other. There is no system today that enables me to share my email address with you and then simultaneously lets me control who you share it with and also lets you control what services you share it with.
“On Facebook, People Own and Control Their Information”, Mark Zuckerberg, Facebook blog.

Google Data APIs (and partial YouTube) supporting OAuth

Building on last month’s announcement of OAuth for the Google Contacts API, this from Wei on the oauth list:

Just want to let you know that we officially support OAuth for all Google Data APIs.

See blog post:

You’ll now be able to use standard OAuth libraries to write code that authenticates users to any of the Google Data APIs, such as Google Calendar Data API, Blogger Data API, Picasa Web Albums Data API, or Google Contacts Data API. This should reduce the amount of duplicate code that you need to write, and make it easier for you to write applications and tools that work with a variety of services from multiple providers. [...]

There’s also a footnote, “* OAuth also currently works for YouTube accounts that are linked to a Google Account when using the YouTube Data API.”

See the documentation for more details.

On the YouTube front, I have no idea what percentage of their accounts are linked to Google; lots, I guess. Some interesting parts of the YouTube API: retrieve user profiles, access/edit contacts, find videos uploaded by a particular user or favourited by them, plus of course per-video metadata (categories, keywords, tags, etc.). There’s a lot you could do with this; in particular, it should be possible to find out more about a user by looking at the metadata for the videos they favourite.

Evidence-based profiles are often better than those that are merely asserted, without being grounded in real activity. The list of people I actively exchange mail or IM with is more interesting to me than the list of people I’ve added on Facebook or Orkut; the same applies with profiles versus tag-harvesting. This is why the combination of last.fm’s knowledge of my music listening behaviour with the BBC’s categorisation of MusicBrainz artist IDs is more interesting than asking me to type my ‘favourite band’ into a box. Finding out which bands I’ve friended on MySpace would also be a nice piece of evidence to throw into that mix (and possible, since MusicBrainz also notes MySpace URIs).

So what do these profiles look like? The YouTube ‘retrieve a profile’ API documentation has an example. It’s Atom-encoded, and beyond the video stuff mentioned above has fields like:

  <yt:age>33</yt:age>
  <yt:username>andyland74</yt:username>
  <yt:books>Catch-22</yt:books>
  <yt:gender>m</yt:gender>
  <yt:company>Google</yt:company>
  <yt:hobbies>Testing YouTube APIs</yt:hobbies>
  <yt:location>US</yt:location>
  <yt:movies>Aqua Teen Hungerforce</yt:movies>
  <yt:music>Elliott Smith</yt:music>
  <yt:occupation>Technical Writer</yt:occupation>
  <yt:school>University of North Carolina</yt:school>
  <media:thumbnail url='http://i.ytimg.com/vi/YFbSxcdOL-w/default.jpg'/>
  <yt:statistics viewCount='9' videoWatchCount='21' subscriberCount='1'
    lastWebAccess='2008-02-25T16:03:38.000-08:00'/>
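(Pulling those yt: fields out of the Atom is straightforward; a quick sketch, where the yt namespace URI is the one I believe the GData feeds use:)

  # Sketch: extract the yt:* profile fields from a GData Atom entry.
  import xml.etree.ElementTree as ET

  YT = "{http://gdata.youtube.com/schemas/2007}"

  def profile_fields(atom_xml):
      entry = ET.fromstring(atom_xml)
      return {child.tag[len(YT):]: child.text
              for child in entry
              if child.tag.startswith(YT) and child.text}

  example = """<entry xmlns='http://www.w3.org/2005/Atom'
      xmlns:yt='http://gdata.youtube.com/schemas/2007'>
    <yt:username>andyland74</yt:username>
    <yt:hobbies>Testing YouTube APIs</yt:hobbies>
    <yt:music>Elliott Smith</yt:music>
  </entry>"""
  print(profile_fields(example))
  # {'username': 'andyland74', 'hobbies': 'Testing YouTube APIs', 'music': 'Elliott Smith'}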

Not a million miles away from the OpenSocial schema I was looking at yesterday, btw.

I haven’t yet found where it says what I can and can’t do with this information…

OpenSocial schema extraction: via Javascript to RDF/OWL

OpenSocial’s API reference describes a number of classes (‘Person’, ‘Name’, ‘Email’, ‘Phone’, ‘Url’, ‘Organization’, ‘Address’, ‘Message’, ‘Activity’, ‘MediaItem’, …), each of which has various properties whose values are either strings, references to instances of other classes, or enumerations. I’d like to make them usable beyond the confines of OpenSocial, so I’m making an RDF/OWL version. OpenSocial’s schema is an attempt to provide an overarching model for much of present-day mainstream ‘social networking’ functionality, including dating, jobs etc. Such a broad effort is inevitably somewhat open-ended, and so may benefit from being linked to data from other complementary sources.

With a bit of help from the shindig-dev list, #opensocial IRC, and Kevin Brown and Kevin Marks, I’ve tracked down the source files used to represent OpenSocial’s data schemas: they’re in the opensocial-resources SVN repository on code.google.com. There is also a downstream copy in the Apache Shindig SVN repo (I’m not very clear on how versioning and evolution are managed between the two). They’re Javascript files, structured so that documentation can be generated via javadoc. The Shindig-PHP schema diagram I posted recently is a representation of this schema.

So – my RDF version. At the moment it is merely a list of classes and their properties (expressed via rdfs:domain), written using RDFa/HTML. I don’t yet define rdfs:range for any of these, nor handle the enumerated values (opensocial.Enum.Smoker, opensocial.Enum.Drinker, opensocial.Enum.Gender, opensocial.Enum.LookingFor, opensocial.Enum.Presence) that are defined in enum.js.

The code is all in the FOAF SVN, and accessible via “svn co http://svn.foaf-project.org/foaftown/opensocial/vocab/”. I’ve also taken the liberty of including a copy of the OpenSocial *.js files, and Mozilla’s Rhino Javascript interpreter js.jar in there too, for self-containedness.

The code in schemarama.js simply generates an RDFa/XHTML page describing the schema. This can be checked using the W3C validator, or converted to RDF/XML with the pyRDFa service at W3C.

I’ve tested the output using the OwlSight/pellet service from Clark & Parsia, and with Protege 4. It’s basic but seems OK and a foundation to build from. Here’s a screenshot of the output loaded into Protege (which btw finds 10 classes and 99 properties).

An example view from protege, showing the class browser in one panel, and a few properties of Person in another.

OK so why might this be interesting?

  • Using OpenSocial-derived vocabulary and OpenSocial-exported data in other contexts
    • databases (queryable via SPARQL)
    • mixed with FOAF
    • mixed with Microformats
    • published directly in RDFa/HTML
  • Mapping OpenSocial terms with other contact and social network schemas

This suggests some goals for continued exploration:

It should be possible to use “OpenSocial markup” in an ordinary homepage or blog (HTML or XHTML), drawing on any of the descriptive concepts they define, through using RDFa’s markup notation. As Mark Birbeck pointed out recently, RDFa is an empty vessel – it does not define any descriptive vocabulary. Instead, the RDF toolset offers an environment in which vocabulary from multiple independent sources can be mixed and merged quite freely. The hard work of the OpenSocial team in analysing social network schemas and finding commonalities, or of the Microformats scene in defining simple building-block vocabularies … these can hopefully be combined within a single environment.

Anti-me

I can’t explain how weird this is to read, and only really because my name is obscure enough that I don’t run into other Dan Brickleys very often:

Chimes in Dan Brickley, of TLC’s A Makeover Story: “Although Hillary’s made some major fashion faux pas in the past (anyone remember a certain wedding dress?), we wouldn’t see her in a pair of pegged acid washed jeans, peddling a bicycle built for two with Bill…

“Yet, as far as fashion’s concerned, Obama’s putting function before fashion,” Brickley adds. “Unfortunately, we’ll all have to wait and see if Barack’s helmet can stand up against Cindy McCain’s hair helmet this November — Now that’s not a headpiece to mess with!”

That said, I’m “friends” with two other DBs on Facebook. Maybe it’s time to start a group and agree some basics for not bringing our name into disrepute…