A tale of two business models

Glancing back at 1998, thanks to the Wayback Machine.

W3C has Royalty Free licensing requirements and a public Process Document for good reason. I’ve been clicking back through the paper trails around the W3C P3P vs Intermind patent issue as a reminder. Here is the “Appendix C: W3C Licensing Plan Summary” from the old Intermind site:

We expect to license the patents to practice the P3P standard as it evolves over time as follows:

User Agents: For User Agent functionality, all commercial licensees will pay a royalty of 1% of revenues directly associated with the use of P3P or 0.1% of all revenues directly associated with the product employing the User Agent at the licensee’s option. We expect to measure revenues in a confidential and mutually acceptable manner with the licensee.

Service Providers: For Service Provider functionality, all commercial licensees will pay a royalty of 1% of revenues directly associated with the use of P3P. We expect to measure these revenues through the use of Web site logs, which will determine the percentage of P3P-related traffic and apply that percentage to the relevant Web site revenues (i.e., advertising-based revenues or transaction-based revenues). We expect to determine a method for monitoring or auditing such logs in a confidential and mutually acceptable manner with the licensee.

[...]

Intermind Corporation also expects to license the patents for CDF, ICE, and other XML-based agent technology on non-discriminatory terms. Members interested in further information on the relationship of Intermind’s patents to these technologies can contact Drummond Reed at drummond@intermind.com or 206-812-6000.

Nearby in the Web:

Cover Pages on Extensible Name Service (XNS):

Background: “In January 1999, the first of Intermind’s web agent patents began being issued (starting with U.S. patent No. 5,862,325). At the heart of this patent was a new naming and addressing service based on web agent technology. With the emergence of XML as a new global data interchange language — one perfectly suited to the requirements of a global ‘language’ for web agents — Intermind changed its name to OneName Corporation, built a new board and management team, and embarked on the development of this new global naming and addressing service. Because its use of XML as the foundation for all object representation and interchange led to the platform, yet had the same distributed architecture as DNS, it was christened eXtensible Name Service, or XNS. Recognizing the ultimate impact such a system may have on Internet infrastructure, and the crucial role that privacy, security, and trust must play, OneName also made the commitment to building it with open standards, open source software, and an open independent governance organization. Thus was born the XNS Public Trust Organization (XNSORG), the entity charged with setting the technical, operational, and legal standards for XNS.”

Over on XDIORG – Licenses and Agreements:

Summary of the XDI.ORG Intellectual Property Rights Agreement

NOTE: This summary is provided as a convenience to readers and is not intended in any way to modify or substitute for the full text of the agreement.

The purpose of the XDI.ORG IPR Agreement between XDI.ORG and OneName Corporation (dba Cordance) is to facilitate and promote the widespread adoption of XDI infrastructure by transferring the intellectual property rights underlying the XDI technology to a community-governed public trust organization.

The agreement grants XDI.ORG exclusive, worldwide, royalty-free license to a body of patents, trademarks, copyrights, and specifications developed by Cordance on the database linking technology underlying XDI. In turn, it requires that XDI.ORG manage these intellectual property rights in the public interest and make them freely available to the Internet community as royalty-free open standards. (It specifically adopts the definition provided by Bruce Perens which includes the ability for XDI.ORG to protect against “embrace and enhance” strategies.)

There is also a Global Service Provider aspect to this neutral, royalty-free standard. Another excerpt:

Summary of the XDI.ORG Global Service Provider Agreement

NOTE: This summary is provided as a convenience to readers and is not intended in any way to modify or substitute for the full text of the agreement.

Global Services are those XDI services offered by Global Service Providers (GSPs) based on the XRI Global Context Symbols (=, @, +, !) to facilitate interoperability of XDI data interchange among all users/members of the XDI community. XDI.ORG governs the provision of Global Services and has the authority to contract with GSPs to provide them to the XDI community.

For each Global Service, XDI.ORG may contract with a Primary GSP (similar to the operator of a primary nameserver in DNS) and any number of Secondary GSPs (similar to the operator of a secondary DNS nameserver). The Secondary GSPs mirror the Primary for load balancing and failover. Together, the Primary and Secondary GSPs operate the infrastructure for each Global Service according to the Global Services Specifications published and maintained by XDI.ORG.

The initial XDI.ORG GSP Agreement is between XDI.ORG and OneName Corporation (dba Cordance). The agreement specifies the rights and obligations of both XDI.ORG and Cordance with regard to developing and operating the first set of Global Services. For each of these services, the overall process is as follows:

  • If Cordance wishes to serve as the Primary GSP for a service, it must develop and contribute an initial Global Service Specification to XDI.ORG.
  • XDI.ORG will then hold a public review of the Global Service Specification and amend it as necessary.
  • Once XDI.ORG approves the Global Service Specification, Cordance must implement it in a commercially reasonable period. If Cordance is not able to implement or operate the service as required by the Global Service Specification, XDI.ORG may contract with another party to be the primary GSP.
  • XDI.ORG may contract with any number of Secondary GSPs.
  • If XDI.ORG desires to commence a new Global Service and Cordance does not elect to develop the Global Service Specification or provide the service, XDI.ORG is free to contract with another party.

The contract has a fifteen year term and covers a specified set of Global Services. Those services are divided into two classes: cost-based and fee-based. Cost-based services will be supplied by Cordance at cost plus 10%. Fee-based services will be supplied by Cordance at annual fees not to exceed maximums specified in the agreement. These fees are the wholesale cost to XDI.ORG; XDI.ORG will then add fees from any other GSPs supplying the service plus its own overhead fee to determine the wholesale price to registrars (registrars then set their own retail prices just as with DNS). Cordance’s wholesale fees are based on a sliding scale by volume and range from U.S. $5.40 down to $3.40 per year for global personal i-names and from U.S. $22.00 down to $13.50 per year for global organizational i-names.

The agreement also ensures all registrants of the original XNS Personal Name Service and Organizational Name Service have the right to convert their original XNS registration into a new XDI.ORG global i-name registration at no charge. This conversion period must last for at least 90 days after the commencement of the new global i-name service.

Over on inames.net, Become an i-broker:

What Is an I-Broker?

From Wikipedia:

“I-brokers are ‘bankers for data’ or ‘ISPs for identity services’–trusted third parties that help people and organizations share private data the same way banks help us exchange funds and ISPs help us exchange email and files.”

I-brokers are the core providers of XRI digital identity infrastructure. They not only provide i-name and i-number registration services, but they also provide i-services: a new layer of digital identity services that help people and businesses safely interact on the Internet. See the I-Service Directory for the first open-standard i-services that can be offered by any XDI.org-accredited i-broker.

How Does an I-Broker Become XDI.org-Accredited?

Cordance and NeuStar, together with XDI.org, have published a short guide to the process, “Becoming an I-Broker,” which includes all the information necessary to get started. It also includes contact information for the i-broker support teams at both companies.

Download Becoming an I-Broker

In addition, the following two zip files contain all the documents needed by an i-broker. The first one contains the i-broker application and agreements, which are appendices F and I of the XDI.org Global Services Specifications (GSS), located in their complete form at http://gss.xdi.org. The second one contains all the rest of the GSS documents referenced by the application and agreement.

From that downloadable PDF,

What is the application fee?
The standard application fee is USD $2500. However, between the GRS opening on June 20th and the start of Digital ID World 2006 on September 11, 2006, you may apply to Cordance to have the fee offset by development and marketing commitments. For more information, contact Cordance or NeuStar at the addresses below.

In the XDI.org GSS wiki,

6.1. Appendix B: Fee Schedules

Taking a look at that, we see:

Global Services Specification V1.0
Appendix B: Fee Schedule
Revised Version: 1 September, 2007

[...]

This and all contributions to XDI.org are open, public, royalty-free specifications licensed
under the XDI.org license available at http://www.xdi.org/docref/legal/xdi-org-license.html.

…where I find I can buy a personal i-name for $5/year, or a business i-name for $29.44, as an individual. Or as “become an i-broker” points out,

For volume pricing, contact Cordance or NeuStar at the addresses below.

X-UA-Combustible

I share the concerns expressed by Norm Walsh and Jeremy Keith regarding the X-UA-Compatible mechanism for IE8. It’s certainly an important issue. But my inner tinfoil hat wearer can’t help but wonder whether this is a kind of feint or indirection that serves to distract from the real issue. Instead of being concerned directly that the world’s most popular Web browser doesn’t handle modern Web standards like SVG, we are left debating how IE should communicate its inabilities to the rest of the Web. I fully appreciate that IE8 is a vast engineering project, but still … have our expectations of Microsoft sunk so low that nobody expects them to implement things like SVG? If Opera, Firefox and Safari can make progress on SVG, so can IE.

Waving not Drowning? groups as buddylist filters

I’ve lately started writing up and prototyping around a use-case for the “Group” construct in FOAF and for medium-sized, partially private data aggregators like SparqlPress. I think we can do something interesting to deal with the social pressure and information load people are experiencing on sites like Flickr and Twitter.

Often people have rather large lists of friends, contacts or buddies – publicly visible lists – which play a variety of roles in their online life. Sometimes these roles are in tension. Flickr, for example, allows its users to mark some of their contacts as “friend” or “family” (or both). Real life isn’t so simple, however. And the fact that this classification is shared (in Flickr’s case, with everyone) makes for a potentially awkward dynamic. Do you really want to tell everyone you know whether they are a full “friend” or a mere “contact”? Let alone keep this information up to date on every social site that hosts it. I’ve heard a good few folk complain about the stress of adding yet another entry to their list of “twitter follows” people. Their lists are often already huge through some sense of social obligation to reciprocate. I think it’s worth exploring some alternative filtering and grouping mechanisms.

On the one hand, we have people “bookmarking” people they find interesting, or want to stay in touch with, or “get to know better”. On the other, we have the bookmarked party sometimes reciprocating those actions because they feel it polite; a situation complicated by crude categories like “friend” versus “contact”. What makes this a particularly troublesome combination is when user-facing features, such as “updates from your buddies” or “photos from your friends/contacts” are built on top of these buddylists.

Take my own case on Flickr. I’m probably not typical, but I’m a use case I care about. I have made no real consistent use of the “friend” versus “contact” distinction; to even attempt to do so would be massively time consuming, and also a bit silly. There are all kinds of great people on my Flickr contacts list. Some I know really well, some I barely know but admire their photography, etc. It seems currently my Flickr account has 7 “family”, 79 “friends” and 604 “contacts”.

Now to be clear, I’m not grumbling about Flickr. Flickr is a work of genius, no nitpicking. What I ask for is something that goes beyond what Flickr alone can do. I’d like a better way of seeing updates from friends and contacts. This is just a specific case of a general thing (eg. RSS/Atom feed management), but let’s stay specific for now.

Currently, my Flickr email notification settings are:

  • When people comment on your photos: Yes
  • When your friends and family upload new photos: Yes (daily digest)
  • When your other contacts upload new photos: No

What this means is that I chose to see notifications of photos from those 86 people whom I have flagged as “friend” or “family”, and chose not to be notified of new photos from the other 604 contacts – even though that list contains many people I know, and would like to know better. The usability question here is: how can we offer more subtlety in this mechanism, without overwhelming users with detail? My guess is that we approach this by collecting, in a more personal environment, some information that users might not want to state publicly in an online profile. So a desktop (or weblog, e.g. SparqlPress) aggregation of information from addressbooks, email send/receive patterns, weblog commenting behaviours, machine readable resumes, … real evidence of real connections between real people.

And so I’ve been mining around for sources of “foaf:Group” data. As a work in progress testbed I have a list of OpenIDs of people whose comments I’ve approved in my blog. And another, of people who have worked on the FOAF wiki. I’ve been looking at the machine readable data from W3C’s Tech Reports digital archive too, as that provides information about a network of collaboration going back to the early days of the Web (and it’s available in RDF). I also want to integrate my sent-mail logs, to generate another group, and extract more still from my addressbook. On top of this of course I have access to the usual pile of FOAF and XFN, scraped or API-extracted information from social network sites and IM accounts. Rather than publish it all for the world to see, the idea here is to use it to generate simple personal-use user interfaces couched in terms of these groups. So, hopefully I shall be able to use those groups as a filter against the 600+ members of my Flickr buddylist, and make some kind of basic RSS-filtered view of their photos.
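To make the filtering idea concrete, here is a minimal sketch in Python. The group names, membership sets and contact list are all made up for illustration: each evidence source becomes a named set of OpenID URIs, and the buddylist is filtered down to contacts who show up in at least one group.

```python
# Sketch: evidence-based groups as simple sets of OpenID URIs.
# The group names and membership data here are illustrative only.
groups = {
    "blog-commenters": {"http://danbri.org/", "http://tommorris.org/"},
    "wiki-contributors": {"http://tommorris.org/", "http://www.kanzaki.com/"},
}

# A (tiny) stand-in for the 600+ member buddylist.
contacts = [
    "http://danbri.org/",
    "http://tommorris.org/",
    "http://example.org/stranger",
    "http://www.kanzaki.com/",
]

def group_filter(contacts, groups):
    """Return contacts appearing in at least one group, with the
    names of the groups that provide evidence for the connection."""
    matched = {}
    for contact in contacts:
        names = [name for name, members in groups.items() if contact in members]
        if names:
            matched[contact] = names
    return matched

for contact, names in group_filter(contacts, groups).items():
    print(contact, "->", names)
```

The same dictionary-of-sets could then drive the RSS-filtered view: only show photos from contacts who share at least one evidence-based group with me.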

If this succeeds, it may help people to have huge buddylists without feeling they’ve betrayed their “real friends” by doing so, or forcing them to put their friends into crudely labelled public buckets marked “real friend” and “mere contact”. The core FOAF design principle here is our longstanding bias towards evidence-based rather than proclaimed information. Don’t just ask people to tell you who their friends are, and how close the relationship is (especially in public, and most especially in a format that can be imported into spreadsheets for analysis). Instead… take the approach of “don’t say it, show it”. Create tools based on the evidence that friendship and collaboration leaves in the Web. The public Web, but also the bits that don’t show up in Google, and hopefully never will. If we can find a way to combine all that behind a simple UI, it might go a little way towards “waving not drowning” in information.

Much of this is stating the obvious, but I thought I’d leave the nerdly details of SPARQL and FOAF/RDF for another post.

Instead here’s a pointer to a Scoble article on Mr Facebook, where they’re grappling with similar issues but from a massive aggregation rather than decentralised perspective:

Facebook has a limitation on the number of friends a person can have, which is 4,999 friends. He says that was due to scaling/technical issues and they are working on getting rid of that limitation. Partly to make it possible to have celebrities on Facebook but also to make it possible to have more grouping kinds of features. He told me many members were getting thousands of friends and that those users are also asking for many more features to put their friends into different groups. He expects to see improvements in that area this year.

ps. anyone know if Facebook group membership data is accessible through their APIs? I know I can get group data from LiveJournal and from My Opera in FOAF, but would be cool to mix in Facebook too.

MySpace open data oopsie

Latest megasite privacy screwup, this time from MySpace, who appear to have allowed users to consider photos “private” when associated with a private profile, while (as far as I can make out) leaving the URLs visible or guessable. Whoopsadaisy. Predictably enough, someone has crawled and shared many of the images. Wired reports that the site knew about the flaw for months, and didn’t address it. I hope that’s not true.

Embedding queries in RDF – FOAF Group example

Is this crazy or useful? Am not sure yet.

This example uses FOAF vocabulary for groups and openid. So the basic structure here is that Agents (including persons) can have an :openid and can be a :member of a :Group.

From an openid-augmented WordPress, we get a list of all the openids my blog knows about. From an openid-augmented MediaWiki, we get a list of all the openids that contribute to the FOAF project wiki. I dumped each into a basic RDF file (not currently an automated process). But the point here is to explore enumerated groups using queries.

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://xmlns.com/foaf/0.1/">
<Group rdf:about='#both'>
<!-- enumerated membership -->
<member><Agent><openid rdf:resource='http://danbri.org/'/></Agent></member>
<member><Agent><openid rdf:resource='http://tommorris.org/'/></Agent></member>
<member><Agent><openid rdf:resource='http://kidehen.idehen.net/dataspace/person/kidehen'/></Agent></member>
<member><Agent><openid rdf:resource='http://www.wasab.dk/morten/'/></Agent></member>
<member><Agent><openid rdf:resource='http://kronkltd.net/'/></Agent></member>
<member><Agent><openid rdf:resource='http://www.kanzaki.com/'/></Agent></member>

<!-- rule-based membership -->

<constructor><![CDATA[
PREFIX : <http://xmlns.com/foaf/0.1/>
CONSTRUCT {
  <http://danbri.org/yasns/danbri/both.rdf#thegroup> a :Group; :member [ a :Agent; :openid ?id ]
}
WHERE {
  GRAPH <http://wiki.foaf-project.org/_users.rdf> { [ a :Group; :member [ a :Agent; :openid ?id ] ] }
  GRAPH <http://danbri.org/yasns/danbri/_group.rdf> { [ a :Group; :member [ a :Agent; :openid ?id ] ] }
}
]]></constructor>
</Group>
</rdf:RDF>

This RDF description does it both ways. It enumerates (for simple clients) a list of members of a group whose members are those individuals that are both commentators on my blog, and contributors to the FOAF wiki. At least, to the extent they’re detectable via common use of OpenID URIs. But the RDF group description also embeds a SPARQL query, the kind which generates RDF rather than an SQL-like resultset. The RDF essentially regenerates the enumerated list, assuming the query is run against an RDF dataset with the data graphs appropriately populated.
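Stripped of the SPARQL machinery, what the embedded CONSTRUCT computes is just a set intersection over the openids found in the two named graphs, followed by minting fresh triples for the result. A rough Python rendering of that (the membership data here is made up, not the real graph contents):

```python
# What the embedded CONSTRUCT query computes, minus the SPARQL machinery:
# reduce each named graph to the set of openid URIs it mentions, intersect,
# and emit Turtle-style membership statements. Membership data is made up.
wiki_users = {
    "http://danbri.org/",
    "http://tommorris.org/",
    "http://www.kanzaki.com/",
}
blog_commenters = {
    "http://danbri.org/",
    "http://tommorris.org/",
    "http://kronkltd.net/",
}

GROUP = "<http://danbri.org/yasns/danbri/both.rdf#thegroup>"

def construct_both(graph_a, graph_b):
    """Emit Turtle-style statements for agents present in both graphs."""
    triples = [GROUP + " a foaf:Group ."]
    for n, openid in enumerate(sorted(graph_a & graph_b)):
        agent = "_:agent%d" % n  # a fresh blank node per member
        triples.append("%s foaf:member %s ." % (GROUP, agent))
        triples.append("%s foaf:openid <%s> ." % (agent, openid))
    return triples

for t in construct_both(wiki_users, blog_commenters):
    print(t)
```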

Now I sorta like this, and I sorta don’t. It may be incredibly powerful, or it may be a bit too clever for its own good.

Certainly there’s scope overlap with the W3C RIF rules work, and with the capabilities of OWL. FOAF has long contained an experimental method for using OWL to do something similar, but it hasn’t found traction. The motivation for trying SPARQL here is that it has built-in machinery for talking about the provenance of data; so I could write a group description this way that says “members are anyone listed as a colleague in http://myworkplace.example.com/stafflist.rdf”. Or I could mix in arbitrary descriptive vocabularies; family tree stuff, XFN, language abilities (speaks-reads-writes) etc.
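The stafflist case might be embedded like this (a sketch only: the graph URI is the example one above, and the assumption that the stafflist file uses foaf:Group membership is mine):

```sparql
PREFIX : <http://xmlns.com/foaf/0.1/>
CONSTRUCT {
  <#colleagues> a :Group; :member [ a :Agent; :openid ?id ]
}
WHERE {
  # Only openids asserted in the workplace's own staff list count,
  # which is where SPARQL's GRAPH keyword earns its keep.
  GRAPH <http://myworkplace.example.com/stafflist.rdf> {
    [ a :Group; :member [ a :Agent; :openid ?id ] ]
  }
}
```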

Where I think this could fall down is in the complexity of the workflow. The queries need executing against some SPARQL installation with a configured dataset, and the query lists URIs of data graphs. But I doubt database admins will want to randomly load any/every RDF file mentioned in these shared queries. Perhaps something like SparqlPress, attached to one’s weblog, with social filters to load only files mentioned in queries from friends? Also, authoring these kinds of queries isn’t something non-geek users are going to do often, and the sorts of queries that will work will depend, of course, on the data actually available. Sure, I could write a query based on matching the openids of former colleagues, but the group will be empty unless the data listing people as former colleagues is actually out there in the Web, and written in the terms anticipated by the query.

On the other hand, this mechanism does appeal, and could go way beyond FOAF group definitions. We could see a model where people post data in the Web but also post queries, eg. revisiting the old work Libby and I explored around RSS query. On the other other hand, who wants to make their Web queries public? All that said, the same goes for the data being queried. And since this technique embeds queries inside ordinary RDF data, whatever way we deal with the data visibility issue for RDF/FOAF should also work for the query stuff. Perhaps. Can’t blame me for trying…

I realise this isn’t the clearest of explanations. Let’s try again:

RDF is normally for publishing collections of simple claims about the world. This is an experiment in embedding data-generating-queries amongst these claims, where the query is configured to output more RDF claims (aka statements, triples etc), but only when executed against some appropriate body of RDF data. Since the query is written in SPARQL, it allows the data-generation rules to mention interesting things, such as properties of the source of the data being queried.

This particular experiment is couched in terms of FOAF’s “Group” construct, but the technique is entirely general. The example above defines a group of agents called the “both” group, by saying that an Agent is in that group if its OpenID URI is listed in each of the two RDF documents specified, ie. it is both a commentator on my blog, and a contributor to the FOAF Wiki. Other examples could be “(fe)male employees” or “family members sharing a blood type” or, in fact, any descriptive pattern that can match against the data to hand and be expressed in SPARQL.

Open IDiomatic? Dada engine hosting as OpenID client app

All of us are dumber than some of us.

Various folk are concerned that OpenID has more provider apps than consumer apps, so here is my little website idea for an OpenID-facilitated collaborative thingy. I’ve loved the Dada Engine for years. The Dada Engine is the clever-clogs backend for grammar-driven nonsense generators such as the wonderful Postmodernism Generator.

Since there are many more people capable of writing funny prose to configure this machine, than there are who can be bothered to do the webhosting sysadmin, I reckon it’d be worthwhile to make a general-purpose hosted version whereby users could create new content through a Web interface. And since everyone would forget their passwords, this seems a cute little project to get my hands dirty with OpenID. To date, all I’ve done server-side with OpenID is install patches to MediaWiki and WordPress. That went pretty smoothly. My new hacking expedition, however, hit a snag already: the PHP libraries on my EC2 sandbox server didn’t like authenticating against LiveJournal. I’m new to PHP so when something stops working I panic and whine in IRC instead of getting to the bottom of the problem.

Back to the app idea: the Dada Engine basically eats little config files that define sentence structures, and spews nonsense. Here’s a NSFW SubGenius-style rant:

FlipFlip:scripts danbri$ pb < brag.pb
Fuck ’em if they can’t take a joke! I’m *immune*! *Yip, yip, YEEEEEEE!*
*Backbone Blowout*! I do it for *fun*! YAH-HOOOO! Now give me some more of…

And here’s another:

FlipFlip:scripts danbri$ pb < brag.pb
I’m a fission reactor, I fart plutonium, power plants are fueled by the breath
of my brow; when they plug *me* in, the lights go out in Hell County! Now give
me some more of…

A fragment of the grammar:

"I say, `" slogan "'. By God, `" slogan "', I say! " |
"I am " entity ", I am " entity "! " |
"I'll drive a mile so as not to walk a foot; I am " entity "! " |
"Yes, I'm " entity "! " |
"I drank *" being "* under " number " tables, I am too " adjective " to die, I'm insured for acts o' God *and* Satan! " |
"I was shanghaied by " entities " and " entities " from " place ", and got away with their hubcaps! " |
"I *cannot* be tracked on radar! " |
"I wear nothing uniform, I wear *no* " emphatic " uniform! " | ....

To be crystal clear, this hosted app is total vapourware currently. And I’m not sure if it would be a huge piece of work to make sure the Dada Engine didn’t introduce security holes. Certainly the version that uses the C preprocessor scares me, but the “pb” utility shown above doesn’t use that. Opinions on the safety of this welcomed. But never mind the details, smell the vision! It could always use Novelwriting (a Python “knock-off of the Dada Engine”) instead.

The simplest version of a hosting service for this kind of user generated meta-content would basically be textarea editing and a per-user collection of files that are flagged public or private. But given the nature of the system, it would be great if the text generator could be run against grammars that multiple people could contribute to. Scope for infinite folly…
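For flavour, here is a toy grammar-expansion loop in Python, in the spirit of the Dada Engine. The `{rule}` placeholder format and the sample grammar are invented for this sketch; the real pb grammar syntax is different.

```python
import random

# Toy nonsense generator in the spirit of the Dada Engine.
# The {rule} placeholder syntax is invented for this sketch;
# real pb grammars look quite different.
grammar = {
    "brag": [
        "I am {entity}, I am {entity}! ",
        "I say, `{slogan}', I say! ",
    ],
    "entity": ["a fission reactor", "*immune*", "too {adjective} to die"],
    "adjective": ["combustible", "postmodern"],
    "slogan": ["backbone blowout", "give me some more"],
}

def expand(symbol, grammar, rng):
    """Pick a random production for `symbol` and recursively expand
    any {rule} placeholders it contains."""
    text = rng.choice(grammar[symbol])
    while "{" in text:
        start = text.index("{")
        end = text.index("}", start)
        rule = text[start + 1:end]
        text = text[:start] + expand(rule, grammar, rng) + text[end + 1:]
    return text

print(expand("brag", grammar, random.Random()))
```

A hosted version would store each user’s grammar as a per-user file and run something like this (or shell out to pb itself, carefully sandboxed) on request.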