Interesting stuff from around the web 2009-04-22

Amazing render job by Alessandro Prodan

The open web

Does OpenID need to be hard?
Chris considers “the big fat stinking elephant in the room: OpenID usability and the paradox of choice”. As usual, it’s a good read.

I wonder whether restricting the OpenID providers displayed, based on visited links, would help? i.e. hiding those that haven’t been visited. It clearly wouldn’t be perfect – Google isn’t my OpenID provider but I visit it a lot – but it should cut down some of the clutter.

Security flaw leads Twitter, others to pull OAuth support
The hole makes it possible for a hacker to use social-engineering tactics to trick users into exposing their data. The OAuth protocol itself requires tweaking to remove the vulnerability. A source close to OAuth’s development team said that there have been no known violations, that the team has been aware of the issue for a few days now, and that it has been coordinating responses with vendors. A solution should be announced soon.

Twitter and social networks

Relationship Symmetry in Social Networks: Why Facebook will go Fully Asymmetric []
Asymmetric model better mimics how real attention works…and how it has always worked. Any person using Twitter can have a larger number of followers than followees, effectively giving them more attention than they give. This attention inequality is the foundation of the Twitter service… The IA of Facebook does not allow this. Facebook has designed a service that forces you to keep track of your friends, whether you want to or not. Facebook is modeling personal relationships, not relationships based on attention. That’s the crucial difference between Facebook and Twitter at the moment.

When Twitter Gets Weird… [Dave Gorman]
“The difference between following someone and replying to them is the difference between stopping to chat with someone in the street or giving them a badge declaring that you know them. One is actual interaction. The other is just something you can show your friends.” Blimey – Dave Gorman clearly has a much better grasp of life, the web and being human than the two people who attacked him for not following them on Twitter. As Dave points out, he hopes that Twitter doesn’t descend into the MySpace “thanks for the add” nonsense. Me too.

Google profiles included in search results [googleblog]
A new “Profile results” section will appear at the bottom of a Google search page when it finds a strong match for a name-based search – but only in the US. To help things along, remember to use rel=me elsewhere (here’s how).
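The rel=me technique mentioned above is nothing more than an HTML attribute asserting that two URLs identify the same person, which is what makes name-based matching possible. As a rough sketch (the profile markup and URLs below are invented for illustration), such links can be extracted with Python’s stdlib parser:

```python
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    """Collect href values of <a>/<link> elements whose rel attribute contains 'me'."""
    def __init__(self):
        super().__init__()
        self.me_links = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            a = dict(attrs)
            rels = (a.get("rel") or "").split()
            if "me" in rels and a.get("href"):
                self.me_links.append(a["href"])

# Hypothetical profile page asserting that two URLs belong to the same person
html = '''<a rel="me" href="https://twitter.com/example">my Twitter</a>
<link rel="me" href="https://example.org/">'''
p = RelMeParser()
p.feed(html)
```

A consumer such as Google’s crawler can then treat the pages as facets of one identity when the assertion is reciprocated.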

Shortlisted for a BAFTA, launch of clickable tracklistings and the start of BBC Earth

Look, look clickable tracklistings, w00t!
Few will ever know the pain it took to get this useful little (cross-domain) feature live.

We’ve been shortlisted for an Interactive Innovation BAFTA
The /programmes aka Automated Programme Support project. So proud.

Out of the Wild
Our first tentative steps towards improving the BBC’s online natural history offering. Out of The Wild seeks to bring you stories from BBC crews on location. Eventually this should all form part of an integrated programme offer.


Biological Taxonomy Vocabulary
An RDF vocabulary for the taxonomy of all forms of life.

On url shorteners
Joshua Schachter considers the issues associated with URL shortening. Similar argument to the one I put forward in “The URL shortening antipattern” but with some useful recommendations: “One important conclusion is that services providing transit (or at least require a shortening service) should at least log all redirects, in case the shortening services disappear. If the data is as important as everyone seems to think, they should own it. And websites that generate very long URLs, such as map sites, could provide their own shortening services. Or, better yet, take steps to keep the URLs from growing monstrous in the first place.”
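Schachter’s recommendation that services log all redirects, so the data survives if the shortener disappears, can be sketched as a toy in-memory service (the class names and the base-36 code scheme below are my own illustration, not his design):

```python
import itertools
import string

class Shortener:
    """Toy in-memory URL shortener that logs every redirect it serves."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._urls = {}          # short code -> long URL
        self.redirect_log = []   # (code, long_url) pairs: the data outlives the service

    def shorten(self, long_url):
        """Assign the next base-36 code to a long URL."""
        n = next(self._counter)
        alphabet = string.ascii_lowercase + string.digits
        code = ""
        while n:
            n, r = divmod(n, len(alphabet))
            code = alphabet[r] + code
        self._urls[code] = long_url
        return code

    def redirect(self, code):
        """Resolve a code, recording the redirect before returning the target."""
        long_url = self._urls[code]
        self.redirect_log.append((code, long_url))
        return long_url
```

The log is the point: if the mapping table is ever lost or the service folds, whoever holds the log can reconstruct where the links pointed.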

Identity, relationships and why OAuth and OpenID matter

Twitter hasn’t had a good start to 2009: it was hacked via a phishing scam, and then there were concerns that passwords were up for sale. Not a good thing. But there may be a silver lining to Twitter’s cloud, because the episode has also reopened the password anti-pattern debate and the case for OAuth as a solution to the problem. Indeed, it now looks like Twitter will be implementing OAuth as a result. W00t!

Day 68 :: touch by Meredith Farmer (Flickr). Some rights reserved.

However, while it is great news that Twitter will be implementing OAuth soon, it hasn’t yet, and there are plenty of other services that don’t use it. It’s therefore worth pausing for a moment to consider how we got here and what the issues are: OAuth support will be great, but right now things are a bit rubbish.

We shouldn’t assume that either Twitter or the developers responsible for the third-party apps (those requesting your credentials) are trying to do anything malicious — far from it — as Chris Messina explains:

The difference between run-of-the-mill phishing and password anti-pattern cases is intent. Most third parties implement the anti-pattern out of necessity, in order to provide an enhanced service. The vast majority don’t do it to be malicious or because they intend to abuse their customers — quite the contrary! However, by accepting and storing customer credentials, these third parties are putting themselves in a potentially untenable situation: servers get hacked, data leaks and sometimes companies — along with their assets — are sold off with untold consequences for the integrity — or safety — of the original customer data.

The folks at Twitter are very aware of the risks associated with their users giving out usernames and passwords. But they also have concerns about the fix:

The downside is that OAuth suffers from many of the frustrating user experience issues and phishing scenarios that OpenID does. The workflow of opening an application, being bounced to your browser, having to login to, approving the application, and then bouncing back is going to be lost on many novice users, or used as a means to phish them. Hopefully in time users will be educated, particularly as OAuth becomes the standard way to do API authentication.

Another downside is that OAuth is a hassle for developers. BasicAuth couldn’t be simpler (heck, it’s got “basic” in the name). OAuth requires a new set of tools. Those tools are currently semi-mature, but again, with time I’m confident they’ll improve. In the meantime, OAuth will greatly increase the barrier to entry for the Twitter API, something I’m not thrilled about.

Alex also points out that OAuth isn’t a magic bullet.

It also doesn’t change the fact that someone could sell OAuth tokens, although OAuth makes it easier to revoke credentials for a single application or site, rather than changing your password, which revokes credentials to all applications.
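The revocation point can be sketched concretely: with per-application tokens, revoking one misbehaving app leaves every other app working, whereas changing your password invalidates them all at once. (A toy in-memory store with made-up token formats, purely to show the granularity difference.)

```python
class TokenStore:
    """Per-application access tokens: revoke one app without touching the rest."""
    def __init__(self):
        self.tokens = {}  # token string -> application name

    def issue(self, app):
        """Grant an application its own token (format here is illustrative only)."""
        token = f"tok-{app}-{len(self.tokens)}"
        self.tokens[token] = app
        return token

    def revoke_app(self, app):
        """Revoke every token held by one application, leaving others valid."""
        for t in [t for t, a in self.tokens.items() if a == app]:
            del self.tokens[t]

    def valid(self, token):
        return token in self.tokens
```

With shared passwords there is only one credential, so the equivalent of `revoke_app` is a password change that cuts off every application at once.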

This doesn’t even begin to address the phishing threat that OAuth encourages – its own “anti-pattern”. Anyone confused about this would do well to read Lachlan Hardy’s blog post about it from earlier in 2008.

All these are valid points — and Ben Ward has written an excellent post discussing the UX issues and options associated with OAuth — but they also miss something very important: you can’t store someone’s identity without having a relationship.

Digital identities exist to enable human experiences online, and if you store someone’s identity you have a relationship with them. So when third-party apps are forced into collecting usernames, passwords and any other snippets of someone’s identity, their users are forced into having a relationship with that company – whether or not the individual or the company wants it.

With technology we tend not to enable trust in the way most people use the term. Trust is based on relationships. In close relationships we make frequent, accurate observations that lead to better understanding; this process, however, requires investment and commitment. That said, a useful, good relationship provides value for all parties. Jamie Lewis has suggested that there are three types of relationship (on the web):

  1. Custodial Identities — identities are directly maintained by an organisation and a person has a direct relationship with the organisation;
  2. Contextual Identities — third parties are allowed to use some parts of an identity for certain purposes;
  3. Transactional Identities — credentials are passed for a limited time for a specific purpose to a third party.

Of course there are also some parts to identity which are shared and not wholly owned by any one party.

This mirrors how real-world identities work. Our banks, employers and governments maintain custodial identities; whereas a pub, validating your age before serving alcohol, needs only a yes/no question answered: are you over 18?
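The pub example makes the three identity types easy to sketch in code. The names below are just a restatement of Jamie Lewis’s taxonomy, and `over_18` is a hypothetical transactional check: the pub receives only a boolean, never the birth date that a custodian holds.

```python
from datetime import date
from enum import Enum

class IdentityKind(Enum):
    CUSTODIAL = "custodial"          # maintained directly by an organisation
    CONTEXTUAL = "contextual"        # parts of an identity, shared for a purpose
    TRANSACTIONAL = "transactional"  # short-lived answer for one transaction

def over_18(date_of_birth, today):
    """The pub's transactional question: a yes/no answer, not the birth date."""
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    return today.year - date_of_birth.year - (0 if had_birthday else 1) >= 18
```

The custodian (bank, government) evaluates the predicate; the relying party (the pub) only ever sees True or False, which is exactly the minimal disclosure the post argues third-party Twitter apps should get.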

Twitter acts as a custodian for part of my online identity, and I don’t want third-party applications that use the Twitter API to also act as custodians; but the lack of OAuth support means that, whether they or I like it, they have to. They should only have my transactional identity. Forcing them to hold a custodial identity places both parties (me and the service using the Twitter API) at risk and places unnecessary costs on the third-party service (whether they realise it or not!).

But, if I’m honest, I don’t really want Twitter to act as Custodian for my Identity either — I would rather they held my Contextual Identity and my OpenID provider provided the Custodial Identity. That way I can pick a provider I trust to provide a secure identity service and then authorise Twitter to use part of my identity for a specific purpose, in this case micro-blogging. Services using the Twitter API then either use a transactional identity or reuse the contextual identity. I can then control my online identity, those organisations that have invested in appropriate security can provide Custodial Identity services and an ecosystem of services can be built on top of that.


Just wanted to correct a couple of mistakes, as pointed out by Chris, below:

1. Twitter was hacked with a dictionary attack against an admin’s account. Not from phishing, and not from a third-party’s database with Twitter credentials.
2. The phishing scam worked because it tricked people into thinking that they received a real email from Twitter.

Neither OpenID nor OAuth would have prevented this (although that’s not to say Twitter shouldn’t implement OAuth). Sorry about that.

Interesting stuff from around the web 2008-12-06

Online identity just got really interesting and really competitive… let’s hope the open stack wins, not the proprietary ones.

Biggest Battle Yet For Social Networks: You, Your Identity And Your Data On The Open Web
Facebook has made its big press push for its ‘Facebook Connect’ service, MySpace has ‘Data Availability’ and Google ‘Friend Connect’. Sites that use these services get a slightly easier life, but the real value goes to the social networks. These services make users begin to think about their identity in terms of their MySpace profile or Facebook login as they use it to sign into their favourite services. That makes it even more likely that users will maintain their profiles on those services, add friends, etc. The real risk with Facebook is its proprietary login and data-sharing standards; MySpace is much better with its use of open standards, including OpenID, and its willingness to work with Google (Facebook has prohibited Google from getting in the middle).

Crime fighting team by ittybittiesforyou. Some rights reserved.

David Recordon considers “Getting OpenID Into the Browser” [O’Reilly Radar]
Google Chrome did a smart thing: Less. They unified the search box and address bar, since that’s what people do anyway. That gives us back precious pixels for the only thing that’s as important to an average web user as where they’re going: Who they are. Identity belongs in the browser.

Some interesting thoughts on near future of the web

User Styling – bit of custom css and you can get the site to look the way you want [24 ways via @fantasticlife]
Override a publisher’s styling, remove ads, whatever you like. It’s interesting to consider the implications if, as @fantasticlife suggests, this goes more mainstream, since it will change the role of design – the publisher gives you the data and you present it as you want.

Going Hyper-Local – Location Based Internet
Fire Eagle, Flickr, Twitter, Dopplr, BrightKite and many more help you tell the web about where you are – and then find people near you.

The enterprise is about control and the web is about emergence but for how long? [O’Reilly Radar]
I suspect it’s more likely the result of large scale system dynamics, where the culture of control follows from other constraints. If multiverse advocates are right and there are infinite parallel universes, I bet most of them have IT enterprises just like ours; at least in those shards that have similar corporate IT boundary conditions. Once you have GAAP, Sarbox, domain-specific regulation like HIPAA, quarterly expectations from “The Street,” decades of MIS legacy, and the talent acquisition realities that mature companies in mature industries face, the strange attractors in the system will pull most of those shards to roughly the same place. In other words, the IT enterprise is about control because large businesses in mature industries are about control. On the other hand, the web is about emergence because in this time, place, and with this technology discontinuity, emergence is the low energy state.

The Future of Ephemeral Conversation [Schneier on Security]
The Internet is the greatest generation gap since rock and roll. We’re now witnessing one aspect of that generation gap: the younger generation chats digitally, and the older generation treats those chats as written correspondence. Until our CEOs blog, our Congressmen Twitter, and our world leaders send each other LOLcats – until we have a Presidential election where both candidates have a complete history on social networking sites from before they were teenagers – we aren’t fully an information age society.

Some photo stuff

LIFE photo archive hosted by Google
Search millions of photographs from the LIFE photo archive, stretching from the 1750s to today. Most were never published and are now available for the first time through the joint work of LIFE and Google.

Find Flickr photos by colour [Multicolr Search Lab]
They have extracted the colours from 10 million of the most “interesting” Creative Commons images on Flickr and then use “visual similarity technology” so you can navigate the collection by colour.
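The underlying similarity technology isn’t published, but the idea can be crudely sketched: reduce each image to a representative colour, then rank images by distance in colour space. Mean-RGB and Euclidean distance below are naive stand-ins for whatever the real service does.

```python
def average_colour(pixels):
    """Crude 'dominant colour' estimate: mean RGB over a list of (r, g, b) pixels."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) // n,
            sum(p[1] for p in pixels) // n,
            sum(p[2] for p in pixels) // n)

def colour_distance(c1, c2):
    """Euclidean distance in RGB space -- a naive stand-in for visual similarity."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
```

Index every image by its average colour once, and “navigate by colour” becomes a nearest-neighbour search against the colour the user picked.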

Some BBC stuff

BBC Programmes iPhone webapp experiment
Another nice bit of hacking from Duncan – browse BBC TV and radio schedules on your iPhone, the iPhone way. Living further out of London, with longer train journeys, has improved his hacking output.

BBC builders: Tom Scott, and the team behind /programmes and /music
That’s me! Jemima Kiss has started interviewing folk at the BBC who are helping to build projects that people don’t hear about. She started with me, which was jolly nice.

How to help the network effect

Following my recent post considering BBC public value in the online world I was asked to write a piece for the BBC’s internal staff paper ariel. Here it is:

Front cover of ariel
Front cover of ariel

IF YOU READ the BBC’s internet blog you will know that we are considering the use of OpenID, an interesting, though widely misunderstood, technology that could benefit everyone using the web by extending its generative nature.

Technologies such as OpenID and its sister technology OAuth, and techniques such as Linked Data, provide benefits that the BBC should be helping the web at large to adopt.

It might seem a bit geeky and not something that most people get right now, but then almost nobody gets Transport Layer Security either, and I’m pleased that hasn’t stopped my bank implementing it; most people don’t understand HTTP but we all use it. The BBC could help foster the adoption of these technologies for the benefit of the web at large by adopting them, by promoting best practice and by actively engaging in their development.

Tim Berners-Lee, creator of the web, has proposed a set of simple rules ‘to do the web right’ and achieve a semantically interlinked web of resources, accessible to man and machine. These rules are known as Linked Data.

But how does following these principles help the BBC? And how does that help the web at large? How does it add public value? The short answer is that it provides a platform for others to build upon, and gives our audience a more coherent user experience.

If data is unconnected (as most of it is), those websites, and the journeys across them, are likely to be incoherent. The web’s power comes from being interconnected. The value of any piece of content online is greatly enhanced if it is interconnected. This is due to the network effect, the classic example being the telephone: the more people who own a telephone, the more valuable each telephone becomes. Adding a telephone to a network makes every other telephone more useful. Adding semantically meaningful links to the web adds context and allows others to discover more information.
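The telephone example has a neat arithmetic form, often attributed to Metcalfe: a network of n nodes has n(n−1)/2 possible pairwise links, so each new node benefits everyone already there. A one-line sketch:

```python
def possible_connections(n):
    """Distinct pairwise links in a fully connected network of n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

# Adding the 101st telephone creates 100 new possible conversations,
# benefitting every existing subscriber, not just the newcomer.
new_links = possible_connections(101) - possible_connections(100)
```

The same arithmetic is why a semantically meaningful link added to one BBC page raises the value of every page it connects to.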

For example, by building /programmes and /music in this fashion the new artist pages will become more useful by being joined to programmes – directly linking artist pages to those episodes that feature that artist. And the network effect goes both ways. Linking artists to programmes makes the programme pages more valuable – because there is more context, more discovery and more serendipity. The network effect really explodes once programmes and music are joined to the rest of the web.

The BBC has a role beyond its business needs because it can help create public value around useful technologies – and around its content for others to benefit.

BBC public value in the online world

The BBC is an interesting organisation – it isn’t motivated by profit and, unlike other public service broadcasters elsewhere in the world, it is very much part of the country’s mainstream broadcast entertainment ecosystem. Indeed, Stephen Fry has suggested that this mix of out-and-out public service broadcasting, more mainstream programming and stuff somewhere in the middle is vital if public service broadcasting is to have meaning. Stephen argues that if you want people to find and value public service programmes, then they need to be part of a broader entertainment offering.

In the broadcast world the BBC has a clear (albeit, in some quarters, controversial) public service role and a clear, well-developed modus operandi. That’s not to say that it might not, or indeed should not, change, but rather that right now the consensus is it’s doing the right thing, in the right way. But in the online space I don’t think things are as clear, even though the public purposes are the same for all platforms.

One reason the web is different from broadcast media is that it’s so new and so fluid. The web is not yet 20 years old and is still evolving at a phenomenal rate, both in terms of technologies and in terms of its application. This means that, with the web, one needs to deal with both the technology and the content. Treating the two separately, or assuming that the platform is sorted – in the way one can with traditional media – is impossible, or at least a foolish mistake.

Clearly, from a content perspective there is much the BBC could and indeed is doing on the web – much as it does in the broadcast space. But because there is something very special about the Web the BBC could also be adding real value over and above its content offering. It seems to me that there are at least two additional, distinct areas where the BBC could add public value. Firstly, through its size, the power of its brand and its non-commercial status, it could help with the adoption of technologies that benefit the Web population at large; and secondly by helping to semantically link up parts of the web.

Last week Zac posted an article about the recent OpenID Foundation Content Provider Advisory Committee, which the BBC hosted. Unfortunately OpenID is a widely misunderstood piece of technology, partially, I suspect, because people have got so used to the email+password-per-site paradigm, and partially because the name OpenID doesn’t really help people grok what it’s about. But that doesn’t mean it doesn’t provide real benefit to people using the web.

As I’ve discussed previously, email addresses are for contacting people, not for identifying them. Using email addresses for identification means the affordance is the wrong way round – I can send you an email but I can’t see who you are, what you’ve said about yourself, nor who’s in your social graph. In the real world this would be a bit like handing over a scrap of paper with your home address or telephone number on it as a means of identification. You wouldn’t do that, so why do it online?

As Zac points out, OpenID has yet to hit the mainstream – it’s still the preserve of Generation @. But if, as I do, you believe that technologies such as OpenID and OAuth provide genuine end user benefits then it is something that the BBC should be helping everyone else to adopt.

Sure, it might seem a bit geeky and not something that most people get right now, but then almost nobody gets transport layer security either, and I’m pleased it hasn’t stopped my bank implementing it; most people don’t understand HTTP but we all use it. The BBC could help foster the adoption of these technologies for the benefit of the web at large by, for example, adopting them itself, by promoting best practice and by actively engaging in their development.

Tim Berners-Lee has put forward four simple rules to do the web right:

  • Use URIs to identify things on the web as resources
  • Use HTTP so people can dereference them
  • Provide information about the resource when it is dereferenced
  • Include onward links so people can discover more things

If you follow these rules what you get is a highly interlinked web of resources – where each resource is linked to other resources that are contextually/semantically relevant. And if you also provide those resources in machine readable formats (as we have done with programmes and music) then you provide a platform that allows others to reuse your data.
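The four rules can be sketched as a toy resource store: each HTTP URI identifies a resource (rules 1 and 2), dereferencing it returns information about the resource (rule 3), and that information includes onward links to related resources (rule 4). The URIs and slugs below are illustrative, not real BBC endpoints.

```python
import json

# A toy linked-data store keyed by URI. Artist and programme link to each
# other, so a client can crawl from either one to contextually related things.
RESOURCES = {
    "http://example.org/music/artists/pink-floyd": {
        "name": "Pink Floyd",
        "links": ["http://example.org/programmes/b0000001"],
    },
    "http://example.org/programmes/b0000001": {
        "name": "An episode featuring Pink Floyd",
        "links": ["http://example.org/music/artists/pink-floyd"],
    },
}

def dereference(uri):
    """Return a machine-readable representation of a resource, onward links included."""
    doc = dict(RESOURCES[uri])
    doc["@id"] = uri
    return json.dumps(doc)
```

In practice the machine-readable representation would be RDF served via content negotiation rather than this ad-hoc JSON, but the shape – identifiable, dereferenceable, self-describing, interlinked – is the same.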

Unfortunately it appears that there is a nasty habit developing on the web whereby websites aren’t doing this and instead are only linking to themselves.

Follow Jay’s link and you come to a story that indeed doesn’t have any outbound links, except to other Times stories. Now, I understand the value of linking to other articles on your own site — everyone does it — but to do so exclusively is a small tear in the fabric of the web, a small tear that will grow much larger if it remains unchecked.

That’s bad for users. But how does following the Linking Open Data principles help the BBC? And more importantly how does that help the web at large? How does it add public value?

If data is unconnected it is very likely that those websites, and journeys across the web, will be incoherent. The web’s power comes from interconnected data. Publishing a web page, or any other piece of content, online is useful, but if it’s interconnected with other resources then its value is greatly enhanced. This is due to the network effect. The classic example of the network effect is the telephone: the more people that own a telephone, the more valuable each telephone becomes.

One consequence of the network effect is that the addition of a node by one individual indirectly benefits others who are also part of the network; in the telephone example, adding a telephone to the network makes every other telephone more useful. On the web, adding semantically meaningful links adds context to the page you are reading and allows you to discover other resources and read more about a given subject.

For example, by building /programmes and /music in this fashion our new artist pages will become much more useful by being joined to programmes – directly linking to those programmes that feature that artist. And of course the network effect goes both ways; it goes all ways. Linking artists to programmes also makes the programme pages more valuable – because there is now more context, more discovery and more serendipity. And that’s just within the BBC.

By joining BBC data, in this fashion, with the rest of the web, the network effect is magnified yet further. That benefits the BBC, but it also benefits the web at large, and that is important. The BBC has a role that transcends its business needs – it can help create public value around its content for others to benefit from (assuming, of course, there remains one non-discriminatory, free and open internet).

links for 2008-04-07