Interesting stuff from around the web 2008-10-06

The map is scaled to the number of radios in each territory. The most radios per person are in Norway - at more than 3 per person.

Lots and lots of clouds

Stephen Fry explains the principles of cloud computing and recommends a few services
Clever man, Stephen Fry; perhaps he could write a piece on OpenID next.

Richard Stallman on Cloud Computing: “Stupidity” [autonomo.us]
“I’m very supportive of [Stallman’s] concern about cloud computing, and I agree that it’s something that the Free Software and Free Culture communities need to address. But in rejecting all network computing, I think RMS has thrown out the baby with the bathwater.”

Can’t Open Your E-Mailbox? Good Luck [NYTimes.com]
Amidst all the hype around cloud computing, The New York Times points out that if Google locks down your Gmail login for whatever reason (like someone tried the wrong password too many times), you’re basically screwed.

Some lovely visualisations, one odd one

Worldmapper: The world as you’ve never seen it before
Interesting collection of maps, where territories are re-sized on each map according to the subject of interest. There are now nearly 600 maps.

AirTraffic Worldwide [YouTube]
A map of the world showing a simulation of all of the air traffic in a 24-hour period

Flickr Panda – strange, very strange
Panda vomiting photos – why the Panda? Who knows. Something to do with this.

Height – the observable universe from top to bottom [xkcd]
I don’t normally link to xkcd because, to be honest, I would simply be linking to every addition. But this one is particularly good.

Listen to TimBL: Link your Data, give it context

Is Linking to Yourself the Future of the Web? [O’Reilly Radar]
“Follow Jay’s link and you come to a story that indeed doesn’t have any outbound links, except to other Times stories. Now, I understand the value of linking to other articles on your own site — everyone does it — but to do so exclusively is a small tear in the fabric of the web, a small tear that will grow much larger if it remains unchecked.”

…and listen to Martin: don’t fall for BDUF

‘Requirement’ is inherently waterfallish. Agile methods violate this underlying assumption by intending to discover the ‘requirements’ during construction and after delivery. [martinfowler.com]
Everyone knows how big the difference is between what people say they want and what people actually need and use. By watching what people actually do with your application, you can find out what actually happens with the software – which can give you much more direct information than other sources. As a result I think more teams should consider adding this approach to their toolkit.

Cloud computing going full circle

Richard Stallman, GNU’s founder, recently warned that Cloud Computing is a trap.

One reason you should not use web applications to do your computing is that you lose control, it’s just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else’s web server, you’re defenceless. You’re putty in the hands of whoever developed that software.

'IBM's $10 Billion Machine' by jurvetson. Used under License.

Before we go any further I should probably try to explain what I mean by Cloud Computing, especially since Larry Ellison has described it as “complete gibberish”:

Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?

For starters it’s important to understand that Cloud Computing isn’t about doing anything new; instead it’s about applications that run on the web rather than on your desktop. There are four components that make up Cloud Computing. Moving down the stack from consumer-facing products, we have:

Applications – stuff like Gmail, Flickr and del.icio.us (yes, I know they’ve changed the name).

Application environments – frameworks where you can deploy your own code like Google’s App Engine and Microsoft’s Live Mesh.

Infrastructure, including storage – lower-level services that let you run your own applications on virtualised servers; stuff like Amazon’s EC2 and S3.

And then there are also clients – hardware devices that have been specifically designed to deliver cloud services, for example the iPhone and Google’s Android phones.

The reason Richard Stallman dislikes Cloud Computing is the same reason Steven Pemberton suggested, at this year’s XTech, that we should all have our own website.

There are inherent dangers for users of Web 2.0. For a start, by putting a lot of work into a Web site, you commit yourself to it, and lock yourself into their data formats. This is similar to data lock-in when you use a proprietary program. You commit yourself and lock yourself in. Moving comes at great cost.

…[Metcalfe’s law] postulates that the value of a network is proportional to the square of the number of nodes in the network. Simple maths shows that if you split a network into two, its value is halved. This is why it is good that there is a single email network, and bad that there are many instant messenger networks. It is why it is good that there is only one World Wide Web.

Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.

So does this mean that user contributed content is a Bad Thing? Not at all, it is the method of delivery and storage that is wrong. The future lies in better aggregators.
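The Metcalfe arithmetic in Pemberton’s argument is easy to check for yourself. A quick sketch in plain Python, where the only assumption is the law’s value-grows-with-n² model:

```python
def network_value(nodes):
    """Metcalfe's law: a network's value grows with the square of its node count."""
    return nodes ** 2

# One network of 1,000 users versus two disconnected networks of 500 each.
whole = network_value(1000)
split = network_value(500) + network_value(500)

print(whole)          # 1000000
print(split)          # 500000
print(split / whole)  # 0.5 - splitting the network really does halve its value
```

The same ratio holds whatever the starting size: (n/2)² + (n/2)² is always n²/2, which is Pemberton’s point about partitioning the Web into topical sub-Webs.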

But we’ve been here before haven’t we? It certainly sounds similar to the pre-Web era. Initially with IBM, and then with closed networks like CompuServe and America Online, we had companies that retained complete control of the environment. Third-party developers had limited or no access to the platform, and users of the system stored all their data on someone else’s hardware. For sure this model provided advantages: if something went wrong there was only one person you needed to contact to get it sorted, and someone else (who knew more about this stuff than you) could worry about keeping the system running, backing up your data and so on.

But there was a price to this convenience. You were effectively tied to the one provider (or at the very least it was expensive to move to a different one), and there was very little innovation or development of new applications – you had email, forums and content; what more would you want? And of course there was censorship – if one of these networks didn’t like what was being said, it could pull it.

At the other end of the spectrum there were highly specialised appliances like the Friden Flexowriter. They were designed to do one job and one job only, they couldn’t be upgraded but they were reliable and easy to learn. A bit like the iPhone.

Then along came the generalised PC – a computer that provided a platform anyone could own, anyone could write an application for and anyone could use to manage their data. And relatively soon after the advent of pre-assembled computers along came the Web: the ultimate generalised platform, one that provided an environment for anyone to build their own idea on and exploit data in a way never before realised. But there was a problem. Security and stability suffered.

PCs are a classic Disruptive Technology – in the early days they were pretty rubbish, but they let hobbyists tinker and play with the technology. Over time PCs got better (at a faster rate than people’s expectations rose) and soon you were able to do as much with a PC as you could with a mainframe, but with the added advantages of freedom and a much richer application ecosystem.

Another implication of Clayton Christensen’s Disruptive Technology theory is that as a technology evolves it moves through cycles. Initially a technology is unable to meet most people’s expectations, and as a result the engineers need to push the limits of what’s possible – the value is in the platform. But as the technology gets better and better, the engineers no longer need to push the limits of what’s possible, and the value switches from the platform to the components and to speed to market.

That is where we are now – the value is no longer with the platform, it’s with the components that run on the platform. And it’s no longer about functionality; it’s about performance and reliability. Because the value is with the applications, it makes sense for application developers to use infrastructure or application environments supplied by others. And it makes sense for customers to use Cloud Computing applications, because they are reliable and they let you focus on what interests you – a bit like the companies that used IBM mainframes. But if we make that deal I suspect we will find ourselves in the same situation as previous generations did: we won’t like the deal we’ve made, and we will move back to generalised, interoperable systems that let us retain control.

The mobile computing cloud needs OAuth

As Paul Miller notes, Cloud Computing is everywhere – we are pushing more and more data and services into the cloud. Accessed from mobile devices in particular, this creates an incredibly powerful and useful user experience. I love it. The way that I can access all sorts of services from my iPhone means that an already wonderful appliance becomes way more powerful. But not all is well in the land of mobile cloud computing; a nasty anti-pattern is developing. Thankfully there is a solution, and it’s OAuth.

"Mobile phone Zombies" by Edward B. Used under licence.

So what’s the problem then? Since Apple opened up the iPhone to third-party developers we have seen a heap of applications that connect you to your online services – there are apps that let you upload photos to Flickr, post to Twitter, see what’s going on in Facebook land – all sorts of stuff. The problem is the way some of them gain access to these services: by making you enter your credentials in the application itself, rather than authorising the application from the service.

Probably the best way to explain what I mean is to look at how it should work. The Pownce app is an example of doing it right, as is Mobile Foto – these applications rely on OAuth. This is how it works: rather than entering your username and password in the application, you are sent over to Safari to log into the website, and from there you authorise (via OAuth) the application to do its thing.

This might not sound so great – you could argue that the user experience would be better if you were kept within the application. But that would mean your login credentials being stored on your phone, and that means disclosing those credentials to a third party (the folks who wrote the app).

By using OAuth you log into Flickr, Pownce etc. and from there authorise the application to use the site – your credentials are kept safe, and if your iPhone gets stolen you can visit the site and disable access. Everything is where it should be, and that means your login details are safe.
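What makes this work is that the app never holds your password at all – it holds tokens issued by the service, and uses them to sign each request. A minimal sketch of the HMAC-SHA1 request signing from the OAuth 1.0 spec, using only Python’s standard library; the URL, keys and tokens here are made-up illustrations, not any real service’s credentials:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build an OAuth 1.0 HMAC-SHA1 signature for a request.

    The app holds a consumer secret plus a per-user token secret that the
    service issued when the user authorised the app in their browser -
    the user's actual password never touches the phone.
    """
    # 1. Normalise the parameters: sorted, percent-encoded name=value pairs.
    normalised = urlencode(sorted(params.items()), quote_via=quote)
    # 2. The signature base string: METHOD&url&params, each percent-encoded.
    base_string = "&".join(
        quote(part, safe="") for part in (method.upper(), url, normalised)
    )
    # 3. The signing key: consumer secret and token secret joined by '&'.
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials - in real use the token comes back from the
# authorise step in Safari, and revoking it on the site kills the signature.
sig = sign_request(
    "GET",
    "https://api.example.com/photos",
    {"oauth_consumer_key": "app-key", "oauth_token": "user-token"},
    consumer_secret="app-secret",
    token_secret="user-token-secret",
)
print(sig)  # a short base64 signature; a stolen phone never exposes the password
```

The nice property is visible in step 3: revoke the token on the website and the signing key changes, so every request from the stolen phone stops verifying – without you ever changing your password.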

To be fair to the iPhone app developers, this type of delegated authorisation isn’t always possible. Twitter, for example, still hasn’t implemented OAuth, and as a result if you want to use one of the growing number of iPhone Twitter apps you need to give up your username and password. I find this incredibly frustrating – especially for a service like Twitter where (according to Biz Stone, Twitter’s co-founder) “the API… has easily 10 times more traffic than the website“.