One reason you should not use web applications to do your computing is that you lose control. It’s just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else’s web server, you’re defenceless. You’re putty in the hands of whoever developed that software.
Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?
For starters, it’s important to understand that Cloud Computing isn’t about doing anything new; rather, it’s about applications that run on the web rather than on your desktop. There are four components that make up Cloud Computing. Moving down the stack from consumer-facing products we have:
Applications – stuff like GMail, Flickr and del.icio.us (yes I know they’ve changed the name).
Application environments – platforms that sit between the applications and the raw infrastructure, hosting and running code that you write.
Infrastructure, including storage – lower-level services that let you run your own applications on virtualised servers, stuff like Amazon’s EC2 and S3.
And then there are also clients – hardware devices that have been specifically designed to deliver cloud services, for example the iPhone and Google’s Android phones.
There are inherent dangers for users of Web 2.0. For a start, by putting a lot of work into a Web site, you commit yourself to it and lock yourself into its data formats. This is similar to the data lock-in you suffer when you use a proprietary program. Moving comes at great cost.
…[Metcalfe’s law] postulates that the value of a network is proportional to the square of the number of nodes in the network. Simple maths shows that if you split a network into two, its value is halved. This is why it is good that there is a single email network, and bad that there are many instant messenger networks. It is why it is good that there is only one World Wide Web.
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
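The arithmetic behind that "value is halved" claim is easy to check. Here is a minimal sketch, assuming the simple n² form of the law quoted above (the function name and the proportionality constant of 1 are my own choices, not from the original):

```python
def network_value(nodes: int) -> int:
    # Metcalfe's law: value is proportional to the square of the
    # number of nodes (proportionality constant taken as 1 here).
    return nodes ** 2

# One network of 1000 nodes, versus the same nodes split into two halves.
whole = network_value(1000)
split = 2 * network_value(500)

print(whole)          # 1000000
print(split)          # 500000
print(split / whole)  # 0.5 -- splitting the network in two halves its value
```

The same ratio holds for any even number of nodes: 2·(n/2)² = n²/2, exactly half the value of the undivided network.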
So does this mean that user-contributed content is a Bad Thing? Not at all; it is the method of delivery and storage that is wrong. The future lies in better aggregators.
But we’ve been here before, haven’t we? It certainly sounds similar to the pre-Web era. Initially with IBM, and then with closed networks like CompuServe and America Online, we had companies that retained complete control of the environment. Third-party developers had limited or no access to the platform, and users of the system stored all their data on someone else’s hardware. For sure, this model provided advantages: if something went wrong there was only one person you needed to contact to get it sorted, and someone else (who knew more about this stuff than you) could worry about keeping the system running, backing up your data and so on.
But there was a price for this convenience. You were effectively tied to the one provider (or at the very least it was expensive to move to a different one), and there was very little innovation or development of new applications – you had email, forums and content, what more would you want? And of course there was censorship – if one of these networks didn’t like what was being said, it could pull the content.
At the other end of the spectrum there were highly specialised appliances like the Friden Flexowriter. They were designed to do one job and one job only; they couldn’t be upgraded, but they were reliable and easy to learn. A bit like the iPhone.
Then along came the generalised PC – computers that provided a platform that anyone could own, anyone could write an application for and anyone could use to manage their data. And relatively soon after the advent of pre-assembled computers along came the Web: the ultimate generalised platform, one that provided an environment for anyone to build their own idea on and exploit data in a way never before realised. But there was a problem: security and stability suffered.
PCs are a classic Disruptive Technology – in the early days they were pretty rubbish, but they let hobbyists tinker and play with the technology. Over time PCs got better (at a faster rate than people’s expectations rose), and soon you could do as much with a PC as you could with a Mainframe, but with the added advantages of freedom and a much richer application ecosystem.
Another implication of Clayton Christensen’s Disruptive Technology theory is that as a technology evolves it moves through cycles. Initially a technology is unable to meet most people’s expectations, and as a result the engineers need to push the limits of what’s possible; the value is in the platform. But as the technology gets better and better, the engineers no longer need to push the limits of what’s possible, and the value switches from the platform to the components and to speed to market.
That is where we are now – the value is no longer with the platform, it’s with the components that run on the platform. And it’s no longer about functionality; it’s more about performance and reliability. And because the value is with the applications, it makes sense for application developers to use Infrastructure or Application Environments supplied by others. And it makes sense for customers to use Cloud Computing applications, because they are reliable and they let you focus on what interests you. A bit like the companies that used IBM Mainframes. But if we make that deal, I suspect we will end up in the same situation as previous generations found themselves in – we won’t like the deal we’ve made, and we will move back to generalised, interoperable systems that let us retain control.