A recession’s silver lining is innovation

Following the dot-com boom of the late 1990s, when anyone and everyone who could code worked all hours to realise their ideas, there was a collapse and a lot of people found themselves unemployed. Before the collapse some people made a lot of money; of course, some of them then went and lost it, and some people just worked long hours for no real benefit. But that’s not really the point. The point is that after the dot-com bubble burst we saw the emergence of new tech companies that laid the foundation for the whole Web 2.0 thing.

"That was supposed to be going up wasn't it?" by rednuht. Some rights reserved.
"That was supposed to be going up wasn't it?" by rednuht. Some rights reserved.

During the late ’90s everyone was busy, busy, busy doing stuff for paying clients, and certainly in the early days there was some genuine innovation. But there was also a lot of dross, and all those client demands meant we didn’t always have the time to play with the medium and try out new ideas. Following the collapse in 2001, people were suddenly able to explore the medium and develop new ideas. That’s not to say there weren’t economic pressures on those development teams; in many ways the pressures were more acute because there wasn’t a VC buffering your cash flow. As one of those companies put it, you needed to get real:

Getting Real is about skipping all the stuff that represents real (charts, graphs, boxes, arrows, schematics, wireframes, etc.) and actually building the real thing. […]

Getting Real delivers just what customers need and eliminates anything they don’t.

But the lack of other people’s deadlines, and the massive number of unemployed geeks playing with stuff and working on what interested them rather than what was required for the next deadline, gave us an amazing range of technologies, including: blogging as we now know it, a rather popular photo-sharing site, the first social bookmarking site utilising a new design pattern, and a new MVC framework using an obscure language. Indeed Ruby was itself created during Japan’s recession in the ’90s. And on a much smaller scale, Dom Sagolla’s recent post about how Twitter was born shows how similar forces within a company created similar results.

It seems that when we have unemployment among geeks we see true innovation: genuinely new ideas coming to market. During the good times, when employment is high, we tend to see a raft of “me-toos” commissioned by people who too often don’t really understand the medium and aren’t motivated to come up with new ideas, instead tending to focus on how to make existing ideas bigger, better and faster because that’s lower risk. This period certainly has its advantages; unfortunately innovation doesn’t seem to be one of them.

The current economic depression is clearly bad news, potentially very bad news indeed, but it might mean we’re in for another period of innovation as more and more geeks find themselves unemployed and start setting up on their own.

Online communities are about people, stupid

Flickr, Twitter and Facebook all work because they are primarily about people. Photos, status updates, messages and comments are all secondary; they are the social glue that helps make the community work. And if you doubt me, consider what Heather Powazek Champ, the Director of Community at Flickr, has reported:

People have fallen in love on Flickr. Some have proposed over Flickr. It’s just a delightful thing for so many people, and I get to spend my days with them.

Liverpool Street station crowd blur. By David Sims, some rights reserved.

Flickr is about the social nature of photography. Strangers meet online to comment on each other’s photography, form and join groups based on common interests, and share photos that document and categorise the visible world. Likewise, Twitter isn’t simply a stream of the world’s consciousness; it’s a set of semi-overlapping streams of activity: some public, some private and some semi-public.

It seems to me that it is the semi-public, semi-overlapping aspects that make services like Flickr and Twitter work so well, because they help reinforce the social. Consider the alternative: YouTube, for all its success as a video uploading and publishing service, is a mess when it comes to community. In fact there’s no community at all, just banal comments that often don’t get much better than “LOL”.

Flickr, on the other hand, doesn’t try to be an all-purpose photo publishing service; it’s a photo-sharing service primarily aimed at sharing photos with your friends, family and others with a common interest. That’s not to say there isn’t also a public sharing aspect to Flickr; indeed most of the photos on this blog (including the one used in this post) are from Flickr and, in the main, from people I don’t know. There is a public aspect to Flickr, just as there is a public aspect to Twitter, but these aren’t the primary use cases. The primary use cases are those associated with the semi-public: finding and connecting with friends; sharing photos, ideas and thoughts with them; that sort of thing.

The semi-public nature of these services also means that the community can, and does, develop and enforce its own rules. With Flickr these are site-wide rules; as Heather Powazek Champ puts it:

“We don’t need to be the photo-sharing site for all people. We don’t need to take all comers. It’s important to me that Flickr was built on certain principles.” And so they’re defended — and evaluated — constantly.

With Twitter the rules are more personal and more contextual, and as a result so are the communities. You get to choose who you follow, and only those people are then part of your timeline. If you don’t follow someone then you won’t be bothered with their updates (and they can’t direct message you).

This shouldn’t be surprising, since it is pretty much what happens in the real world. You have networks of friends whose conversations overlap, sometimes held in private and sometimes semi-publicly.

So what does all this mean? Well, for one thing it means that unless you want banal comments and no real community, you need to build people into your service as primary objects rather than treating their comments, content and other stuff as the primary objects. You also need to work out how to allow semi-overlapping activity streams. It probably also means that you shouldn’t design for ‘user-generated content’, since that will tend to make you think about the users’ content rather than the people and their community.
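To make that concrete, here is a minimal, hypothetical sketch (in Python, with invented names like Person, Photo and timeline, not any particular site’s schema) of what it means to treat people and their relationships as the primary objects, with content and a semi-overlapping, semi-public stream hanging off them:

```python
from dataclasses import dataclass, field
from typing import List

# People and their relationships are the primary objects;
# content points back at people rather than the other way around.

@dataclass
class Person:
    username: str
    follows: List["Person"] = field(default_factory=list)

@dataclass
class Photo:
    owner: Person                   # every piece of content belongs to a person
    title: str
    visibility: str = "contacts"    # "private", "contacts" or "public"

def timeline(viewer: Person, photos: List[Photo]) -> List[Photo]:
    """A semi-overlapping activity stream: the viewer sees content from
    the people they follow, plus anything that is fully public."""
    return [
        p for p in photos
        if p.visibility == "public"
        or (p.owner in viewer.follows and p.visibility != "private")
    ]

alice = Person("alice")
bob = Person("bob", follows=[alice])
photos = [Photo(alice, "Liverpool Street at rush hour")]
print([p.title for p in timeline(bob, photos)])  # bob follows alice, so he sees it
```

The point of the sketch is simply that the community falls out of the model: who sees what is a function of the relationships between people, not a property of the content itself.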

Coffee houses and civil liberty

For 11 days in 1675 King Charles II tried to suppress London’s coffee houses because they were regarded as “places where the disaffected met, and spread scandalous reports concerning the conduct of His Majesty and his Ministers”.

Spectateur by Jessie Romaneix. Used under license.

Seventeenth-century coffee houses were great social levellers, open to men from all walks of life whatever their social status, and as a result were associated with equality and republicanism. Because they became meeting places where business could be carried on and news exchanged, they were influential: they provided bankers, intellectuals and artists with a forum to discuss political developments and to do business. Indeed Lloyd’s of London and the London Stock Exchange both owe their very existence to the London coffee houses.

But with their popularity came controversy.

In 1674 The Women’s Petition Against Coffee was set up in London. Women complained that men were never to be found at home during times of domestic crises, since they were always enjoying themselves in the coffee houses. They circulated a petition protesting “the grand inconveniences accruing to their sex from the excessive use of the drying and enfeebling liquor”.

Strange to think that something as everyday as a coffee shop could on the one hand stir up such emotion and political fear, and on the other provide a platform for some of the oldest and most successful companies in the world. Stranger yet that we are still making the same wrong-headed decisions 333 years later.

Yesterday saw BBC Newsbeat report that the “US Army warns of Twitter danger”:

US intelligence agencies are worried that terrorists might start to use new communication technologies like the blogging site Twitter to plan and organise attacks.

It goes on to quote the US Army report saying that:

Twitter is already used by some members to post and support extremist ideologies and perspectives.

Terrorists could theoretically use Twitter social networking in the US as an operational tool.

This follows the UK government’s desire to develop a central database of all mobile phone and internet traffic, giving the police and security services easier access to the data.

As with coffee houses in the 17th century, the Internet is still a new thing: challenging and scary for some, while at the same time providing an environment where communication and commerce can flourish for others. Unfortunately, this presents a challenge for society today. The Internet is something that has happened to our current generation of policy makers, rather than something they have grown up with, and as long as that is true it will be seen through the eyes of those who regard it as something special and different, just as coffee houses were to King Charles II. Or as Douglas Adams put it:

…it’s [the Internet] very new to us. Newsreaders still feel it is worth a special and rather worrying mention if, for instance, a crime was planned by people ‘over the Internet.’ They don’t bother to mention when criminals use the telephone or the M4, or discuss their dastardly plans ‘over a cup of tea,’ though each of these was new and controversial in their day.

But there are differences. For starters, in 1675 King Charles II realised his mistake and reversed his decision after 11 days; today’s politicians don’t appear to be as humble as 17th-century kings, which is a little worrying. More importantly, today’s technologies provide massive leverage, and in situations like this that’s a problem.

When a government gives a QUANGO, the police or the security services a new power, that doesn’t necessarily mean the power can be acted upon. Indeed there are lots of pieces of legislation that aren’t acted upon because they are just silly, but there are also laws that aren’t acted upon because they are too difficult or too expensive to enforce, or at least too expensive to enforce indiscriminately.

Society has, to date, had a useful safety valve: the police need to apply common sense and intelligence when applying their powers. There is no practical way they can apply all laws, as written, indiscriminately; instead they have to decide where and how best to apply them. In return, society and individuals regulate their own activity, taking responsibility for their actions. Most people choose not to break the law, not because they think they will get caught and punished, but because we moderate our actions based on social norms and our own moral compass. The police and security services provide a backstop should this go wrong.

But things are changing. Big centralised databases that record everyone’s phone calls and email, keep track of DNA profiles, or otherwise store your identity make it much, much easier for a government to enforce a piece of legislation universally and indiscriminately. The cost of running a query across a database of phone calls is practically nil, which means a government no longer needs to prioritise its searches as it once did. There’s no point: you might as well just search the whole database for suspicious patterns, since it costs next to nothing to do so.

Yes, people use the Internet to do bad things, and quite possibly Twitter is one of the services that bad people use. But they also plan bad things in coffee houses, and for the last 300-odd years we’ve recognised that trying to legislate against coffee houses is bad for society. I suspect that in generations to come we will view the Internet in the same way: recognising that bad people do bad things, and that one of the places they do them is on the Internet, but that the Internet is just another platform, like the coffee house.

Interesting Semantic Web links

Below are the links recommended by friendly Twitter folk.

The Semantic Web in Action [Scientific American]
A set of technologies that provide a common language for representing data that could be understood by all kinds of software agents; ontologies—sets of statements—that translate information from disparate databases into common terms; and rules that allow software agents to reason about the information described in those terms. The data format, ontologies and reasoning software would operate like one big application on the World Wide Web, analyzing all the raw data stored in online databases as well as all the data about the text, images, video and communications the Web contained. Like the Web itself, the Semantic Web would grow in a grassroots fashion, only this time aided by working groups within the World Wide Web Consortium, which helps to advance the global medium.

The Giant Global Graph
The WWW increases the power we have as users. The realization was “It isn’t the computers, but the documents which are interesting”. Now you could browse around a sea of documents without having to worry about which computer they were stored on. Simpler, more powerful. Obvious, really.

Now, people are making another mental move. There is realization now, “It’s not the documents, it is the things they are about which are important”. Obvious, really.

Sir Tim Berners-Lee: Semantic Web is open for business [ZDNet.com]
A write-up of an interview with TimBL. Take-away story: through little steps, using technologies such as SPARQL and approaches such as LOD, we are already seeing the Semantic Web taking hold.

Interview with Tim Berners-Lee on the Semantic Web [YouTube]
TimBL discussing the itch the semantic web will scratch. How making data available the webby way will allow a whole new class of applications to be developed and how those might be used.

Intro to the Semantic Web [YouTube]
Nice introductory video.

Does the Semantic Web matter? Paul Miller thinks so [ZDNet.com]
Much that was once amazing is now taken for granted. Many that were once ‘the next big thing’ are no more. The number of people connected, the ways in which they connect, and the things they seek to do once online grow every day, yet the fundamental means of connection between all of these people, all of these places, and all of these things remains the dumb hyperlink. A simple ‘look here.’ A blind pointer into the Void. An impediment to further progress. This is what the so-called Semantic Web sets out to address. All of the specifications, all of the technology, are about enabling the description of ’stuff’ – and the connections between one piece of stuff and another – to be declared in ways that are explicit, intelligible and actionable to both humans and software applications acting on their behalf.

Native to a Web of Data [Tom Coates, plasticbag.org]
Tom’s presentation on the web of data… full of lots of good stuff.

Following your nose to the web of data [inkdroid]
The philosophy is quite different from other data discovery methods, such as the typical web2.0 APIs of Flickr, Amazon, YouTube, Facebook, Google, etc., which all differ in their implementation details and require you to digest their API documentation before you can do anything useful. Contrast this with the Web of Data which uses the ubiquitous technologies of URIs and HTTP plus the secret sauce of the RDF triple.

Tim O’Reilly: Web 2.0 Is About Controlling Data [wired.com]
Why, despite many attempts, have we seen nobody able to dethrone eBay? Well, it’s because there are network effects at work in auctions. You have a critical mass of buyers and sellers. We’re seeing that with Google AdWords — it’s just a bigger and better marketplace. There are these tipping points where these services really become monopolistic.

Many thanks to…

Yves Raimond, Chris Sizemore, Richard Northover, Michael Smethurst, Zach Beauvais and Leigh Dodds.

The mobile computing cloud needs OAuth

As Paul Miller notes, cloud computing is everywhere: we are pushing more and more data and services into the cloud. Accessed from mobile devices in particular, this creates an incredibly powerful and useful user experience. I love it. Being able to access all sorts of services from my iPhone makes an already wonderful appliance far more powerful. But not all is well in the land of mobile cloud computing; a nasty anti-pattern is developing. Thankfully there is a solution, and it’s OAuth.

"Mobile phone Zombies" by Edward B. Used under licence.
"Mobile phone Zombies" by Edward B. Used under licence.

So what’s the problem? Since Apple opened up the iPhone to third-party developers we have seen a heap of applications that connect you to your online services: apps that let you upload photos to Flickr, post to Twitter, see what’s going on in Facebook land, all sorts of stuff. The problem is the way some of them gain access to these services: by making you enter your credentials in the application itself rather than seeking authorisation for the application from the service.

Probably the best way to explain what I mean is to look at how it should work. The Pownce app is an example of doing it right, as is Mobile Foto; these applications rely on OAuth. Rather than entering your username and password in the application, you are sent over to Safari to log into the website, and from there you authorise (via OAuth) the application to do its thing.

This might not sound so great; you could argue that the user experience would be better if you were kept within the application. But that would mean your login credentials had to be stored on your phone, and it would also mean disclosing those credentials to a third party (the folks who wrote the app).

By using OAuth you log into Flickr, Pownce and so on, and from there authorise the application to use the site. Your credentials are kept safe, and if your iPhone gets stolen you can visit the site and revoke the application’s access. Everything is where it should be, and that means your login details are safe.
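For the curious, here is a rough sketch of what a three-legged OAuth 1.0a exchange looks like from the application’s side. It uses the Python requests-oauthlib library purely for illustration; the endpoint URLs, keys and callback scheme are placeholders, not any real service’s API:

```python
# A minimal sketch of a three-legged OAuth 1.0a flow from the client app's
# point of view. Endpoints, keys and the callback URI are placeholders.
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "my-app-key"        # issued to the app by the service
CONSUMER_SECRET = "my-app-secret"

REQUEST_TOKEN_URL = "https://api.example.com/oauth/request_token"
AUTHORIZE_URL = "https://www.example.com/oauth/authorize"
ACCESS_TOKEN_URL = "https://api.example.com/oauth/access_token"

# 1. The app asks the service for a temporary request token.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri="myapp://oauth-callback")
request_token = oauth.fetch_request_token(REQUEST_TOKEN_URL)

# 2. The user is bounced out to the browser (Safari, on the iPhone) to log in
#    and approve the app. The app never sees the password.
print("Open this in the browser:", oauth.authorization_url(AUTHORIZE_URL))

# 3. The service hands back a verifier; the app swaps the approved request
#    token for a long-lived access token.
verifier = input("Paste the oauth_verifier returned by the service: ")
access_token = oauth.fetch_access_token(ACCESS_TOKEN_URL, verifier=verifier)

# 4. From now on requests are signed with the access token, which the user
#    can revoke on the website at any time without changing their password.
api = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                    resource_owner_key=access_token["oauth_token"],
                    resource_owner_secret=access_token["oauth_token_secret"])
response = api.get("https://api.example.com/photos/recent")
```

The important bit is step 2: the password is only ever typed into the service’s own website, and what the app ends up holding is a revocable token rather than your credentials.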

To be fair to the iPhone app developers, this type of delegated authorisation isn’t always possible. Twitter, for example, still hasn’t implemented OAuth, and as a result if you want to use one of the growing number of iPhone Twitter apps you need to give up your username and password. I find this incredibly frustrating, especially from a service like Twitter where (according to Biz Stone, Twitter’s co-founder) “the API… has easily 10 times more traffic than the website”.

The URL shortening anti-pattern

Along with others, I’ve recently started to grok Twitter. It took a while, but I now find it a fantastic way to keep in touch with folk I know or respect, and to catch up on snippets of info from news services around the web. It’s great.

What makes Twitter particularly useful as a way of keeping in touch with a large number of people is the limit of 140 characters per ‘tweet’. That’s it: each tweet is 140 characters or less. But this also means that if you tweet about a URL, that URL eats up a lot of those 140 characters. To help solve this problem Twitter uses TinyURL to shorten URLs. That solves one problem, but unfortunately it creates a new one.

Example of poor URL design

URLs are important. They are at the very heart of the idea behind Linked Data, the Semantic Web and Web 2.0, because if you can’t point to a resource on the web then it might as well not exist, and that means URLs need to be persistent. URLs also matter because they tell you about the provenance of a resource, and that helps you decide how important or trustworthy it is likely to be.

URL shortening services such as TinyURL or RURL are very bad news because they break the web. They don’t provide stable references because they act as an extra level of indirection and therefore a single point of failure. URL shortening services, then, are an anti-pattern:

In computer science, anti-patterns are specific repeated practices that appear initially to be beneficial, but ultimately result in bad consequences that outweigh the hoped-for advantages.

URL shortening services create opaque URLs: the ultimate destination of the URL is hidden from both the user and software. This might not sound like such a big deal, but it does make it easier to send people to spam or malware sites (which is why Qurl and jtty closed, breaking all their shortened URLs in the process). And that highlights the real problem: they introduce a dependency on a third party that might go belly up. If that third party closes down, all the URLs using that service break, and because they are opaque you’ve no idea where they originally pointed.
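As an aside, a client can at least unmask a short URL before trusting it by asking the shortener where it redirects to, without actually following the redirect. A minimal sketch in Python, assuming the shortener answers with a standard HTTP 30x redirect (the short URL shown is just an illustrative placeholder):

```python
# Sketch: reveal where a shortened URL points without visiting the destination,
# assuming the shortening service responds with a standard HTTP redirect.
import requests

def expand(short_url: str) -> str:
    """Return the Location header the shortener redirects to."""
    response = requests.head(short_url, allow_redirects=False, timeout=5)
    if response.is_redirect:
        return response.headers["Location"]
    return short_url  # not a redirect after all

print(expand("http://tinyurl.com/example"))
```

Of course this only works while the shortening service is still up, which is rather the point.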

And even if the service doesn’t shut down, there would be nothing you could do if it decided to censor content. The Chinese Communist Party, for example, might demand that TinyURL remap all the URLs it decided were inappropriate to state propaganda pages. You couldn’t stop them.

But of course we don’t need to invoke such Machiavellian scenarios to have a problem. URL shortening services have a finite number of available URLs. Some shortening services, like RURL, use three-character codes (e.g. http://rurl.org/lbt), which means these more aggressive URL shortening services have only about 250,000 possible unique three-character short URLs. Once they’ve all been used, they either need to add more characters to their URLs or start to recycle old ones. And once you’ve started to recycle old URLs your karma really will suffer. (TinyURL uses six characters, so the problem will take a lot longer to materialise!)
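For what it’s worth, here is the back-of-the-envelope arithmetic behind those numbers, assuming the codes are case-sensitive alphanumerics (26 lower-case letters, 26 upper-case letters and 10 digits, i.e. 62 symbols):

```python
# Back-of-the-envelope count of possible short codes, assuming a
# case-sensitive alphanumeric alphabet (26 + 26 + 10 = 62 symbols).
ALPHABET_SIZE = 26 + 26 + 10  # a-z, A-Z, 0-9

for length in (3, 6):
    print(f"{length}-character codes: {ALPHABET_SIZE ** length:,}")

# 3-character codes:        238,328  (roughly the 250,000 quoted above)
# 6-character codes: 56,800,235,584
```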

There is an argument that on services such as Twitter the permanence of the URL isn’t such an issue: after all, the whole point of Twitter is to provide transitory, short-lived announcements, and Twitter isn’t intended to be an archive. And the fact that the provenance of the URL is obfuscated maybe doesn’t matter too much either, since you know who posted the link. All of that is true, but it still causes a problem when TinyURL goes down, as it did last November, and it also reinforces the anti-pattern, which is bad.

Bottom line: URLs should remain naked; adding this level of indirection is just wrong. The Internet isn’t supposed to work via such intermediate services; it was designed to avoid exactly this kind of single point of failure that can so easily break large parts of the web.

Of course, simply saying ‘don’t use URL shortening services’ isn’t going to work, especially on services such as Twitter where there is a real need for short URLs. What it does mean is that if you’re designing a website you need to think about URL design, and that includes the length of your URLs. And if you’re linking to something from a forum, wiki, blog or anything else with permanence, please don’t shorten the URL; keep it naked. Thanks.

Photo: Example of poor URL design, by Frank Farm. Used under licence.