UGC: it’s rude, it’s wrong and it misses the point

Despite recent reports that blogging is dead, traditional media companies are still rushing to embrace UGC – User Generated Content – and in many ways that’s great. Except that User Generated Content is the wrong framing, and so it risks failing to deliver the benefits it might. I also find it a rather rude term.


Newspapers and media companies are all trying to embrace UGC — they are blogging and letting folk comment on some of their articles — and if Adam Dooley is right, with good reason: he suggests that UGC might be saving the newspapers.

I don’t think it’s coincidental that this [growth in] popularity has come since many papers have embraced both the Internet’s immediacy (real time news is the thing) and its ability to foster debate and discussion with readers. It’s also come since major papers such as the New York Times have taken the locks off their content making most or all of it free online.

But, depressingly, UGC is also seen by some as no more than a way to get content on the cheap from a bunch of mindless amateurs, geeks and attention seekers. This view, and indeed the very term itself, helps to create a dichotomy between professional journalists and the like on one side and everybody else on the other. As Scott Karp points out:

There is a revolution in media because people who create blogs and MySpace pages ARE publishers, and more importantly, they are now on equal footing with the “big,” “traditional” publishers. There has been a leveling of the playing field that renders largely meaningless the distinction between “users” and “publishers” — we’re all publishers now, and we’re all competing for the finite pie of attention. The problem is that the discourse on trends in online media still clings to the language of “us” and “them,” when it is all about the breakdown of that distinction.

Sure, most bloggers don’t have the audience of the online newspapers and media companies, and there are plenty of people who, as the New Scientist article points out, are simply attention seekers. But that still doesn’t make them ‘users’, nor does it mean that they’re ‘generating content’ any more than any other publisher – indeed one might argue that they are less ‘content generators’ than professional journalists. As I sit here writing this post, am I a user? If I am, I have no idea what I’m using other than WordPress; and if I am, then journalists must likewise be users of their CMS. I know one thing for sure: I don’t think of myself as a user of someone’s site, and I don’t create content for them. I suspect most people are the same.

Bloggers, those who contribute to Wikipedia, and others who publish content on the Web are amateur publishers — in the same way that amateur sportsmen and women are amateur athletes, whatever their ability — until they give up their day job. But that doesn’t necessarily make them any less knowledgeable about the subject they are writing about. Indeed an ‘amateur publisher’ might well know much more about their subject than a professional journalist, because they have direct personal experience of it: whether that be a technical blog by someone who helps make the technology, a news story on Wikinews or BreakingNewsOn written by someone who was there and experienced the events, or even the man who invented the Web. Are any of these people doing UGC? I don’t know what they think – but I know that when I write for this blog, or upload a photo to Flickr, I don’t think I’m generating user content; I’m not doing UGC.

It seems to me that newspapers and media companies need to work to understand how amateur publishers and others can contribute. Not that that is easy — the best bloggers know their subject inside-out, more so than any professional journalist — but equally there is plenty of drivel out there, in both the amateur and professional spheres. For sure there are dreadful blogs, and YouTube is full of inane videos and fatuous comments; but equally, partisan news outlets like Fox News and the Daily Mail present biased, misleading and often downright inaccurate reporting. In the week of the US Presidential Election it is worth considering whether Barack Obama’s use of the Internet — including the role of amateur publishers, UGC if you like — helped dull the effect of such biased news reporting, which has historically had a significant role.

The trick, then, is to find the best content, whoever has written it, and bring it to the fore for people to read and debate; to understand what it is about the Web that makes it an effective communication medium and to harness that in whatever way makes sense for each context. To consider the Web, as the Culture and Media Secretary Andy Burnham patronisingly does, as “…an excellent source of casual opinion” is to fail to recognise the value that debate and discussion can bring to a subject.

Media companies should embrace the generative nature of the web

Generativity – the ability to remix different pieces of the web or deploy new code without gatekeepers, so that anyone can repurpose, remix or reuse the original content or service for a different purpose – is going to be at the heart of successful media companies.


As Jonathan Zittrain points out in The Future of the Internet (and how to stop it) the web’s success is largely because it is a generative platform.

The Internet is also a generative system to its very core as is each and every layer built upon this core. This means that anyone can build upon the work of those that went before them – this is why the Internet architecture, to this day, is still delivering decentralized innovation.

This is true at a technological level, for example, XMPP, OAuth and OpenID are all technologies that have been invented because the technology layers upon which they are built are open, adaptable and easy for others to reuse and master. It is also true at the content level – Wikipedia is only possible because it is built as a true web citizen, likewise blogging platforms and services such as MusicBrainz – these services allow anyone to create or modify content without the need for strict rules and controls.

But what has this got to do with the success or otherwise of any media company or any content publisher? After all just because the underlying technology stack is generative doesn’t mean that what you build must be generative. There are, after all, plenty of successful walled gardens and tethered appliances out there. The answer, in part, depends on what you believe the future of the Web will look like.

Tim Berners-Lee presents a pretty compelling view in his article on The Giant Global Graph. In it he explains how the evolution of the Internet has seen a move from a network of computers, through a web of documents, to what is now emerging as a ‘web of concepts’.

[The Internet] made life simpler and more powerful. It made it simpler because instead of having to navigate phone lines from one computer to the next, you could write programs as though the net were just one big cloud, where messages went in at your computer and came out at the destination one. The realization was, “It isn’t the cables, it is the computers which are interesting”. The Net was designed to allow the computers to be seen without having to see the cables. […]

The WWW increases the power we have as users again. The realization was “It isn’t the computers, but the documents which are interesting”. Now you could browse around a sea of documents without having to worry about which computer they were stored on. Simpler, more powerful. Obvious, really. […]

Now, people are making another mental move. There is realization now, “It’s not the documents, it is the things they are about which are important”. Obvious, really.

If you believe this – if you believe that there is a move from a web of documents to a web of concepts – then you can start to see why media companies will need to start to publish data the right way: publishing it so that they, and others, can help people find the things they are interested in. How does this happen? For starters we need a mechanism by which we can identify things, and the relationships between them, at a level above that of the document. And that’s just what the semantic web technologies are for – they give different organisations a common way of describing the relationships between things. For example, the Programmes Ontology allows any media company to describe the nature of a programme; the Music Ontology, any artist, release or label.
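The underlying idea can be sketched very simply: every statement is a (subject, predicate, object) triple, where shared identifiers let anyone ask the same questions of the data. The identifiers below are illustrative stand-ins, not the real Programmes Ontology terms.

```python
# A minimal sketch of the semantic-web model: facts about things expressed
# as (subject, predicate, object) triples. The "ex:", "po:" and "dc:" URIs
# here are made up for illustration, not actual ontology terms.
triples = [
    ("ex:desert-island-discs", "rdf:type", "po:Brand"),
    ("ex:desert-island-discs", "dc:title", "Desert Island Discs"),
    ("ex:episode-42", "po:episode_of", "ex:desert-island-discs"),
    ("ex:episode-42", "dc:title", "Castaway Special"),
]

def objects_of(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Any consumer that understands the shared vocabulary can ask:
# which brand does this episode belong to?
print(objects_of("ex:episode-42", "po:episode_of"))
```

Because the vocabulary is shared rather than site-specific, a third party needs no bespoke API documentation to interpret the statements – only the common way of describing relationships.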

This implies a couple of different but related things. Firstly, it highlights the importance of links. Links are an expression of a person’s interests. I choose what to link to from this blog – which words, which subjects to link from, and where to – and my choice of links provides you with a view of how I see a subject beyond what I write here. The links give you insight into who I trust and what I read. And of course they allow others to aggregate my content around those subjects.

Secondly, it implies that we need a common way of doing things – a way that allows others to build with, and on top of, the original publisher’s content. This isn’t about giving up your rights over your content; rather it is about letting it be connected to content from peer sites, about joining contextually relevant information from other sites and other applications. As Tim Berners-Lee points out, this is similar to the transition we had to make in going from interconnected computers to the Web.
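What “joining contextually relevant information from other sites” means in practice can be sketched as follows: once two publishers describe the same thing with a shared identifier, combining their data is just pooling the statements. All identifiers here are hypothetical.

```python
# Hedged sketch: two publishers, each exposing triples about the same
# artist using a shared subject URI (all identifiers invented).
site_a = {
    ("ex:artist/radiohead", "mo:name", "Radiohead"),
    ("ex:artist/radiohead", "rdf:type", "mo:MusicArtist"),
}
site_b = {
    ("ex:artist/radiohead", "ex:appeared_on", "ex:programme/later"),
}

# No API keys, no bespoke formats: merging the two datasets is a set union.
merged = site_a | site_b

def describe(subject):
    """Collect every statement either site makes about `subject`."""
    return sorted((p, o) for s, p, o in merged if s == subject)

for predicate, obj in describe("ex:artist/radiohead"):
    print(predicate, "->", obj)
```

Neither site gave anything up; each still publishes its own statements, but a third party can now answer questions that span both.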

People running Internet systems had to let their computer be used for forwarding other people’s packets, and connecting new applications they had no control over. People making web sites sometimes tried to legally prevent others from linking into the site, as they wanted complete control of the user experience, and they would not link out as they did not want people to escape. Until after a few months they realized how the web works. And the re-use kicked in. And the payoff started blowing people’s minds.

Because the Internet is a generative system, it has a different philosophy from most other data discovery systems and APIs (including some that are built with Internet technologies), as Ed Summers explains:

…which all differ in their implementation details and require you to digest their API documentation before you can do anything useful. Contrast this with the Web of Data which uses the ubiquitous technologies of URIs and HTTP plus the secret sauce of the RDF triple.

They also often require the owner of the service or API to give permission for third parties to use those services, often mediated via API keys. This is bad: had the Web, or the Internet before it, adopted a similar approach rather than the generative one it did, we would not have seen the level of innovation we have; and as a result we would not have enjoyed the financial, social and political benefits we have derived from it.

Of course there are plenty of examples of people working with the web of documents – everything from 800lb gorillas like Google through to sites like After Our Time and Speechification – the latter two providing users with a new and distinctive service while also helping to drive traffic to, and raise brand awareness for, the BBC. Just think what would also be possible if transcripts, permanent audio and research notes were made available not only as HTML but also as RDF, joining content inside and outside the BBC to create a system with what Zittrain calls “a system’s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

Is Apple addictive?

With the Macworld Expo upon us the usual hype and speculation are again rife, and the coming weeks will no doubt see the regular postmortems. While I don’t wish to join this particular line of speculation, I would like to consider why it happens: not why Apple is secretive, but why there is such an industry in speculation. What is it that makes so many people wish to hypothesise about what Apple may or may not do?

I’m currently reading “Everything Bad is Good for You” by Steven Johnson, which is proving to be an entertaining read. More to the point, in the opening chapter he discusses how games such as EverQuest, SimCity and Ultima manage to get kids to learn without realising they are learning, and why people stick with something that (to an outsider) appears repetitive and frustrating. He suggests that a game’s power to captivate lies in its ability to tap into the brain’s natural reward circuitry – specifically, the neurotransmitter dopamine interacting with the part of the brain known as the nucleus accumbens:

“The dopamine system is a kind of accountant: keeping track of expected rewards, and sending out an alert – in the form of lowered dopamine levels – when those rewards don’t arrive as promised. When a pack-a-day smoker deprives himself of his morning cigarette; when the hotshot Wall Street trader doesn’t get the bonus he was planning on; when the late-night snacker opens the freezer to find someone’s pilfered all the Ben & Jerry’s – the disappointment and craving these people experience is triggered by lowered dopamine levels.

The neuroscientist Jaak Panksepp calls the dopamine system the brain’s ‘seeking’ circuitry, propelling us to seek out new avenues for reward in our environment. Where our brain wiring is concerned, the craving instinct triggers a desire to explore. The system says, in effect: ‘Can’t find the reward you were promised? Perhaps if you just look a little harder you’ll be in luck – it’s got to be around here somewhere.’”

Like gamers, Wall Street traders and late-night snackers, are Apple fans driven by their dopamine system to seek out the next fix? Perhaps the lack of official information from Apple drives fans to explore and speculate – until, of course, all is revealed at the next Expo.