
Publishing to the iPad

NPG recently launched a new iPad app, Nature Journals – an app that allows us to distribute journal content to iPad users. I thought it might be interesting to highlight a few of the design decisions we took and discuss why we took them.

"Magazines to Read" by Long Nguyen. Some rights reserved.

“Magazines to Read” by Long Nguyen. Some rights reserved.

When most publishers make an iPad magazine they tend to design a skeuomorphic digital facsimile of their printed magazine – they build in lots of interactive features, but they build it using production processes similar to print’s and make it feel like a print magazine. They lay out each page (actually they need to lay out each page twice, once for landscape and once for portrait view) and then produce a big file to be distributed via Apple’s App Store.

This approach feels very wrong to me. For starters it doesn’t scale well – every issue needs a bunch of people to lay out and produce it; from an end user’s point of view they get a very big file, and I’ve seen nothing to convince me most people want all the extra stuff; and from an engineering point of view the lack of separation of concerns worries me. I just think most iPad magazines are doing it wrong.

Now, to be clear, I’m not for a moment suggesting that what we’ve built is perfect – I know it’s not – but I think, I hope, we’re on the right track.

So what did we do?

Our overarching focus was to create a clean, uncluttered user experience. We didn’t want to replicate print, nor replicate the website; instead we wanted to take a path that focused on the content at the expense of ‘features’, while giving the reader the essence of the printed journals.

This meant we wanted decent typography, enough branding to connect the user to the journal but no more, and the features we did build had to be justified in terms of benefits to a scientist’s understanding of the article. And even then we pushed most of the functionality away from the forefront of the interface so that the reader hopefully isn’t too aware of the app. The best app, after all, is no app.

In my experience most publishers tend to go the other way (although there are notable exceptions) – most iPad magazines have a lot of app and a lot of bells and whistles, so many features, in fact, that many magazines need an instruction manual to help you navigate them! That can’t be right.

As Craig Mod put it – many publishers build a Homer.

The Homer

When Homer Simpson was asked to design his ideal car, he made The Homer. Given free rein, Homer’s process was additive. He added three horns and a special sound-proof bubble for the children. He layered more atop everything cars had been. More horns, more cup holders.

We didn’t want to build a Homer! We tried to include features only where they really benefit the reader or their community. For example, we built a figure viewer which lets the reader see the figures within the article at any point and tap through to higher-resolution images, because that’s useful.

You can also bookmark or share an article, or download the PDF, but these are only there if you need them. The normal reading behaviour assumes you don’t need this stuff, and so it is hidden away (until you tap the screen to pull it into focus).

Back to the content…

It’s hard to build good automated pagination unless the content is very simple and homogeneous. Beautiful, fast pagination for most content is simply too hard unless you build each page by hand. Nasty, poorly designed and implemented pagination doesn’t help anyone. We therefore decided to go with scrolling within an article and pagination between articles.

Under the hood we wanted to build a system that would scale, could be automated and ensured separation of concerns.

On the server we therefore render EPUB files from the raw XML documents in MarkLogic, bundle those files along with all the images and other assets into a zip file, and serve that to the iPad app.
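By way of illustration, the bundling step itself is conceptually very simple. Here’s a minimal sketch in Python (not our production code – the directory layout and file names are made up) that zips a rendered issue, articles, figures, stylesheets and all, into a single package for the app to download:

    import zipfile
    from pathlib import Path

    def bundle_issue(rendered_dir: Path, bundle_path: Path) -> None:
        """Zip a rendered issue (XHTML articles, figures, CSS) into one file
        the iPad app can download for offline reading."""
        with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as bundle:
            for asset in sorted(rendered_dir.rglob("*")):
                if asset.is_file():
                    # Store paths relative to the issue root so links between
                    # articles, figures and stylesheets still resolve on the device.
                    bundle.write(asset, asset.relative_to(rendered_dir))

    # e.g. bundle_issue(Path("build/issue"), Path("dist/issue.zip"))

The interesting work, rendering the article content from the source XML, happens upstream of this; the point is that packaging and serving an issue needs no manual layout at all.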

From the reader’s point of view this means they can download whole issues for offline reading, and the total package is quite small – an issue of Nature is c. 30MB and the review journals can be as small as 5MB; by way of comparison, Wired is c. 250MB.

From our point of view the entire production is automated – we don’t need people laying out every page or issue. It also means that as we improve the layout we can roll out those improvements to all the articles – both new content and the archive (although users would need to re-download the content).

Our development manifesto

Manifestos are quite popular in the tech community — obviously there’s the Agile Manifesto, I’ve written before about the kaizen manifesto, and then there’s the Manifesto for Software Craftsmanship. They all try to put forward a way of working, a way of raising professionalism and a way of improving the quality of what you do and build.

If at first you don't succeed - call an airstrike.

Banksy by rocor, some rights reserved.

Anyway, when we started work on the BBC’s Nature site we set out our development manifesto. I thought you might be interested in it:

  1. Persistence — only mint a new URI if one doesn’t already exist: once minted, never delete it
  2. Linked open data — data and documents describe the real world; things in the real world are identified via HTTP URIs; links describe how those things are related to each other.
  3. The website is the API
  4. RESTful — the Web is stateless, work with this architecture, not against it.
  5. One Web – one canonical URI for each resource (thing), dereferenced to the appropriate representation (HTML, JSON, RDF, etc.) – see the sketch after this list.
  6. Fix the data, don’t hack the code
  7. Books have pages, the web has links
  8. Do it right or don’t do it at all — don’t hack in quick fixes or ‘tactical solutions’; they are bad for users and bad for the code.
  9. Release early, release often — small, incremental changes are easy to test and prove.
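To make items 3 to 5 a little more concrete, here’s a minimal sketch of what ‘one canonical URI, many representations’ looks like in practice. It’s written in Python with Flask purely for illustration – the framework, the /species URIs and the data are my assumptions, not the stack we actually used:

    from flask import Flask, jsonify, render_template_string, request

    app = Flask(__name__)

    # One canonical URI per thing; only the representation varies.
    SPECIES = {"badger": {"label": "Badger", "family": "Mustelidae"}}

    @app.route("/species/<key>")
    def species(key):
        thing = SPECIES.get(key)
        if thing is None:
            return "Not found", 404
        # Dereference to whichever representation the client asked for.
        best = request.accept_mimetypes.best_match(
            ["text/html", "application/json", "text/turtle"])
        if best == "application/json":
            return jsonify(thing)
        if best == "text/turtle":
            rdf = ('</species/%s#thing> '
                   '<http://www.w3.org/2000/01/rdf-schema#label> "%s" .'
                   % (key, thing["label"]))
            return rdf, 200, {"Content-Type": "text/turtle"}
        # Default to HTML for browsers: the website is the API.
        return render_template_string(
            "<h1>{{ t.label }}</h1><p>Family: {{ t.family }}</p>", t=thing)

    if __name__ == "__main__":
        app.run()

The URI stays the same whoever (or whatever) is asking; the server just varies the representation, and each request stands on its own with no session state.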

It’s worth noting that we didn’t always live up to these standards — but at least when we broke our rules we did so knowingly and had a chance of fixing them at a later date.

The web as an ethical layer

If you’ve been around the web for any length of time you’ve probably seen a diagram similar to this:

It’s the classic internet hourglass with signal carriers down the bottom, IP in the middle and applications up top. You can see the World Wide Web perched atop HTTP, one more technical layer in a technical layer cake.

It’s maybe because I’m not that technical, but I’ve never really seen the web as a technical layer on top of the internet. In terms of technical design there’s not that much there. The design decisions of the web always seemed more political / ethical than purely technical. So, at least in my opinion, the web is a political / ethical layer above the internet.

As Ben Ward recently pointed out, we tend to obsess over new standards – OAuth, OpenID, Contacts, Connect, Geolocation, microformats, widgets, AJAX, HTML5, local storage, SPDY, ‘The Cloud’ – and lose track of what the web is actually for. In Ben’s words: articles and poems and pictures and movies and music, everywhere! How brilliant is that! Or, put in my simple terms, the point of the web is universal access to information. Everything else is just window dressing and mostly leads to restrictions. I think just about every blog post I’ve written includes this quote from Tim Berners-Lee. Now doesn’t seem like a good time to break that habit, so:

The Web is designed as a universal space. Its universality is its most important facet. I spend many hours giving talks just to emphasize this point. The success of the Web stems from its universality as do most of the architectural constraints.

And that’s more important than any tech spec. The web isn’t politically / ethically neutral and wasn’t designed by people who are / were politically / ethically neutral. Which is why the most important design decision of the web was statelessness and the most important architectural style is REST. Statelessness means everyone has equal access to information regardless of age or gender or ethnic background or physical location or physical ability etc etc etc. Because the web doesn’t care about who you are, only what you asked for.

Which is also why accessibility really matters. Anything that restricts access to information to any one group is bad. Which means accessibility also means mobile views (because that’s the main access point for many people in “less developed” countries) and data views (because for some people the access they want is to the raw data).

And it’s why anything that attempts to impose state on top of the web is, in general, bad. It just adds friction and any friction reduces people’s access to information. So walled gardens, paywalls, anything that requires you to log in, anything that forces you to accept cookies, anything that needs to know something about you before it gives you information.

At the risk of descending properly into freetard territory the other great thing about the web is once you’ve found what you’re looking for nothing is locked down (other than a few clumsy attempts at DRM). More or less anything you find (text, images, a/v files) can be taken away and played with and recontextualised and republished and taken again…

Which sometimes is bad. Like when someone posts a picture of their friend pulling a silly face to Flickr. And fails to understand licensing and makes it available for commercial use. And some company takes it, adds a demeaning strapline and posts it on billboards across Australia, causing some degree of pain and distress.

But more often it’s a good thing. Because I could search for the TimBL quote above, find it, copy it and paste it in here. And when my daughter does her homework (actually she’s 4 so doesn’t really have any yet) she can go to the web and take a picture and paste it into the story she’s writing. And sometimes she’ll probably steal and sometimes probably give credit but in general what you can find you can borrow and take into your real life and reshape and recontextualise and make new meanings. And that’s good.

And there’s other ways it’s good. The election day Sun front page barely left the presses before pictures of it were winging round the web. And then people took that image, downloaded it (because the web makes that easy – it didn’t have to but it does), modified it and uploaded new versions. Which people commented on and talked about so more people made more versions and talked more about press bias and made jokes. And I think that’s healthy.

There have been occasional attempts to fragment the web. To create an academic space or a commercial space or a copyleft space or a ‘safe’ space. Apple’s shiny iThing app store model is just the latest attempt. Usually the motivations have been honourable. But the effect is always to create something that’s less free than the open web; to take a public space and turn it into a policed enclosure. Or maybe it remains a public space only in the way a shopping mall might be thought to be a public space: owned and controlled and often privately policed. Policing access is dangerous because it removes universality. And policing re-contextualisation is dangerous because it takes away the right to fair usage (my daughter’s homework…). But the people who really do want to steal will always find a way round any form of rights restriction that’s embodied in code and not in community norms. So you punish the “fair users” in an attempt to restrict the real “criminals” who get round the restrictions anyway. And end up building something that just frustrates.

So I really think the web (not the internet which is really just some pipes) is the greatest thing we’ve ever created. More than telly, more than radio, more than newspapers, more than books. Because it’s universal and because it’s open for reuse.

But there are problems. Anything that requires a computer and a phone line (or at least a web capable mobile) can’t quite be universal unless everyone has those things (or lives in a community with shared access to those things.) There’s a lot of talk about digital inclusion, about taxes to fund broadband and about universal access to the web. But it all misses the point. It was never just about having access to other people’s information. It was always about everybody, everywhere having the ability to add their thoughts, the things they know, to the web. Treating digital inclusion as a question of connecting pipes to homes is an easy mistake to make because it follows established patterns of water and gas and electricity and television aerials. But the web was never designed to be a broadcast / distribution mechanism. Digital inclusion doesn’t just mean everyone needs to have a receiver on their roof; it means they need access to a transmitter too. Without the ability to transmit, to publish, people just become passive consumers of other people’s information. And digital inclusion has to include the ability to produce as well as consume.

So physical access is only the first hurdle. Once you’re over that, the barrier to publishing is still too high. Owning your own publishing space means you have to start understanding domain names and DNS and server set-ups and code installs and updates. Which for most people is just too difficult. It’s certainly too difficult for me, which is why I end up publishing this here (wherever here turns out to be). Luckily “social media” sites arrived to fill the skills gap. But social media is a bit of a misnomer. The web was always supposed to be social and always meant to be open to contributions from everyone. The innovation of social media wasn’t really socialness. From Flickr to WordPress to Blogger to YouTube to Twitter, the real innovation was the commoditisation of publishing technology. Now everyone could share what they knew. But at a price.

The most obvious price of commodity publishing is loss of control over your content. In almost all cases the hosting organisation will take a permissive licence on your content:

a worldwide, non-exclusive, royalty-free, transferable licence (with right to sub-licence) to use, reproduce, distribute, prepare derivative works of, display, and perform that User Submission in connection with the provision of the Services and otherwise in connection with the provision of the Website and YouTube’s business, including without limitation for promoting and redistributing part or all of the Website (and derivative works thereof) in any media formats and through any media channels

where for YouTube you can pretty much substitute any website with user submissions, from Facebook to the BBC. It means you retain copyright, but they give themselves so many rights that your copyright is virtually useless. Content acquisition on the cheap. It’s a bigger problem than digital literacy because there’s no point educating people about the issues if they still can’t publish and avoid them.

The second major problem is privacy. You’d have had to be living under a stone not to notice that privacy has become the big issue of the year. Facebook in particular have been regularly flamed for their ever-decreasing privacy circle. Now they’re stepping outside the realms of knowing about your social network, your status and your photos and attempting to own the graph of what you like from elsewhere on the web. There are, as ever, arguments on both sides, but the only one really worth reading is danah boyd’s Privacy and Publicity in the Context of Big Data. There’s too much in there to really sum up in a one-liner, but my attempt would be: privacy issues aren’t about how much information you share; they’re about the gap between your perception of the context of sharing and the reality. Extrapolating from that, once you trust your personal information to “the cloud” you lose control over the context of use. Your data can be meshed with other data in ways you didn’t even begin to anticipate. And the rules around context can be nudged in whatever direction most benefits the cloud service.

It’s like building a giant Tesco loyalty card in the sky. Clive Humby, chairman of Dunnhumby (the people who run the real Tesco Clubcard), once said:

credit-card data tells you how they live generally, the supermarket data tells you their motivations, the media data tells you how to talk to them. If you have those three things, you’re in marketing nirvana

The social media “cloud” seems uncomfortably like Mr Humby’s dream web. And unlike Tesco it doesn’t even pay you for your data. Obviously there are worse fates than being the target of one of Clive’s targeted mail drops. Liberal democracies tend to assume they’ll always be liberal democracies. History seems to suggest otherwise. If the worst were to happen do you really want all that personal data out there outside your control? You might end up with more to worry about than whether your prospective boss sees you drunk on Facebook.

Is Clive’s web the one we really want to build? Or is there a fairer, more distributed solution that allows everyone to share the things they know on their terms? With the power to publish, redact, edit… I’m probably in danger of jumping on Steven Pemberton‘s bandwagon (who’s been saying this for several years now) but until everyone owns and controls their own publishing space we won’t really have built the web. And (with my day job hat on) until “the public” can “broadcast” without fear or favour we won’t really have built public service broadcasting.

I’ll leave you with this:

It’s the original logo for the World Wide Web, drawn by its co-creator Robert Cailliau. Until Dan Brickley pointed me at it I wasn’t even aware of its existence. The most important point is that it doesn’t attempt to qualify the ‘us’; it just means everyone.

In my dream world everybody working with the web in any capacity would have this stapled above their desk. So when all the talk of product planning and sprint planning and deployment and test driven development and check ins and check outs and branded experience and user stories gets too tiring you can look up and remember why we’re doing this.
