The problem with breadcrumb trails

The other day I was chatting with some of the designers at work about secondary navigation and the subject of breadcrumb trails came up. Breadcrumb trails are those bits of navigation summed up by Jakob Nielsen as:

a single line of text to show a page’s location in the site hierarchy. While secondary, this navigation technique is increasingly beneficial to users.

and illustrated on Wikipedia by:

Home page > Section page > Subsection page

For reasons which will hopefully become clear, the whole subject of breadcrumb trails vexes me, and rather than shout into Twitter I thought I’d type up some thoughts. So here goes.

Types of breadcrumb trail

There are 2 main types of breadcrumb trail:

  • path based trails show the path the user has navigated through to arrive at the current page
  • location based trails show where the page is located in the ‘website hierarchy’

Both of these are problematic so let’s deal with each in turn.

Path based breadcrumb trails

The first thought most people have when confronted by the concept of a breadcrumb trail is Hansel and Gretel. In the story the children were led into the forest and, as they walked, dropped a trail of breadcrumbs. The intention was to retrace their steps out of the forest by following the trail of breadcrumbs (at least until the birds ate them).

The important point is that Hansel and Gretel weren’t conducting a topographical study of the forest. The trail they laid down was particular to their journey. If Alice and Bob had been wandering round the same forest on the same day they might have left a trail of cookie crumbs or even traced out their journey with string. The 2 journeys might have crossed or merged for a while but each trail would be individual to the trail maker(s).

The path based breadcrumb trail is the same principle, but traced out in the pages the user has visited to get to the page they’re on now. So what’s wrong with that?

If you’re a user experience person you’ve probably heard developers talking about REST and RESTful APIs and possibly thought REST was just techy stuff that doesn’t impact on UX. This would be wrong. From a developer point of view REST provides an architectural style for working with the grain of the web. And the grain of the web is HTTP and HTTP is stateless.

So what does that mean? It means that when you ask for a page across the web, the only data sent in the request is, in effect, ‘(HTTP) GET me this resource’ plus a tiny bit of incidental header information (which representation / format you want the resource in – desktop HTML, mobile HTML, RSS – and which languages you prefer, etc.). When the server receives the request it doesn’t know, or need to know, anything about the requester. In short HTTP does not know who you are, does not know ‘where’ you are and does not care where you’ve come from.
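
To make that concrete, here’s a minimal sketch of such a request using Python’s standard library (the host and path are placeholders, not a real endpoint). Nothing in it identifies the requester or where they’ve come from – just the resource wanted and a few hints about the preferred representation.

    # A minimal sketch of a stateless HTTP GET using only the standard library.
    # The host and path are placeholders, not a real endpoint.
    import http.client

    conn = http.client.HTTPConnection("example.org")
    conn.request(
        "GET",
        "/episodes/42",
        headers={
            # Representation hints: which format and language we'd prefer.
            "Accept": "text/html,application/xhtml+xml",
            "Accept-Language": "en-GB,en",
        },
    )
    response = conn.getresponse()
    # The server answers from the request alone: it neither knows nor needs to
    # know who we are, or which page we were looking at before this one.
    print(response.status, response.getheader("Content-Type"))
    conn.close()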

There are various reasons given for this design style; some of them technical, some of them ethical. As ever the ethical arguments are far more interesting so:

The Web is designed as a universal space. Its universality is its most important facet. I spend many hours giving talks just to emphasize this point. The success of the Web stems from its universality as do most of the architectural constraints.

is my favourite quote from Tim Berners-Lee. It’s the universality of the web that led to the design decision of stateless HTTP and the widespread adoption of REST as a way to work with that design. Put simply anybody with a PC and a web connection can request a page on the web and they’ll get the same content; regardless of geographic location, accessibility requirements, gender, ethnic background, relative poverty and all other external factors. And it’s the statelessness of HTTP that allows search bots to crawl (and index) pages just like any other user.

You can of course choose to work against the basic grain of the web and use cookies to track users and their journeys across your site. If you do choose that route then it’s possible to dynamically generate a path based breadcrumb trail unique to that user’s navigation path. But that functionality doesn’t come out of the box; you’re just giving yourself more code to write and maintain. And that code will just replicate functionality already built into the browser: the back button and browser history.
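
Just to make the trade-off concrete, here’s a rough sketch (all names made up) of the kind of state you’d be signing up to maintain – a per-session trail keyed off a cookie, which is really just a poor re-implementation of the browser’s own history:

    # Hypothetical sketch: the extra state you'd have to maintain to build a
    # path based breadcrumb trail. None of this comes 'out of the box'.
    import uuid

    session_trails = {}  # session id -> list of pages visited, held in memory

    def record_visit(cookies, current_url):
        """Track the user's journey against a session cookie and return the
        trail so far - essentially re-implementing the browser's history."""
        session_id = cookies.get("session_id") or str(uuid.uuid4())
        trail = session_trails.setdefault(session_id, [])
        if not trail or trail[-1] != current_url:
            trail.append(current_url)
        return session_id, trail

    # Two users requesting the same URL now see different 'breadcrumbs', and a
    # crawler (which keeps no cookies) only ever sees a trail of length one.
    sid, trail = record_visit({}, "/toys/garden-swing")
    _, trail = record_visit({"session_id": sid}, "/offers")
    _, trail = record_visit({"session_id": sid}, "/toys/garden-swing")
    print(" > ".join(trail))  # /toys/garden-swing > /offers > /toys/garden-swing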

It’s possibly also worth pointing out that any navigation links designed to be seen by a single user are not, by definition, seen by any other user. This includes search bots which are to all intents and purposes just very dumb users. Any effort you put into creating links through user specific path based breadcrumbs will not be seen or followed by Google et al so will accrue no extra SEO juice and won’t make your content any more findable by other users. Besides which…

…it’s really not about where you’ve come from

One of my main bugbears with usability testing is the tendency to sit the user down in front of a browser already open at the homepage of the site to be tested. There’s an expectation that user experience is a matter of navigating hierarchies from homepage to section page to sub-section page to content page. If this were true then path based breadcrumb trails might make some sense.

But in reality how many of your journeys start at a site homepage? And how many start from a Google search or a blog post link or an RSS feed or a link shared by friends in Facebook or Twitter? You can easily find yourself deep inside a site structure without ever needing to navigate through that site structure. In which case a path based trail becomes meaningless.

In fairness Jakob Nielsen points out pretty much the same thing in the ‘Hierarchy or History’ section of his post:

Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web’s second-most-used feature.

A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn’t offer much help.

Finally, a history trail is useless for users who arrive directly at a page deep within the site.

All this is true but it’s only part of the truth. The real point is that path based breadcrumb trails work against the most fundamental design decision of the web: universality through statelessness. By choosing to layer state behaviour over the top of HTTP you’re choosing to pick a fight with HTTP. As ever it’s best to pick your fights with care…

Location based breadcrumb trails

In the same post Jakob Nielsen goes on to say:

breadcrumbs show their greatest usability benefit [for users arriving directly at a page deep within a site], but only if you implement them correctly – as a way to visualize the current page’s location in the site’s information architecture. Breadcrumbs should show the site hierarchy, not the user’s history.

But what’s meant by ‘site hierarchy’?

Hierarchy and ‘old’ media

Imagine you’re in proud possession of a set of box sets of Doctor Who series 1-4. Each box has 4, 5 or 6 DVDs. Each DVD has 2 or 3 episodes. Each episode has 10 or so chapters:

This structure is obviously mono-hierarchical; each thing has a single parent. So the chapter belongs to one episode, the episode is on one disc, the disc is in one box set. It’s the same pattern with tracks on CDs, chapters in books, sections in newspapers…

With ‘old’ media the physical constraints of the delivery mechanisms enforce a mono-hierarchical structure. Which makes it easy to signpost to users ‘where’ they are. An article in a newspaper can be in the news section or the comment section or the sport section or the culture section but it’s only ever found in one (physical) place (unless there’s a cock-up at the printing press). So it gets an appropriate section banner and a page number and a page position.

But how does this map to the web?

Files and folders, sections and subsections, identifiers and locations

The first point is people like to organise things. And they do this by categorising, sub-categorising and filing appropriately, dividing up the world into sets and sub-sets and sub-sub-sets… Many of the physical methods of categorisation have survived as metaphors into the digital world. So we have folders and files and inboxes and sent items and trash cans.

In the early days of the web the easiest way to publish pages was to stick a web server on a Unix box and point to the folder you wanted to expose. All the folders inside that folder and all the folders inside those folders and all the files in all the folders were suddenly exposed to the world via HTTP. And because of the basic configuration of web servers they were exposed according to the folder structure on the server. So a logo image filed in a folder called new, filed in a folder called branding, filed in a folder called images would get the URL /images/branding/new/logo.jpg. It was around this time that people started to talk about URLs (mapping resources to document locations on web servers) rather than HTTP URIs (file location independent identifiers for resources).
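
You can still see this behaviour in any stock static file server. Python’s built-in one, for instance, simply exposes whatever folder it’s started in, so a file saved at images/branding/new/logo.jpg comes out at /images/branding/new/logo.jpg (the paths here are just the hypothetical ones from the example above):

    # Serving a folder tree over HTTP with Python's built-in static server.
    # The URL space is simply the folder structure: a file saved at
    # images/branding/new/logo.jpg is exposed at /images/branding/new/logo.jpg.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # SimpleHTTPRequestHandler maps each request path onto the directory the
    # server is started in - one file, one folder, one 'location' based URL.
    HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()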

Obviously file and folder structures are also mono-hierarchical; it’s not possible for a file to be in 2 folders simultaneously. And the easiest and most obvious way to build site navigation was to follow this hierarchical pattern. So start at the home page, navigate to a section page, navigate to a sub-section page and navigate to a ‘content’ page; just as you navigate through folders and files on your local hard drive. Occasionally some sideways movement was permitted but mostly it was down, down, up, down….

Many of the early battles in Information Architecture were about warping the filing systems and hierarchies that made sense inside businesses into filing systems and hierarchies that made sense to users. But it was still about defining, exposing and navigating hierarchies of information / pages. In this context the location based breadcrumb trail made sense. As Nielsen says, the job of the location based breadcrumb trail is to show the site hierarchy, and if you have a simple, well-defined hierarchy why not let users see where they are in it? So location based breadcrumb trails make sense for simple sites. The problem comes with…

Complex sites and breadcrumb trails

Most modern websites are no longer built by serving static files out of folders on web servers. Instead pages are assembled on the fly as and when users request them: data is pulled out of a database, content out of a CMS, munged together with feeds from other places and the whole lot glued together with some HTML and a dash of CSS. (Actually, when I say most I have no idea of the proportions of dynamic vs static websites, but all the usual suspects (Amazon, Facebook, Twitter, Flickr etc) work dynamically.) Constructing a site dynamically makes it much easier to publish many, many pages; both aggregation pages and content pages. The end result is a flatter site with more complex polyhierarchical structures that don’t fit into the traditional IA discipline of categorisation and filing.
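
As a toy sketch of that assembly step (every function and data source here is made up), the page is composed per request from structured data, a content fragment and a feed – nothing ties it to a single folder or a single parent section:

    # Hypothetical sketch of dynamic page assembly: structured data, CMS copy
    # and an external feed glued together per request. No folder hierarchy.
    def fetch_product(product_id):       # stand-in for a database query
        return {"id": product_id, "name": "Garden swing", "price": "£89"}

    def fetch_copy(product_id):          # stand-in for a CMS lookup
        return "<p>A sturdy swing for the garden.</p>"

    def fetch_related_feed(product_id):  # stand-in for an external feed
        return ["/products/slide", "/products/sandpit"]

    def render_page(product_id):
        product = fetch_product(product_id)
        related = "".join(f'<li><a href="{u}">{u}</a></li>'
                          for u in fetch_related_feed(product_id))
        return (f"<h1>{product['name']} - {product['price']}</h1>"
                f"{fetch_copy(product_id)}<ul>{related}</ul>")

    print(render_page("garden-swing"))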

The problem is that wholly contained sets within sets within sets are a bad way to model most things. Real things in real life just don’t lend themselves to being described as leaves in a mono-hierarchical taxonomy. It’s here that I part company with Nielsen who, in the same post, goes on to say:

For non-hierarchical sites, breadcrumbs are useful only if you can find a way to show the current page’s relation to more abstract or general concepts. For example, if you allow users to winnow a large product database by specifying attributes (of relevance to users, of course), the breadcrumb trail can list the attributes that have been selected so far. A toy site might have breadcrumbs like these: Home > Girls > 5-6 years > Outdoor play

There’s an obvious problem here. In real life sets are fuzzy and things can be ‘filed’ into multiple categories. Let’s pretend the toy being described by Nielsen is a garden swing that’s also perfectly suited to a 5-6 year old boy. In this case journeys to the swing product page might be ‘Home > Girls > 5-6 years > Outdoor play’ or ‘Home > Boys > 5-6 years > Outdoor play’. If there’s an aggregation of all outdoor playthings there might be journeys like ‘Home > Outdoor play > Girls > 5-6 years’ and ‘Home > Outdoor play > Boys > 5-6 years’. If the swing goes on sale there might be additional journeys like ‘Home > Offers > Outdoor play’ etc.

Now it’s not clear from the quote whether Nielsen is only talking about breadcrumb trails on pages you navigate through on your way to the product page or also including the product page itself. But the problem remains. If the garden swing can be filed under multiple categories in your ‘site structure’, which ‘location’ does the product page breadcrumb trail display? There are 4 possible ways to deal with this:

  • drop the breadcrumb trail from your product pages. But the product pages are the most important pages on the website. They’re the pages you want to turn up in search results and be shared by users. I can’t imagine it was Nielsen’s intent to show crumbtrails on aggregation pages but not on content pages so…
  • make the breadcrumb trail reflect the journey / attribute selection of the current user. Unless I’m misreading / misunderstanding Nielsen this seems to be what he’s suggesting by the breadcrumb trail can list the attributes that have been selected so far. Quite how this differs from a path based breadcrumb trail confuses me. Again you’re serving one page at one URI that changes state depending on where the user has ‘come from’. Again you’re choosing to fight the statelessness of HTTP. And again the whole thing fails if the user has not navigated to that page via your ‘site structure’ but instead arrived via Google or Bing or a Twitter link or a link in an email…
  • serve (almost) duplicate pages at every location the thing might be categorised under, with the breadcrumb trail tweaked to reflect ‘location’. For all kinds of reasons (not least your Google juice and general sanity) serving duplicate pages is bad. It’s something you can do but it really, really doesn’t come recommended.
  • serve a single page at a single RESTful URI and make a call about which of the many potential categories is the most appropriate (there’s a sketch of this after the list).
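
As a rough illustration of that last option (all the data here is hypothetical): the swing is tagged with every category it genuinely belongs to, but one chain is nominated as canonical and only that chain drives the crumbtrail on the product page, so every visitor – and every crawler – sees the same thing at the same URI.

    # Hypothetical sketch of the last option: one product, one URI, many
    # categories - but a single nominated 'canonical' chain for the crumbtrail.
    product = {
        "name": "Garden swing",
        "uri": "/products/garden-swing",
        "categories": [
            ["Home", "Girls", "5-6 years", "Outdoor play"],
            ["Home", "Boys", "5-6 years", "Outdoor play"],
            ["Home", "Outdoor play"],
            ["Home", "Offers", "Outdoor play"],
        ],
        # An editorial (or rule based) call, not a reflection of the journey.
        "canonical_category": ["Home", "Outdoor play"],
    }

    def breadcrumb(thing):
        # Every visitor - and every crawler - sees the same trail at this URI.
        return " > ".join(thing["canonical_category"] + [thing["name"]])

    print(breadcrumb(product))  # Home > Outdoor play > Garden swing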

The last of these options can be seen in use on The Guardian website, which attempts to replicate, in an inherently non-linear web form, the linear content sectioning that works so well in the print edition. So the Chelsea stand by John Terry and insist he took no money article has a location breadcrumb trail of:

whereas the High court overturns superinjunction granted to England captain John Terry article has a location breadcrumb trail of:

At some point in the past it’s possible (probable?) that the superinjunction story was linked to from the homepage, the sport page, the Chelsea page, the John Terry page etc. But someone has made the call that although the article could be filed under Football or Chelsea or John Terry or Press freedom it’s actually ‘more’ a press freedom story than it is a John Terry story.

The point I’m trying to make is that breadcrumb design for a non-hierarchical site is tricky. It’s particularly tricky for news and sport where a single story might belong ‘inside’ many categories. But if you’re lucky…

It isn’t about ‘site structure’, it’s about ‘thing structure’

Traditional IA has been about structuring websites so that journeys through the pages of those sites make the most sense to the most users. The Linked Data approach moves away from that, giving URIs to real life things, shaping pages around those things and promoting journeys that mirror the real life connections between those things.

Two examples are BBC Programmes and BBC Wildlife Finder. Neither of these sites is hierarchical and the ontologies they follow aren’t hierarchical either. An episode of Doctor Who might be ‘filed’ under Series 2 or Drama or Science Fiction or programmes starring David Tennant or programmes featuring Daleks or programmes on BBC Three on the 4th of February 2010. So again location based breadcrumb trails are tricky. But, as on The Guardian, one of the many possible hierarchies is chosen to act as the breadcrumb trail:

which is echoed in the navigation box on the right of the page:

The same navigation box also allows journeys to previous and next episodes in the story arc:

The interesting point is that the breadcrumb links all point to pages about things in the ontology – not to category / aggregation pages. So it’s less about reflecting ‘site structure’ and more about reflecting the relationship between real world things. Which is far easier to map to a user’s mental models.
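
A rough sketch of the idea – this is illustrative, not the real /programmes data model – where the breadcrumb is derived by walking the relationships between things (episode, series, brand), so each crumb is itself a page about a thing:

    # Illustrative sketch only (not the real /programmes data model): each
    # crumb is a thing with its own URI, and the trail is simply the chain of
    # relationships between things, not a list of site sections. Previous /
    # next episode navigation falls out of the same relationships.
    brand = {"title": "Doctor Who", "uri": "/programmes/doctor-who", "parent": None}
    series = {"title": "Series 2", "uri": "/programmes/doctor-who-series-2", "parent": brand}
    episode = {"title": "Episode 4", "uri": "/programmes/episode-pid", "parent": series}

    def thing_breadcrumb(thing):
        """Walk the parent relationships and link each crumb to a thing page."""
        crumbs = []
        while thing is not None:
            crumbs.append((thing["title"], thing["uri"]))
            thing = thing.get("parent")
        return list(reversed(crumbs))

    for title, uri in thing_breadcrumb(episode):
        print(f"{title} -> {uri}")
    # Doctor Who -> /programmes/doctor-who
    # Series 2 -> /programmes/doctor-who-series-2
    # Episode 4 -> /programmes/episode-pid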

Wildlife Finder is similar but subtly different. The location breadcrumb at the top of the page is a reflection of ‘site structure’. In the original Wildlife Finder it didn’t exist, but initial user testing found that many people felt ‘lost’ in the site structure so it was added in. Subsequent user testing found that its addition solved the ‘lost’ problem. So in an input / output duality sense it’s primarily an output mechanism; it makes far more sense as a marker of where you are than as a navigation device to take you elsewhere.

Much more interesting is the Scientific Classification box which reflects ‘thing structure’ (in this case the taxonomic rank of the Polar Bear), establishes the ‘location’ of the thing the page is about and allows navigation by relationships between things rather than via ‘site structure’:

In summary

  • We need a new word for crumbtrails. Even seasoned UX professionals get misled by the Hansel and Gretel implications. Unfortunately ‘UX widgets that expose the location of the domain object in the ontology of things’ doesn’t quite cut it
  • Secondary navigation is hard; signposting current ‘location’ to a user is particularly hard. IAs need to worry as much about ‘thing structure’ as ‘site structure’
  • Building pages around things and building navigation around relationships between things makes life easier
  • HTTP and REST are not techy / developer / geeky things. They’re the fundamental building blocks on top of which all design and user experience is built

The mobile computing cloud needs OAuth

As Paul Miller notes, Cloud Computing is everywhere – we are pushing more and more data and services into the cloud. Particularly when accessed from mobile devices, this creates an incredibly powerful and useful user experience. I love it. The way that I can access all sorts of services from my iPhone means that an already wonderful appliance becomes way more powerful. But not all is well in the land of mobile-cloud computing; a nasty anti-pattern is developing. Thankfully there is a solution and it’s OAuth.

"Mobile phone Zombies" by Edward B. Used under licence.
"Mobile phone Zombies" by Edward B. Used under licence.

So what’s the problem then? Since Apple opened up the iPhone to third party developers we have seen a heap of applications that connect you to your online services – there are apps that let you upload photos to Flickr, post to Twitter, see what’s going on in Facebook land – all sorts of stuff. The problem is the way some of them are gaining access to these services: by making you enter your credentials in the application rather than seeking to authorise the application from the service.

Probably the best way to explain what I mean is to look at how it should work. The Pownce app is an example of doing it right, as is Mobile Foto – these applications rely on OAuth. This is how it works: rather than entering your user-name and password in the application, you are sent over to Safari to log into the website and from there you authorise (via OAuth) the application to do its thing.

This might not sound so great – you could argue that the user experience would be better if you were kept within the application. But that would mean that your login credentials would need to be stored on your ‘phone, and that means that you need to disclose those credentials to a third party (the folks that wrote the app).

By using OAuth you log into Flickr, Pownce etc. and from there authorise the application to use the site – your credentials are kept safe and if your iPhone gets stolen you can visit the site and disable access. Everything is where it should be and that means your login details are safe.
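
If you’re curious what that dance looks like under the hood, here’s a rough sketch of three-legged OAuth (1.0a, as it stood at the time). Every URL, key and token below is a placeholder rather than any real service’s API, and a proper client library handles rather more detail; the point is simply that the password only ever goes to the service itself and the app ends up holding a revocable token.

    # Rough sketch of the three-legged OAuth 1.0a flow. All URLs, keys and
    # tokens below are placeholders - consult the service's own documentation
    # for the real endpoints and parameters.
    import base64, hashlib, hmac, urllib.parse

    CONSUMER_KEY, CONSUMER_SECRET = "app-key", "app-secret"  # issued to the app

    def sign(method, url, params, token_secret=""):
        """HMAC-SHA1 request signing, as defined by OAuth 1.0a."""
        encode = lambda s: urllib.parse.quote(str(s), safe="-._~")
        param_string = "&".join(f"{encode(k)}={encode(v)}"
                                for k, v in sorted(params.items()))
        base_string = "&".join([method.upper(), encode(url), encode(param_string)])
        key = f"{encode(CONSUMER_SECRET)}&{encode(token_secret)}"
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    # Step 1: the app asks the service for a temporary request token by making
    # a signed call to the service's request-token endpoint (placeholder URL).
    params = {
        "oauth_consumer_key": CONSUMER_KEY,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": "1234567890",
        "oauth_nonce": "placeholder-nonce",
        "oauth_version": "1.0",
    }
    params["oauth_signature"] = sign(
        "POST", "https://service.example/oauth/request_token", params)
    request_token = "placeholder-request-token"  # returned by the service

    # Step 2: the user is bounced out to the service's own site (Safari on the
    # iPhone) to log in and approve the app - the password never touches the app.
    authorise_url = ("https://service.example/oauth/authorize?"
                     + urllib.parse.urlencode({"oauth_token": request_token}))
    print("Send the user to:", authorise_url)

    # Step 3: the app exchanges the approved request token (plus verifier) for
    # an access token, which the user can later revoke from the service's own
    # site - handy if the phone is lost or stolen.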

To be fair to the iPhone app developers, this type of delegated authorisation isn’t always possible. Twitter, for example, still hasn’t implemented OAuth and as a result if you want to use one of the growing number of iPhone Twitter apps you need to give up your user-name and password. I find this incredibly frustrating – especially from a service like Twitter where (according to Biz Stone, Twitter’s co-founder) “the API… has easily 10 times more traffic than the website“.

links for 2008-02-23

links for 2008-02-20

Links for 2008.01.15

» QR-Code enhanced scarf [via cookin’/relaxin]
Geeky clothing adverts for websites. QR codes are ‘physical hyperlinks’ – take a photo of the matrix code with your phone and visit the webpage.

» More info on those QR Codes
Pictographic representations of URLs.

» QR-Code Generator | QR-Code
Get your own QR codes here.

» Jonathan Ive’s passion for simplicity and honest design is rooted in the work of Dieter Rams [gizmodo via blackbeltjones]
His passion for “simplicity” and “honest design” is at the core of Dieter Rams’ 10 principles for good design. Both designers shape their products around the function with no artificial design, keeping the design honest.

» Looking increasingly likely that the UK government will repeal the law against blasphemy [davblog]
“Following a debate, the Justice Minister said that the government had “every sympathy for the case for formal abolition” and that, subject to a “short and sharp” consultation with the Anglican church, they intended to table their own abolition amendment.”

Web design 2.0 – it’s all about the resource and its URL

Jamie’s recent post about his work on the design of BBC’s /programmes service highlights an important trend in the design of modern web products. It’s all about the resource and its URL.

Topography thumbnail

Web site design used to be focused on the page and the sitemap – we assumed users would visit a site and browse around a bit while they were there – so sites wrapped their content in heavy branding and optimised their navigation around the ‘left hand nav’.

This sort of made sense if, as a publisher, you were dealing with a small number of pages, your paradigm for web publishing was the same as print publishing and your technology was limited to static files. However, it did force users into a ‘wagon wheel’ style of navigation: they would read something, navigate up to a top level index page (e.g. news, programmes, a-z &c.) to find something else of interest, then navigate out again to another resource.

Site owners effectively thought of their sites as silos – self-contained objects, webs of pages, with a handful of doors (links) in and out – well, even if they didn’t think of them as silos they sure treated them as such. But as Tom Coates puts it, Web 2.0 is about moving from a “web of pages to a web of data”:

A web of data sources, services for exploring and manipulating data, and ways that users can connect them together.

This has some important implications for the design of web sites. Users expect to be able to navigate directly from resource to resource. From concept to concept. Look at the YouTube design – right there on the right are links to other related videos, and the top level navigation is lightweight and focused around aggregations or list views.

Lego Darth Vader Canteen Incident

In other words, web 2.0 sites have two main classes of page: primary objects or resources, and aggregation views which give users multiple routes into the same resource. And each resource is located at a single URL. That last point might seem painfully self-evident – ‘one resource = one URL’ – but either it’s not that obvious or it’s difficult to achieve.

Wikipedia are the masters of this – they work to ensure that they only have one entry per concept, which means they have one URL per concept. And that’s a lot better, if harder to achieve, than one URL per resource. On Wikipedia an entry will be moved, merged or split to ensure that it only deals with one concept. But even if your resource deals with more than one concept, it should only be found at one (persistent) URL: so that search engines can index it properly (and people can find it); so people can link to it; and so third party applications can be integrated with it. Simon Willison explains why, although people may well be able to discern that two URLs probably point to the same thing, machines can’t.
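
Simon Willison’s point is easy to demonstrate with a small (entirely hypothetical) example – to a machine, URL variants are just different strings until the publisher does the canonicalising for them:

    # Hypothetical URLs: a human can guess these point at the same resource,
    # but to a machine they are simply three different identifiers.
    variants = [
        "http://example.org/products/garden-swing",
        "http://example.org/products/garden-swing/",
        "http://example.org/products/garden-swing?ref=homepage",
    ]
    print(len(set(variants)))  # 3 - naive comparison sees three resources

    # The robust fix sits with the publisher: pick one canonical URL and
    # redirect (301) the others to it, so links, search indexes and third
    # party applications all accumulate against a single identifier.
    CANONICAL = "http://example.org/products/garden-swing"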

Once you have a single resource at a single persistent URL you can start to do some interesting things. You can make that resource available in a variety of different formats, each optimised for different uses: HTML for web browsers, XHTML MP or WML for mobile, JSON for Ajax applications etc. You can start to expose your data to other web applications and then you can start to benefit from the network effect.
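
Here’s a minimal sketch of that idea using Python’s standard library (the data and port are made up, and real content negotiation – quality values, languages, mobile profiles – gets more involved): one URI, one resource, with the representation chosen from the Accept header rather than from separate URLs per format.

    # Minimal content negotiation sketch: one URI, several representations.
    # The resource and its URL stay the same; only the format varies.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RESOURCE = {"name": "Garden swing", "price": "£89"}  # made-up data

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            accept = self.headers.get("Accept", "")
            if "application/json" in accept:              # Ajax applications
                body, ctype = json.dumps(RESOURCE), "application/json"
            else:                                         # web browsers
                body = f"<h1>{RESOURCE['name']}</h1><p>{RESOURCE['price']}</p>"
                ctype = "text/html"
            payload = body.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", ctype + "; charset=utf-8")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()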

One consequence of a network effect is that the purchase of a good by one individual indirectly benefits others who own the good — for example by purchasing a telephone a person makes other telephones more useful.

Making your site and its data available in this way means that others will be able to link to your resources to help give context to their content. And if you’ve done your job well they won’t even need to manually ‘link’ to it – if you happen to be using a common set of identifiers and vocabularies then web applications can do a lot of the work for you.

If, however, you start off by thinking about web pages, site maps, left hand navigation and visual design, you will very quickly photoshop yourself into a corner. You will find it difficult to create a web of data; instead you will end up with a bag of pages, others won’t be able to integrate with your data, and your relevance on the web will fall or fail to start.

And if you are after a case study of what to do, you should have a look at Matt’s presentation at FOWA on the development of Dopplr.