Scientific publishing on the Web

As usual these are my thoughts, observations and musings, not those of my employer.

Scientific publishing has remained largely unchanged since 1665. Scientific discoveries are still published in journal articles, where the article is a review, a piece of metadata if you will, of the scientists’ research.

Cover of the first issue of Nature, 4 November 1869.

This is of course not all bad. For example, I think it is fair to say that this approach has played a part in creating the modern world. The scientific project has helped us understand the universe, helped eradicate diseases, helped decrease child mortality and helped free us from the drudgery of mere survival. The process of publishing peer reviewed articles is the primary means of disseminating this human knowledge and as such has been, and remains, central to the scientific project.

And, if I am being honest, nor is it entirely fair to claim that things haven’t changed in all those years – clearly they have. Recently new technologies, notably the Web, have made it easier to publish and disseminate those articles, which in turn has led to changes in the associated business models of publishers, e.g. Open Access publications.

However, it seems to me that scientific publishers and the scientific community at large have yet to fully utilize the strengths of the Web.

Content is distributed over HTTP, but what is distributed is still, in essence, a print journal delivered over the Web. Little has changed since 1665 – the primary objects, the things an STM publisher publishes, remain the article, the issue and the journal.

The power of the Web is its ability to share information via URIs and, more specifically, its ability to globally distribute a wide range of documents and media types, from text and video to raw data and software (whether as source code or as binaries). The second, and possibly more powerful, aspect of the Web is its ability to allow people to recombine information, to make assertions and statements about things in the world and information on the Web. These assertions can create new knowledge and aid the discoverability of information.

This is not to say that there shouldn’t be research articles and journals – both provide value. For example, a journal provides a useful point of aggregation and quality assurance to the author and reader. The article is an immutable summary of the researcher’s work at a given date and, of course, the paper remains the primary means of communication between scientists. However, the Web provides mechanisms to greatly enhance the article, to make it more discoverable and to place it in a wider context.

In addition to the published article, STM publishers already publish supporting information in the form of ‘supplementary information’; unfortunately this is often little more than a PDF document. However, it is also not clear (to me at least) whether the article is the right location for some of this material – a more useful approach seems to be that of the ‘Research Object’ [pdf], semantically rich aggregations of resources, as proposed by the Force11 community.

It seems to me that the notion of a Research Object as the primary published object is a powerful one. One that might make research more useful.

What is a Research Object?

Well, what I mean by a Research Object is a URI (and, if one must, a DOI) that identifies a distinct piece of scientific work: an Open Access ‘container’ that would allow an author to group together all the aspects of their research in a single location. The resources within it might include:

  • The published article or articles if a piece of research resulted in a number of articles (whether they be OA or not);
  • The raw data behind the paper(s) or individual figures within the paper(s) (published in a non-proprietary format, e.g. CSV not Excel);
  • The protocols used (so an experiment can be easily replicated);
  • Supporting or supplementary video;
  • URLs to News and Views or other commentary from the Publisher or elsewhere;
  • URLs to news stories;
  • URLs to university reading lists;
  • URLs to profile pages of the authors and researchers involved in the work;
  • URLs to the organizations involved in the work (e.g. funding bodies, host university or research lab etc.);
  • Links to other research (both historical, i.e. bibliographic information, and research that has occurred since publication).

Furthermore, the relationships between the different entities within a Research Object should be explicit. It is not enough to treat a Research Object as a bag of stuff; there should be stated and explicit relationships between the resources held within it. For example, the relationship between the research and the funding organization should be defined via a vocabulary (e.g. funded_by); likewise, any raw data should be identified as such and, where appropriate, linked to the relevant figures within a paper.

Something like this:

The major components of a Research Object.
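
To make the idea concrete, here is a minimal sketch of how a Research Object’s explicit relationships might be expressed as RDF using Python’s rdflib. The ex: vocabulary, the predicate names (funded_by, has_raw_data and so on) and every URI below are hypothetical, chosen purely for illustration.

```python
# A minimal, hypothetical sketch: a Research Object as a small RDF graph with
# explicit, typed relationships rather than an untyped bag of resources.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/vocab/")            # hypothetical vocabulary
ro = URIRef("http://example.org/research-object/1")    # the Research Object's URI

g = Graph()
g.bind("ex", EX)

g.add((ro, RDF.type, EX.ResearchObject))
g.add((ro, EX.has_article, URIRef("http://example.org/articles/1")))          # the published article
g.add((ro, EX.has_raw_data, URIRef("http://example.org/data/figure-2.csv")))  # raw data, open format
g.add((ro, EX.has_protocol, URIRef("http://example.org/protocols/1")))        # so the experiment can be replicated
g.add((ro, EX.funded_by, URIRef("http://example.org/organisations/funder")))  # funding body
g.add((ro, EX.authored_by, URIRef("http://example.org/people/alice")))        # researcher profile page

# Raw data explicitly linked to the figure it underlies, not just 'attached'
g.add((URIRef("http://example.org/data/figure-2.csv"),
       EX.supports_figure,
       URIRef("http://example.org/articles/1#figure-2")))

print(g.serialize(format="turtle"))
```

Serialising the same graph as Turtle, JSON-LD or RDF/XML is what makes the Research Object usable by machines as well as people.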

It is important to note that, while the Research Object is open access, the resources it contains may or may not be. For example, the raw data might be open whereas the article might not. People would still be able to reference the Research Object, point to it on the Web, discuss it and make assertions about it.

In the FRBR world a Research Object would be a Work i.e. a “distinct intellectual creation”.

Making research more discoverable

The current publishing paradigm places serious limitations on the discoverability of research articles (or research objects).

Scientists work with others to research a domain of knowledge; in some respects, therefore, research articles are metadata about the universe (or at least about the experiment). They are assertions, made by a group of people, about a particular thing, based on their research and the data gathered. It would therefore be helpful if scientists could discover prior research along these lines of enquiry.

Implicit in the above description of a Research Object is the need to publish URIs for people, organisations (universities, research labs, funding bodies, etc.) and areas of research.

These URIs, and the links between them, would provide a rich network of science – a graph that describes and maps out the interrelationships between people, organisations and their areas of interest, each annotated with Research Objects. Such a graph would also allow for pages such as:

  • All published research by an author;
  • All published research by a research lab;
  • The researchers that have worked together in a lab;
  • The researchers who have collaborated on a published paper;
  • The areas of research by lab, funding body or individual;
  • Etc.

Such a graph would both help readers ‘follow their nose’ to discover research and provide meaningful landing pages for search.
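
As a rough sketch of how such pages might be generated, the graph described above could simply be queried; again, the ex: vocabulary and the URIs are hypothetical.

```python
# A hypothetical sketch: generating an "all published research by an author"
# page by querying the graph with SPARQL via rdflib.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("ex", EX)
g.add((URIRef("http://example.org/research-object/1"),
       EX.authored_by,
       URIRef("http://example.org/people/alice")))

results = g.query(
    """
    SELECT ?ro WHERE {
        ?ro ex:authored_by <http://example.org/people/alice> .
    }
    """,
    initNs={"ex": EX},
)
for row in results:
    print(row.ro)  # every Research Object authored by this person
```

The same pattern would serve the other pages in the list above: swap the predicate (worked_in_lab, funded_by, area_of_research and so on) and you get a different slice of the graph.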

Digital curation

One of the significant benefits a journal brings to its readership is the role of curation. The editors of a journal select and publish the best research for their readers. On the Web there is no reason this role couldn’t be extended beyond the editor to the users and readers of a site.

Different readers will have different motivations for curating content, but providing a mechanism for users to aggregate and annotate Research Objects offers a new and potentially powerful way for scientific discoveries to be surfaced.

For example, a lecturer might curate a collection of papers for an undergraduate class on genomics, combining Research Objects with their own comments, video and links to other content across the web. This collection could then be shared with, and used more widely by, other lecturers. Alternatively, a research lab might curate a collection of papers relevant to their area of research but choose to keep it private.
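
One way to picture such a curated collection is as a very simple data structure: a list of Research Object URIs, each with the curator’s own annotation, plus a flag for whether the collection is public or private. This is only a sketch; the field names are hypothetical.

```python
# A hypothetical sketch of a curated collection of Research Objects.
from dataclasses import dataclass, field

@dataclass
class CuratedCollection:
    title: str
    curator: str                      # URI of the curator's profile page
    public: bool = True               # False for a lab's private collection
    items: list = field(default_factory=list)   # (research_object_uri, annotation) pairs

    def add(self, research_object_uri: str, annotation: str = "") -> None:
        self.items.append((research_object_uri, annotation))

genomics_101 = CuratedCollection(
    title="Genomics 101 reading list",
    curator="http://example.org/people/lecturer",
)
genomics_101.add("http://example.org/research-object/1", "Read before week 3's lab.")
```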

Providing a rich web of semantically linked resources in this way would allow for the development of a number of different metrics (in addition to Impact Factor). These metrics would not need to be limited to scientific impact; they could be extended to cover:

  • Educational indices – a measure of the citations in university reading lists;
  • Social impact – a measure of citations in the mainstream media;
  • Scientific impact of individual papers;
  • Impact of individual scientists or research labs;
  • Etc.

Such metrics could be used directly, e.g. in research indexes, or indirectly, e.g. to help readers find the best or most relevant content.
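
As a sketch of how one such metric might fall out of the graph, an ‘educational index’ could simply count the university reading lists that link to a Research Object. The cited_in_reading_list predicate is hypothetical, as before.

```python
# A hypothetical sketch: counting reading-list citations of a Research Object
# as a crude "educational index".
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/vocab/")
ro = URIRef("http://example.org/research-object/1")

g = Graph()
g.add((ro, EX.cited_in_reading_list,
       URIRef("http://example.org/reading-lists/genomics-101")))

educational_index = sum(1 for _ in g.objects(ro, EX.cited_in_reading_list))
print(educational_index)  # 1
```

A social-impact measure would be the same count over a different predicate, such as links from mainstream-media articles.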

Finally it is worth remembering that in all cases this information should be available for both humans and machines to consume and process. In other words this information should be available in structured, machine readable formats.

Our development manifesto

Manifestos are quite popular in the tech community — obviously there’s the agile manifesto, I’ve written before about the kaizen manifesto, and then there’s the Manifesto for Software Craftsmanship. They all try to put forward a way of working, a way of raising professionalism and a way of improving the quality of what you do and build.

Banksy by rocor, some rights reserved.

Anyway, when we started work on the BBC’s Nature site we set out our development manifesto. I thought you might be interested in it:

  1. Persistence — only mint a new URI if one doesn’t already exist; once minted, never delete it
  2. Linked open data — data and documents describe the real world; things in the real world are identified via HTTP URIs; links describe how those things are related to each other.
  3. The website is the API
  4. RESTful — the Web is stateless, work with this architecture, not against it.
  5. One Web – one canonical URI for each resource (thing), dereferenced to the appropriate representation (HTML, JSON, RDF, etc.); see the sketch after this list.
  6. Fix the data, don’t hack the code
  7. Books have pages, the web has links
  8. Do it right or don’t do it at all — don’t hack in quick fixes or ‘tactical solutions’; they are bad for users and bad for the code.
  9. Release early, release often — small, incremental changes are easy to test and prove.
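
As an illustration of point 5, here is a rough sketch of content negotiation over a single canonical URI, written with Flask purely for convenience. The route, the data and the representations below are hypothetical; this is not the code we actually ran.

```python
# A hypothetical sketch of "One Web": a single canonical URI per thing,
# dereferenced to whichever representation the client asks for.
from flask import Flask, jsonify, request

app = Flask(__name__)

THINGS = {"badger": {"name": "European badger", "class": "Mammalia"}}  # placeholder data

@app.route("/nature/life/<thing_id>")
def thing(thing_id):
    data = THINGS.get(thing_id)
    if data is None:
        return "Not found", 404
    best = request.accept_mimetypes.best_match(["application/json", "text/html"])
    if best == "application/json":
        return jsonify(data)                       # machine-readable representation
    return f"<h1>{data['name']}</h1>"              # human-readable representation
```

The point is that there is one URI for the thing itself; HTML, JSON and RDF are just different views of it, chosen by the Accept header rather than by separate URLs.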

It’s worth noting that we didn’t always live up to these standards — but at least when we broke our rules we did so knowingly and had a chance of fixing them at a later date.

What I wish I had made at bbc.co.uk if I stayed

In many ways I’ve been very lucky at the BBC: I’ve helped make some cool stuff – well, stuff I’m proud of. But since I’ve decided to leave I’ve started to wonder what else I would have liked to have made, if I had stayed at the BBC.

There’s a bit of a health warning, however: these are just ideas. I’ve no real idea if they are that practical and they almost certainly don’t fit into the current strategy.

Get Excited and make things

My ideas…

Lab UK meets So You Want To Be A Scientist

Lab UK is the part of the BBC’s website where you can participate in scientific experiments. They’ve done some cool stuff – including Brain Test Britain which had 67,000 people sign up and resulted in a paper in Nature [pdf].

The various experiments are tied into TV programmes and this is really important because it helps generate interest and get the number of participants required to make the experiment work. However, it also means that the experiments are designed in advance, by the scientists, and the public’s role is one of test subject.

The experiments do help build knowledge but they probably don’t help people understand science.

So here’s the idea – a bit like Radio 4’s “So You Want To Be A Scientist”, the process would start with people suggesting ideas, questions they would like answered; the site would need to provide sufficient support to help people refine their ideas. It might even use material from the BBC archive to help explain some of the basics, but at its heart it would be a collaborative process.

The ideas would then be voted on and the most popular would be taken forward. With the help of scientists the experiment would be designed, built and carried out on the Lab UK platform, giving these amateur experiments potential access to a huge audience.

The process would be a rolling series of experiments designed and carried out by the public.

History through the eyes of the BBC

The BBC makes a lot of programmes about history – but, much more significantly, it has been part of, or at least has recorded, a lot of our more recent history.

So rather than making a history site about the Romans, the Victorians or whatever I would use the BBC archive to tell the history of the world as seen through the eyes of the BBC.

Combining news stories, clips from programmes (broadcast or not), music and photographs, the site would tell the story of the world since 18 October 1922.

The site would chart the major political, scientific, sporting, cultural and technological events since 1922 but also the minor events – the ones that remind us of our own past.

The site would provide a page for every day, month, year and decade since the BBC came into existence as well as pages for the people, organisations and events the BBC has featured in that time.

Basically, a URI for everything the BBC has recorded in the last 90-odd years.

The site would also allow members of the public to add their thoughts and memories (shared under whatever licensing terms they wish) to enrich it further, creating a digital public space for the UK.
