Google Chrome: why?

The Internet is all abuzz with Google’s open source web browser, Chrome. But you have to ask why, and whether it’s even a big deal. Not why there’s all the interest, but why Google bothered to build their own browser. After all, they could have worked with Mozilla to add these features to Firefox – instead Google went and built their own browser.

Introducing Google Chrome

So clearly I don’t know, but I wonder whether Google just got a bit fed up with waiting for the features they wanted and went ahead and built their own browser, while leaving the door open to merge those features back into Firefox at a later date. Google are a big supporter of Firefox, and the idea of a Google browser has been associated with Firefox in the past; Sergey Brin has said he is keen to see Firefox and Chrome become more unified in the future.

“It is probably worth noting that they [Mozilla Corp] are across the street and they come over here for lunch,” Brin said of Mozilla employees’ visits to cafeterias at the Googleplex headquarters. “I hope we will have more and more unity over time.”

But what features are important to Google? After all, as Jon Hicks points out, from an interface point of view, Chrome brings nothing new – all the features are already available in existing browsers. But I don’t think that’s the point and I don’t think that’s why it’s important. Google want to offer much richer and, more importantly, faster web applications.

The current browsers, including Firefox, just can’t cut it: JavaScript isn’t fast enough (thereby limiting the UX), browsers are single-threaded, and they aren’t stable enough. If Google want to challenge Microsoft (or anyone else for that matter) in the desktop space, they need a better platform. Of course others have sought to solve the same problem – notably Adobe with AIR and Microsoft with Silverlight. Google’s solution is, I think, much neater – build an open source browser that supports multithreading and fast JavaScript execution, and stuff Google Gears into the back end so it works offline. Joel Spolsky suggested something similar a while back:

So if history repeats itself, we can expect some standardization of Ajax user interfaces to happen in the same way we got Microsoft Windows. Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API.

Imagine, for example, that you’re Google with GMail, and you’re feeling rather smug. But then somebody you’ve never heard of, some bratty Y Combinator startup, maybe, is gaining ridiculous traction selling NewSDK, which combines a great portable programming language that compiles to JavaScript, and even better, a huge Ajaxy library that includes all kinds of clever interop features. Not just cut ‘n’ paste: cool mashup features like synchronization and single-point identity management (so you don’t have to tell Facebook and Twitter what you’re doing, you can just enter it in one place). And you laugh at them, for their NewSDK is a honking 232 megabytes … 232 megabytes! … of JavaScript, and it takes 76 seconds to load a page. And your app, GMail, doesn’t lose any customers.

But then, while you’re sitting on your googlechair in the googleplex sipping googleccinos and feeling smuggy smug smug smug, new versions of the browsers come out that support cached, compiled JavaScript. And suddenly NewSDK is really fast. And Paul Graham gives them another 6000 boxes of instant noodles to eat, so they stay in business another three years perfecting things.

Of course the big difference is that it’s Google that have gone and launched the new browser that supports cached, compiled JavaScript.

With the release of Chrome, Google can now release versions of their apps that are richer and more responsive. Chrome, then, isn’t targeted at Firefox – I think Chrome is more of a threat to Silverlight and AIR. After all, if you can write a web app in JavaScript that’s just as rich and responsive as anything you can write in Silver-Air, why would you bother with the proprietary approach?

Chrome is in effect a way to deliver a Google OS to your desktop, one that lets you run fast JavaScript applications. And if you believe Sergey Brin, Firefox will, in time, adopt the same technologies as Chrome; which is of course just what Google want – maximum market penetration for the browsers that support their new rich web apps.

Interesting stuff for 2008-08-09

"Dawkins and Darwin" by Kaptain Kobold. Used under licence.

Darwin’s theory of evolution was simple, beautiful, majestic and awe-inspiring [Charlie Brooker – The Guardian]
“But because it contradicts the babblings of a bunch of made-up old books, it’s been under attack since day one. Had the Bible claimed gravity is caused by God pulling objects toward the ground with magic threads, we’d still be debating Newton with idiots.”

The Bible and the Quran agree: Insects have four legs [Dwindling In Unbelief]
Why do people believe this stuff?

BioNumbers – The Database of Useful Biological Numbers
If you’re interested in the number of prokaryotes in cattle rumen worldwide, this is the place for you. If not, this site might not be your thing. :)

On a different note, some interesting tech stuff:

Open Source implementation of Yahoo! Pipes with added semweb goodness [deri.org]
Inspired by Yahoo’s Pipes, DERI Web Data Pipes implement a generalization which can also deal with formats such as RDF (RDFa), Microformats and generic XML.

ActiveRDF – a library for accessing RDF data from Ruby [activerdf.org]
A library for accessing RDF data from Ruby programs. It can be used as a data layer in Ruby-on-Rails, similar to ActiveRecord (which provides an O/R mapping to relational databases).

Load Balancing & QoS with HAProxy [igvita.com]
The worst thing you can do is queue up another request behind an already long running process. To mitigate the problem HAProxy goes beyond a simple round-robin scheduler, and implements a very handy feature: intelligent request queuing!
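The queuing feature described there hinges on per-server connection caps. A minimal, hypothetical haproxy.cfg sketch of the idea (server names and addresses are made up; consult the HAProxy docs for the real directives in your version):

```haproxy
# Sketch: cap each backend server at a few in-flight requests and let
# HAProxy queue the overflow, instead of stacking requests behind an
# already long-running process on the app server.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout queue   10s    # fail fast if a request waits too long in the queue

backend app
    balance roundrobin
    # maxconn limits concurrent requests per server; extras wait in
    # HAProxy's queue until a slot frees up
    server app1 10.0.0.1:8080 maxconn 4
    server app2 10.0.0.2:8080 maxconn 4
```

The point is that a mongrel-style single-threaded app server handles one request at a time, so a low `maxconn` plus a queue timeout gives far better worst-case latency than naive round-robin.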

RA DIOHEA_D / HOU SE OF_C ARDS

Radiohead are miles ahead of the pack when it comes to content innovation. First off, when they released their seventh album, In Rainbows, they let customers choose their own price, which, according to Thom Yorke, outstripped the combined profits from digital downloads of all of the band’s other studio albums. Then, with their single ‘Nude’, folk were given the opportunity to remix the track. And now their new video for ‘House of Cards’ has been made without any cameras, just lasers and data. And best of all, they are giving you the chance to play with the data, under a licence that allows remixing.

More details are available at Google Code (http://code.google.com/creative/radiohead/), which explains that:

No cameras or lights were used. Instead two technologies were used to capture 3D images: Geometric Informatics and Velodyne LIDAR. Geometric Informatics scanning systems produce structured light to capture 3D images at close proximity, while a Velodyne Lidar system that uses multiple lasers is used to capture large environments such as landscapes. In this video, 64 lasers rotating and shooting in a 360 degree radius 900 times per minute produced all the exterior scenes.

The site also includes a short documentary showing how the video was made and the 3D plotting technologies behind it.

There’s also a 3D viewer to explore the data visualization and, best of all, the data itself, with instructions on how to create your own visualizations. All very cool.
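If you want to play with the released data yourself, a tiny sketch of the first step might look like this. It assumes the point-cloud files are CSV rows of x, y, z and laser intensity – treat that layout (and the sample values) as an assumption, not the project’s documented format:

```python
# Sketch: parse "House of Cards"-style point-cloud data and flatten it
# to 2D. The CSV layout (x,y,z,intensity) is an assumption; check the
# README that ships with the actual data.
import csv
import io

# Hypothetical sample rows standing in for a real data file.
SAMPLE = """\
10.0,5.0,2.0,120
11.5,5.2,2.1,90
9.8,4.9,1.9,200
"""

def load_points(text):
    """Return a list of (x, y, z, intensity) tuples."""
    points = []
    for row in csv.reader(io.StringIO(text)):
        if len(row) != 4:
            continue  # skip malformed rows
        x, y, z, intensity = map(float, row)
        points.append((x, y, z, intensity))
    return points

def project_front(points):
    """Drop the z axis: a simple front-on 2D projection for plotting."""
    return [(x, y) for x, y, _z, _i in points]

points = load_points(SAMPLE)
print(len(points))               # prints 3
print(project_front(points)[0])  # prints (10.0, 5.0)
```

From there you could feed the 2D points to any plotting library, or keep the z axis and intensity for a proper 3D render.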

links for 2008-02-19

Links for 2008.01.08

» Search Wikia has just been launched – but why?
It isn’t as good as Google, but they make their index freely available. And that’s cool.

» Blu-ray wins
Warner Bros would no longer support Toshiba’s HD DVD high-definition disc format, and would instead throw all its weight behind the rival Blu-ray.

» Google, Facebook and Plaxo Join DataPortability.org [ReadWriteWeb]
Good bye customer lock-in, hello to new privacy challenges. If things go right, today could be a very important day in the history of the internet.

Perl on Rails

I’ve just published a post over at the BBC’s Radio Labs blog about ‘Perl on Rails’, the MVC framework we’ve written to dynamically publish /programmes and /music (next year’s project).

This isn’t quite as insane as it might appear. Remember that we have some rather specific non-functional requirements. We [the BBC] need to use Perl, there are restrictions on which libraries can and can’t be installed on the live environment, and we needed a framework that could handle significant load. What we’ve built ticks all those boxes. Our benchmarking figures point to significantly better performance than Ruby on Rails (at least for the applications we are building), it can live in the BBC technical ecosystem, and it provides a familiar API to our web development and software engineering teams, with a clean separation of duties: rendering is completely separated from models and controllers.
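The shape of that separation is the standard MVC dispatch pattern. The BBC framework itself is Perl and its API isn’t public, so here is a deliberately generic sketch of the pattern in Python – every name in it is hypothetical, not the actual framework:

```python
# A minimal MVC dispatch sketch: controllers return plain data, and
# rendering is a separate step that never touches the model layer.
# All names and data here are made up for illustration.

ROUTES = {}

def route(path):
    """Register a controller function for a URL path."""
    def register(controller):
        ROUTES[path] = controller
        return controller
    return register

# Model: data access only, no presentation logic.
def find_programme(pid):
    return {"pid": pid, "title": "The Archers"}

# Controller: maps request parameters to model data.
@route("/programmes")
def programmes_controller(params):
    return find_programme(params["pid"])

# View: rendering is completely separated from models and controllers.
def render(template, data):
    return template.format(**data)

def dispatch(path, params, template):
    controller = ROUTES[path]
    return render(template, controller(params))

html = dispatch("/programmes", {"pid": "b006qpgr"},
                "<h1>{title}</h1> ({pid})")
print(html)  # prints <h1>The Archers</h1> (b006qpgr)
```

Because the controller hands back plain data, swapping the HTML template for a JSON or YAML serialiser gives you new views without touching models or controllers – which is exactly what makes multiple output formats cheap to add.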

We’ve also adopted an open source approach to its development which is already starting to bear fruit and is personally and professionally hugely rewarding.

In general the BBC’s Software Engineering community is pretty good at sharing code. If one team has something that might be useful to another, there’s no problem installing and using it elsewhere. What we’re not so good at is coordinating our effort so that we can all contribute to the same code base – in short, we don’t really have an open source mentality between teams – we’re more cathedral and less bazaar, even if we freely let each other into our cathedrals.

With the Perl on Rails framework I was keen to adopt a much more open source model – and actively encouraged other teams around the BBC to contribute code – and that’s pretty much what we’ve done. In the few weeks since the programmes beta launch, JSON and YAML views have been written – due to go live next month. Integration with the BBC’s centrally managed controlled vocabulary – to provide accurate term extraction and therefore programme aggregation by subject, person or place – is well underway and should be with us in the new year. And finally, the iPlayer team are building the next-generation iPlayer browse using the framework. All this activity is great news. With multiple teams contributing code (rather than forking it), everyone benefits from faster development cycles, less bureaucracy and enhanced functionality.

UPDATE (2007-12-05)

We’re releasing the ‘Perl on Rails’ code under an open source license. James has just written about this, and about the BBC’s infrastructure, over at the BBC’s Internet blog in response to I am Seb’s post.

UPDATE (2007-12-03)

Wow! Well, this certainly generated quite a lot of interest. I’m not sure that I will be able to address everyone’s issues – especially all the folk over at Slashdot – but I’ll write another post to address as much as I can. In the meantime I just wanted to pick up on a few of the major themes.

Why didn’t we use Catalyst or something else that already existed? As Duncan indicated, the simple answer is that we can’t install Catalyst etc. on the live environment. The BBC’s infrastructure is limited to Perl 5.6 with a limited set of approved modules, and there are further limitations on what is allowed (making unmoderated system calls, etc.).

Access to the code: I’ll see what I can do. The BBC does open source some of its code at http://www.bbc.co.uk/opensource/; I don’t know if we will be able to open source this code, but I’ll let you know. However, it’s worth bearing in mind that we ended up writing this app to work within the BBC’s infrastructure (Perl 5.6, BBC-approved modules etc.), so if we did release the code under an OSS license we would still need to maintain that requirement (clearly the code could be forked, etc.).

Too many files on a file system: @Viad R. – nice solution, and yes, that solves the seek-time issue. Unfortunately it doesn’t solve the other problems we needed solving. These include building the sort of interactive products we want to build, and maintaining up-to-date links between pages: when we publish information about a programme we not only need to publish a page for that programme but also update a large number of other pages that should now link to it (or remove links from those that shouldn’t). Trying to work out which pages to update becomes a nightmare; it’s much easier to render the pages dynamically.

BBC web infrastructure: not sure where to start with this one. You may find this hard to believe, but the vast majority of bbc.co.uk is published statically – people write HTML and FTP those files onto the live servers. This provides a reliable service, if not a terribly exciting one. It’s also clearly restrictive, which is why we needed to build a solution like this in the first place, rather than using an existing framework. Now I’m very aware that this whole endeavour – building ‘Perl on Rails’ – seems bonkers to most people outside the BBC. But what isn’t bonkers is the fact that we have built an elegant application (with a small team) and deployed it within the constraints of the infrastructure, and in doing so delivered a new product to our users and helped move the debate on inside the BBC.

UPDATE (2007-12-02)

Since I posted about this work there’s been quite a lot of chat about why we didn’t simply use an existing framework – like Catalyst, for example. The simple answer is we can’t: we are restricted in what can be installed on the live environment (Perl 5.6 etc.). ‘I am Seb’ has some more information on the infrastructure. Believe me, if we could have simply used an existing framework then we would have done – we all want to build great audience-facing services – unfortunately to get there we sometimes need to do some unusual foundation work first. Still, now that we’ve done it we can get on with building some great web apps.