Google Chrome why?

The Internet is all abuzz with Google’s open source web browser, Chrome. But you have to ask why, and even whether it’s a big deal. Not why there’s all the interest, but why Google bothered to build their own browser. After all, they could have worked with Mozilla to add these features to Firefox – instead Google went and built their own browser.

Introducing Google Chrome

So clearly I don’t know, but I wonder whether Google just got a bit fed up with waiting for the features they wanted and went ahead and built their own browser, while leaving the door open to merge these features back into Firefox at a later date. Google are a big supporter of Firefox, and the idea of a Google browser has been associated with Firefox in the past; Sergey Brin has said he is keen to see Firefox and Chrome become more unified in the future.

“It is probably worth noting that they (Mozilla Corp) are across the street and they come over here for lunch,” Brin said of Mozilla employees’ visits to the cafeterias at the Googleplex headquarters. “I hope we will have more and more unity over time.”

But what features are important to Google? After all, as Jon Hicks points out, from an interface point of view, Chrome brings nothing new – all the features are already available in existing browsers. But I don’t think that’s the point and I don’t think that’s why it’s important. Google want to offer much richer and, more importantly, faster web applications.

The current browsers, including Firefox, just can’t cut it. JavaScript isn’t fast enough (thereby limiting the UX), browsers are single threaded and they aren’t stable enough. If Google want to challenge Microsoft (or anyone else for that matter) in the desktop space they need a better platform. Of course others have sought to solve the same problem – notably Adobe with Air and Microsoft with Silverlight. Google’s solution is, I think, much neater – build an open source browser that supports multithreading and fast JavaScript execution, and stuff Google Gears into the back end so it works offline. Joel Spolsky suggested something similar a while back:

So if history repeats itself, we can expect some standardization of Ajax user interfaces to happen in the same way we got Microsoft Windows. Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API

Imagine, for example, that you’re Google with GMail, and you’re feeling rather smug. But then somebody you’ve never heard of, some bratty Y Combinator startup, maybe, is gaining ridiculous traction selling NewSDK, which combines a great portable programming language that compiles to JavaScript, and even better, a huge Ajaxy library that includes all kinds of clever interop features. Not just cut ‘n’ paste: cool mashup features like synchronization and single-point identity management (so you don’t have to tell Facebook and Twitter what you’re doing, you can just enter it in one place). And you laugh at them, for their NewSDK is a honking 232 megabytes … 232 megabytes! … of JavaScript, and it takes 76 seconds to load a page. And your app, GMail, doesn’t lose any customers.

But then, while you’re sitting on your googlechair in the googleplex sipping googleccinos and feeling smuggy smug smug smug, new versions of the browsers come out that support cached, compiled JavaScript. And suddenly NewSDK is really fast. And Paul Graham gives them another 6000 boxes of instant noodles to eat, so they stay in business another three years perfecting things.

Of course the big difference is that it’s Google that have gone and launched the new browser that supports cached, compiled JavaScript.

With the release of Chrome, Google can now release versions of their apps that are richer and more responsive. Chrome, then, isn’t targeted at Firefox; I think Chrome is more of a threat to Silverlight and Air. After all, if you can write a web app in JavaScript that’s just as rich and responsive as anything you can write in Silverlight or Air, why would you bother with the proprietary approach?

Chrome is in effect a way to deliver a Google OS to your desktop, one that lets you run fast JavaScript applications. And if you believe Sergey Brin, Firefox will, in time, adopt the same technologies as Chrome; which is of course just what Google want – maximum market penetration for those browsers that support their new rich web apps.

There’s no such thing as a document – only HTTP?

The closing keynote at XTech 2008 saw Sean McGrath discussing “Orang utans, Oxen and Ogham Stones”. The central premise of the presentation is that as the web becomes more dynamic, more and more of its data is only accessible when it’s requested – which can mean that it’s inaccessible to machines and therefore to the rest of the web. There are no persistent documents.

Sean argued that we have three models operating on the web.

  • Model A is the platonic model. Documents (already) exist on the server – you simply request them over HTTP.
  • Model B has documents existing on the server, but they are dynamically rendered, transforming the content in the process using, for example, CSS and JavaScript.
  • Model C has nothing existing until you observe it. The document is composed and rendered when requested – Just In Time programmatic generation of content.

Model C is Turing complete, user-sensitive, location-sensitive and device-sensitive, and model C is winning, at least on the client side, with Ajax, Flash, Silverlight and Air. It’s now relatively common to view the source of a page and see no actual content, just JavaScript to generate the content.
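Model C in miniature might look something like the sketch below (the story data and element id are hypothetical, purely for illustration): view source on such a page and you see only script – no content, no markup to spider.

```javascript
// The "document" doesn't exist until this script runs on the client.
var stories = [
  { title: 'Chrome launches', date: '2008-09-02' },
  { title: 'XTech 2008 keynote', date: '2008-05-09' }
];

// Compose the page Just In Time, programmatically.
function render(items) {
  var html = '';
  for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i].title + ' (' + items[i].date + ')</li>';
  }
  return '<ul>' + html + '</ul>';
}

// In a browser: document.getElementById('news').innerHTML = render(stories);
console.log(render(stories));
```

None of that content exists at a URI of its own – which is precisely Sean’s worry.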

So does this matter? Sean thinks so yes. He fears that this data is siloed, trapped within the code and not accessible via addressable URIs. And if we lose URIs and hypertext then we also lose deep linking – and what about search engines? Will the Googlebot download that JavaScript and eval it to spider it? And what about everyone else? URLs are great for wombling – they can be bookmarked, tagged and mashed-up.

If Sean is right then rather than the web being made up of documents with some code (as it once was) we will be left with a web of few documents and lots of applications. A Web which is really just HTTP.

But is this all true? I’m not so sure.

Sure there has been a rise in the use of client side scripting to dynamically render content (notably with the rise of Ajax web apps) and there are plenty of server side applications delivering dynamic content – but I don’t think we should be worried about server side apps, as long as they are well designed.

It seems to me that we have three classes of webpage:

  • Resources – individual objects, which if designed well live quite happily at persistent URLs;
  • Aggregations – listings and groupings of those resources;
  • Web apps – pages that let users manipulate resources.

So for example, even though BBC Programmes is rendered dynamically (by a server side application), the resources are found at persistent URLs and the pages contain lots of lovely, semantic mark-up (there are also plenty of aggregations). Whereas Flickr uses Picnik, a client side photo editing application, to let Flickr users edit their photos.

Is this a problem? I don’t think so, no. After all, as Sean noted, there’s no such thing as a resource, only a representation of one. And this is the best you can ever get – the web is made up of URIs and HTTP. We just need to be careful not to lose sight of the importance of URIs.

Photo: good ol days, by emdot. Used under licence.

Link for 2007.12.29

» Size Is The Enemy aka “Java is the problem” because Java is a statically typed language, it requires lots of tedious, repetitive boilerplate code to get things done [Coding Horror]
Jeff Atwood’s review of Steve Yegge’s Code’s Worst Enemy: “One of the most fundamental and truly effective pieces of advice you can give a software development team – any software development team – is to write less code, by any means necessary.”

» Ruby 1.9—Right for You? [PragDave]
It’s faster and, importantly, it supports Unicode – but on the downside it’s not backward compatible in a few areas, and it’s a development release that’s not ready for production use.

» Google Phone In Spring 2008? [GigaOM]
Google, apparently, has taken a substantial amount of floor space at the upcoming Mobile World Congress trade show in Barcelona, Spain, leading some to speculate that the company might actually be ready to launch its Android-based phones.

» Comet: Low Latency Data for the Browser [Continuing Intermittent Incoherency]
Comet applications can deliver data to the client at any time, not only in response to user input. The data is delivered over a single, previously-opened connection.

» Comet works, and it’s easier than you think [Simon Willison]
“Before taking a detailed look at Comet, my assumption was that the amount of complexity involved meant it was out of bounds to all but the most dedicated JavaScript hackers. I’m pleased to admit that I was wrong: Comet is probably about 90% of the way to being usable for mainstream projects, and the few remaining barriers (Bayeux authentication chief amongst them) are likely to be solved before too long.”
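The long-polling flavour of Comet that these posts describe can be sketched roughly like this (serverWait stands in for a server holding the connection open until it has data; a real client would use XMLHttpRequest for each round-trip):

```javascript
// Long polling, the simplest Comet technique: the client opens a request,
// the server holds it until there is data, and the client reconnects at
// once -- so data reaches the browser at any time, not only on user input.

function serverWait(callback) {
  // Simulates a server that only replies when an event occurs.
  setTimeout(function () { callback('server event'); }, 5);
}

var received = [];

function poll(remaining) {
  if (remaining === 0) return;      // stop after a few rounds for the demo
  serverWait(function (message) {
    received.push(message);         // hand the pushed data to the page
    poll(remaining - 1);            // immediately reopen the connection
  });
}

poll(3);
```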

Google Mashup without the map

James and Joe are using the Google Maps API to give users a novel interface to explore a portfolio of work; rather than using it to provide a mapping interface. This is interesting not just because of what James and Joe have done, but also because it demonstrates an alternative use of the Google Maps API. Bravo!

Google Mash-up without the map

As with all Google Maps, the interface allows users to pan and zoom, as well as add push pins with additional information – text, images or video.

Google’s strategy to win the next API war

Joel has recently published an article speculating on the future standardization of the Ajax user interface:

So if history repeats itself, we can expect some standardization of Ajax user interfaces to happen in the same way we got Microsoft Windows. Somebody is going to write a compelling SDK that you can use to make powerful Ajax applications with common user interface elements that work together. And whichever SDK wins the most developer mindshare will have the same kind of competitive stronghold as Microsoft had with their Windows API.

Imagine, for example, that you’re Google with GMail, and you’re feeling rather smug. But then somebody you’ve never heard of, some bratty Y Combinator startup, maybe, is gaining ridiculous traction selling NewSDK, which combines a great portable programming language that compiles to JavaScript, and even better, a huge Ajaxy library that includes all kinds of clever interop features. Not just cut ‘n’ paste: cool mashup features like synchronization and single-point identity management (so you don’t have to tell Facebook and Twitter what you’re doing, you can just enter it in one place). And you laugh at them, for their NewSDK is a honking 232 megabytes … 232 megabytes! … of JavaScript, and it takes 76 seconds to load a page. And your app, GMail, doesn’t lose any customers.

But then, while you’re sitting on your googlechair in the googleplex sipping googleccinos and feeling smuggy smug smug smug, new versions of the browsers come out that support cached, compiled JavaScript. And suddenly NewSDK is really fast. And Paul Graham gives them another 6000 boxes of instant noodles to eat, so they stay in business another three years perfecting things.

Interesting. I wonder if this is exactly what Google’s strategy is – develop a standardized Ajax SDK and a speedy JavaScript engine, and tie them into Firefox. This could all be launched alongside a new version of Google’s office suite and Firefox 3.

AJAX what is it? (it’s not DHTML)

Despite the fact that AJAX is at the centre of the current Web 2.0 movement it is still a greatly misunderstood technology. So what is it?

Ajax is short for Asynchronous JavaScript and XML – it’s not a technology in its own right, rather a set of established technologies, a development technique if you will, for creating more responsive, interactive web applications such as Google Maps.

Traditional web applications require the entire page to refresh whenever the user interacts with the system. This is because the entire webpage is rebuilt following the transfer of data between the browser and the server (there is a synchronicity between a user’s action and the data transfer between the web browser and the web server – the user clicks on something, data is transferred and the page is rebuilt).

Rebuilding the page in this way introduces a delay and interrupts the user from achieving their goal, which is obviously a bad thing; at least for web applications if not for sites such as this blog or a news site.

The objective of Ajax is to make web applications more responsive by separating the data transfer (which happens in the background) from the user’s actions. This is achieved by placing a piece of code (JavaScript) between the user and the server – the JavaScript (running in the user’s browser) requests data (as XML) in the background and uses it to build the webpage (by manipulating the DOM – or, in earlier implementations, hidden iframes – the JavaScript can rebuild just parts of the page at a time). This separation means there is an asynchronous relationship between data transfer and the user’s interaction with the application.
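That flow can be sketched as follows (requestData and the page object are stand-ins so the example is self-contained; in a browser the request would be made with XMLHttpRequest and the update would touch real DOM elements):

```javascript
// The Ajax pattern: the user keeps working while data is fetched in the
// background, and a callback rebuilds only part of the page when it arrives.

function requestData(url, callback) {
  // Stand-in for an asynchronous XMLHttpRequest round-trip.
  setTimeout(function () {
    callback('<messages><message>Hello</message></messages>');
  }, 5);
}

// Stand-in for the rendered page; in a browser these would be DOM nodes.
var page = { header: 'My Web App', inbox: 'loading…' };

requestData('/inbox.xml', function (xml) {
  // Only the inbox fragment is rebuilt -- the rest of the page is untouched.
  page.inbox = xml;
});

// Execution reaches here before the data arrives: that is the asynchronicity.
console.log(page.inbox); // still 'loading…'
```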

Ajax model

I would now like to draw a distinction between Ajax and DHTML (Dynamic HTML), which is often used to make rollover or drop-down buttons on a web page and is often confused with Ajax.

DHTML, like Ajax, uses client-side scripting (such as JavaScript) to change the presentation of the page. But unlike Ajax, where the JavaScript is used to request data from the server, DHTML only modifies an otherwise “static” HTML page after the page has been fully loaded (the page is requested and delivered synchronously with the user’s request, as per the traditional model).
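A rollover button shows the distinction: the script below changes the presentation of an already-loaded page and never talks to the server (the button object stands in for a DOM image node, purely so the sketch is self-contained):

```javascript
// DHTML: client-side script modifies the page after it has loaded.
// No request ever goes back to the server.

function makeRollover(element, normalSrc, hoverSrc) {
  element.onmouseover = function () { element.src = hoverSrc; };
  element.onmouseout  = function () { element.src = normalSrc; };
}

// Stand-in for a DOM node such as document.getElementById('home-button').
var button = { src: 'home.gif' };
makeRollover(button, 'home.gif', 'home-over.gif');

button.onmouseover();    // the mouse moves over the button…
console.log(button.src); // 'home-over.gif'
button.onmouseout();     // …and away again
console.log(button.src); // 'home.gif'
```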

In other words if there is no data transfer between the server and the browser – it’s not Ajax.