The Web Paradigm Four Years Later: Where We Started

Everything old is new again.  The advent of Chromium OS, and discussions at work with David, have prompted me to dust off these old posts and revisit my positions, arguments, and examples.  This is the first of a multi-part series, and is intended as a refresher (mostly for myself) on my past posts on the topic.

Long-time readers of my site might remember a four-part series from 2006 entitled “What Does ‘Web Paradigm’ Mean, Anyway?”  In Part 1, I described how the web browser has been struggling – and mostly failing – to replicate the desktop user’s experience.  That’s not what the Web was designed for.

Notice that web applications have always striven to behave more like desktop applications. Since the very beginning, any web application of any complexity has yearned to present a stateful, responsive, user-driven application flow. Sessions, cookies, and Javascript were all created with this in mind. Witness the advent of Ajax as the latest effort in this campaign to make the web more like the desktop. It’s the next logical step in a path that began with the <form> element all those years ago.
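To make that trajectory concrete, here is a minimal sketch of the Ajax pattern described above: the page quietly asks the server for fresh data and patches itself in place, with no full reload. The /api/messages endpoint and the messages element are hypothetical stand-ins, not anything from the original posts.

    // A minimal Ajax round trip: fetch new data from the server and update
    // the page in place, without a full reload -- the stateful, responsive
    // flow the excerpt describes. The endpoint and element id are made up.
    function pollMessages() {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/messages", true);  // asynchronous request
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Patch only the part of the page that changed, desktop-style.
          document.getElementById("messages").innerHTML = xhr.responseText;
        }
      };
      xhr.send(null);
    }

    // Keep the view fresh, much like a desktop client would.
    setInterval(pollMessages, 5000);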

In Part 2, I took the path of the web browser as the canonical application platform to its logical conclusion – complete reinvention of the operating system – and discussed the folly of re-implementing decades of established engineering.

In the short-term, it allows for a quick path to closer web integration. In the long-term, however, it leads you down a slippery slope, raising the question, “How far do you take it?” An IRC plugin may as well connect to AIM, Y!, and MSN. It’s only a short step from a peer-to-peer application to an Apache extension. After that, why not integrate file-level security and access control into your browser? With all of these different plugins, though, we can’t allow one malicious or buggy extension to monopolize the browser’s resources; so we’ll need to write a fair preemptive scheduler and memory manager.

In Part 3, I charted a path that allows us to combine the Web’s strengths with the Desktop’s strengths.

There must be more effective ways to use the web in our daily lives than Firefox and Internet Explorer, right? We’ve taken pretty good advantage of the hyperlink, but can we finally take full advantage of the common languages and ubiquitous protocols, the final two things the web offers to us?

Finally, in Part 4, I gave some examples of how that might work, and what kind of platform and tools we need to make it happen.

We need a platform, independent of any particular application, that makes obtaining, parsing, transforming, aggregating, and forwarding XML as natural as displaying a window. Web integration needs to be a first-class citizen.

Projects like XULRunner are very strong steps in that direction. Microsoft is getting awfully close with .NET 2.0, and maybe we’ll see the first real platform in this space with Vista and WinFX. Finally, like Firefox, the ideas behind Smart Clients and Java Web Start are a good intermediate step, but they are not the future.
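As a rough illustration of what “obtaining, parsing, transforming, aggregating, and forwarding” XML looks like when you have nothing but the browser’s own primitives, here is a small sketch that pulls titles out of a feed. The feed URL is hypothetical, and cross-origin restrictions and error handling are glossed over – which is rather the point: none of this is a first-class citizen yet.

    // A rough sketch of the obtain/parse/transform/aggregate/forward pipeline
    // the platform should make trivial. The feed URL is a placeholder; a real
    // implementation would need cross-origin access and error handling.
    function aggregateTitles(feedUrl, onDone) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", feedUrl, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        // Obtain and parse: responseXML gives us a DOM document for the feed.
        var doc = xhr.responseXML;
        var items = doc ? doc.getElementsByTagName("item") : [];
        var titles = [];
        // Transform and aggregate: keep only the pieces we care about.
        for (var i = 0; i < items.length; i++) {
          var title = items[i].getElementsByTagName("title")[0];
          if (title) titles.push(title.textContent);
        }
        onDone(titles); // Forward the result to whatever wants it next.
      };
      xhr.send(null);
    }

    aggregateTitles("http://example.com/feed.xml", function (titles) {
      console.log(titles.join("\n"));
    });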

So, four years later, how are we doing?  We’ve made some bold steps in the right direction, but we have yet to fully harness the potential of the web in our daily experience.  In my next post, I’ll talk about the limitations of the status quo, and the model we should strive to emulate.