What Does “Web Paradigm” Mean, Anyway? (Part 4)

First of all, sorry for the delay in posting this last part. I’ve had the great majority of it written for some time, but - as Faulkner once wrote - life intervened. I know there has been a trackback to part 3, and I have intentionally forgone reading it because I want to finish my current thoughts encumbered neither by argument nor counter-argument. So, to continue:

In the previous entry, we changed our definition of the web from Web 1.0 to Web 2.0. As over-hyped and sullied a buzzphrase as it is, it is difficult to deny the newfound power we now wield, wrought by the evolved technologies of XML, HTTP, and hyperlinks. The web has finally been freed from the shackles of mere HTML, but the original question was left unanswered: how can we seamlessly bind the web into our everyday workflow?

Ideally, we want to take our own personal work and information, correlate it with whatever is available on the web, and then selectively feed our own content back into that web. How do we do that? The solution is simpler than one might think, and we have already seen the first steps towards this future. Look at OS X Spotlight. The search box in the corner allows keyword searching of content on the local computer, including program names, document content and metadata, help files, and even source code files. In the simplest form, imagine that Spotlight would also automatically google your search terms, displaying the web results in real time alongside the local results. Imagine further that it could correlate the search results found on the web with those found on your computer, showing you related concepts side by side regardless of their digital location.
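To make the idea concrete, here is a toy sketch of that unified search. The local_search and web_search functions are stand-ins I invented for whatever Spotlight and Google would actually expose; the only interesting part is the merge step, which ranks results without caring where they live.

```python
# Toy sketch: run a local search and a web search for the same terms,
# then interleave the results by a shared relevance score. Both
# providers below are hypothetical stand-ins, not real APIs.

from dataclasses import dataclass

@dataclass
class Result:
    title: str
    location: str   # "local" or "web"
    score: float    # provider-reported relevance, normalized to 0..1

def local_search(terms: str) -> list[Result]:
    # Stand-in for a Spotlight-style query of the local index.
    return [Result("Thesis: Cloud Formation.doc", "local", 0.92)]

def web_search(terms: str) -> list[Result]:
    # Stand-in for a web search issued with the same terms.
    return [Result("Weather widget for your Dock", "web", 0.88),
            Result("Intro to meteorology", "web", 0.61)]

def unified_search(terms: str) -> list[Result]:
    # Merge both sources into one list ranked purely by relevance,
    # so the user never has to care where a result actually lives.
    results = local_search(terms) + web_search(terms)
    return sorted(results, key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    for r in unified_search("weather"):
        print(f"[{r.location}] {r.title} ({r.score:.2f})")
```

The indifference of that merge step to location is the whole point.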

Imagine typing “weather” and finding your thesis paper on cloud formation on your hard drive next to an option to install a weather widget in your dock - a widget dynamically discovered by Google according to a (theoretical) Widget Description Markup Language. This is the new Web Paradigm - relevant information available automatically regardless of its location - and it is so much more than just a search box.
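For what it’s worth, here is a guess at what a minimal WDML entry might look like and how a search client could read it. The element names and namespace are pure invention on my part - no such format exists today.

```python
# A made-up "Widget Description Markup Language" entry and a reader for
# it. The schema is invented purely for illustration.

import xml.etree.ElementTree as ET

WDML_SAMPLE = """
<widget xmlns="urn:example:wdml">
  <name>Weather</name>
  <keywords>weather forecast temperature</keywords>
  <download href="http://example.com/widgets/weather.zip"/>
</widget>
"""

def describe(widget_xml: str) -> dict:
    # Pull out the fields a search client would need to offer an
    # "install this widget" result next to local matches.
    ns = {"w": "urn:example:wdml"}
    root = ET.fromstring(widget_xml)
    return {
        "name": root.findtext("w:name", namespaces=ns),
        "keywords": root.findtext("w:keywords", namespaces=ns).split(),
        "download": root.find("w:download", namespaces=ns).get("href"),
    }

if __name__ == "__main__":
    print(describe(WDML_SAMPLE))
```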

Imagine a word processor - Microsoft Word or OpenOffice Writer or even EMACS. As you write, local analysis of your text will determine the topic of your work and automatically pre-cache relevant resources for fast access. If I’m writing about cloud formation, then a hypothetical CTRL+R shortcut might bring up a “related information” sidebar, containing all my personal work on cloud formation along with other articles on the subject from around the Internet.
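A rough sketch of how an editor might do this, assuming a made-up fetch_related hook standing in for the combined local-and-web query:

```python
# Sketch of the pre-caching idea: pull the most frequent meaningful
# words out of the current document, then prefetch related material off
# the UI thread. fetch_related() is a placeholder for whatever
# local-plus-web query the editor would really issue.

import re
import threading
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "on"}

def topic_keywords(text: str, n: int = 3) -> list[str]:
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS and len(w) > 3]
    return [w for w, _ in Counter(words).most_common(n)]

def fetch_related(keywords: list[str]) -> None:
    # Placeholder: query the local index and the web, then cache the
    # results so the CTRL+R sidebar can appear instantly.
    print("pre-caching resources for:", ", ".join(keywords))

def on_pause_in_typing(document_text: str) -> threading.Thread:
    # Run the prefetch in the background so typing never stalls.
    worker = threading.Thread(target=fetch_related,
                              args=(topic_keywords(document_text),))
    worker.start()
    return worker

if __name__ == "__main__":
    on_pause_in_typing("Cumulus cloud formation depends on convection, "
                       "humidity, and condensation around nuclei.").join()
```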

Integration of the web and the desktop will be most effective in existing, highly specific interfaces tailored to a particular task, with the web accessed transparently in the background. You will neither know nor care that Web 2.0 technology gathered the results to be displayed - you will just care that they are relevant and accessible. This is the new Web Paradigm.

Of course, Web 1.0 sites as we know them will not go away. They will always provide a base-level method for accessing content and services. It is cheap and easy to put together a web page, so it will always be a common approach, and thus the web browser will never go away. Especially during the transition period, hybrids of Web 1.0 and Web 2.0 will become increasingly common.

As a starting point, witness the PageRank Status Extension for Firefox. It’s a little widget that sits in the status bar; whenever I visit a page, it queries Google via a simple REST-based web service for the rank of that page. Thus, I can instantly gauge the popularity of every page I visit.
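The mechanics are almost trivially simple - roughly something like the sketch below, though I am inventing the endpoint and response format rather than describing Google’s actual protocol.

```python
# Sketch of a status-bar rank lookup. The service URL and the bare-integer
# response are hypothetical, not Google's real interface.

from urllib.parse import urlencode
from urllib.request import urlopen

RANK_SERVICE = "http://rank.example.com/lookup"   # hypothetical endpoint

def page_rank(url: str) -> int | None:
    query = urlencode({"url": url})
    try:
        with urlopen(f"{RANK_SERVICE}?{query}", timeout=5) as response:
            # Pretend the service answers with a bare integer, e.g. "6".
            return int(response.read().strip())
    except (OSError, ValueError):
        return None   # no rank available; the widget simply shows nothing

if __name__ == "__main__":
    print(page_rank("http://www.mozilla.org/"))
```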

Now take it a step further to the AllPeers peer-to-peer client. Imagine that it is running in the background, sharing files with your friends and family and downloading theirs in return. It is its own process, which is very important for security and stability purposes, but its interface might very well be tightly integrated into Firefox. As you browse to a family member’s files in Firefox, a series of sidebars, status bars, and extensions will let you drag files from their shares onto your desktop. Or perhaps it will dynamically pull up album art for the MP3s they have shared.

So why even bother with the intermediate step? The issue preventing full-on integration right now is that the only platform for building common user interfaces with natural, high-level access to web technology is the browser. As I mentioned in part 2, “Since the web browser is the central experience required for today’s web, we can attain better functionality by moving our programs closer to the core.” We need a platform, independent of any particular application, that makes obtaining, parsing, transforming, aggregating, and forwarding XML as natural as displaying a window. Web integration needs to be a first-class citizen.
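What would that feel like? Something on the order of the sketch below, where a handful of calls parse, transform, and aggregate XML the way a GUI toolkit opens a window in one call. The feed data is inlined so the example stands alone; a real platform would fetch it over HTTP just as effortlessly.

```python
# Sketch of the primitives such a platform might expose: parse several
# XML sources, transform them into what the UI cares about, and
# aggregate them into one stream, all in a few calls.

import xml.etree.ElementTree as ET

FEED_A = "<feed><item><title>Cloud formation basics</title></item></feed>"
FEED_B = "<feed><item><title>Convection and humidity</title></item></feed>"

def parse(xml_text: str) -> ET.Element:
    return ET.fromstring(xml_text)

def transform(feed: ET.Element) -> list[str]:
    # Reduce a feed to the bits a sidebar would display.
    return [item.findtext("title") for item in feed.iter("item")]

def aggregate(*feeds: str) -> list[str]:
    # Pull several sources into one stream, ready to hand to the UI.
    titles: list[str] = []
    for feed in feeds:
        titles.extend(transform(parse(feed)))
    return titles

if __name__ == "__main__":
    for title in aggregate(FEED_A, FEED_B):
        print("-", title)
```

The details do not matter; what matters is that none of it should require dropping down to sockets and string munging.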

Projects like XULRunner are very strong steps in that direction. Microsoft is getting awfully close with .NET 2.0, and maybe we’ll see the first real platform in this space with Vista and WinFX. Finally, like Firefox itself, the ideas behind Smart Clients and Java Web Start are good intermediate steps, but they are not the future.

So what does “web paradigm” mean? It’s not HTML and web browsers, nor is it even XML and HTTP. The web paradigm is one of world-wide interconnected information woven into our daily lives without our knowledge, built on high-level, web-oriented platforms.

Such a world is too big for any mere web browser.

Fin.