Life and code.
  • The Web Paradigm Four Years Later: King Browser

    Posted on December 4th, 2009 Brian No comments

    In my first post in this series, we took a quick look at where we were at the start of 2006 in re-defining the Web, and then asked, “How are we doing?  Have we made progress during the intervening fourteen-hundred days?”  The answer is, “Yes, we’ve made a lot of progress on the web, but we have yet to take the big leap.  And we are in danger of taking some serious steps backwards.”

    (Too Bad) The Browser Is Still King

    When you think of the applications with the most impact on people’s day-to-day lives, chances are many of them will start with the letter ‘G’.  Google has done an amazing job pushing the limits of what applications in a browser can do.  They have pioneered new frontiers in web standards, compatibility, scripting, and browser user-interface capabilities.  All of this has taken place inside of a web browser which is essentially unchanged since its inception.  The browser is still king of the Web.

    And yet, for all the advances in web user interfaces, they still suck.  Take Google Calendar, for example: The CSS-styled interface is flat and ugly.  Your options for different views are severely limited.  Right-clicking brings up a context menu devoid of any calendar-specific context.  Printing is a crapshoot, at best.  When you have a meeting in five minutes, Google Calendar can’t interface with your desktop to provide a nice notification (like Growl or Windows’ balloon tips); instead you get an ugly JavaScript pop-up and the default system sound.  (And that’s not to mention more complicated issues like process isolation, window sizes, and task switching!)

    Need I go on?

    Where Does Value on the Web Come From?

    So why bother to use a web application like Google Calendar at all?  It’s certainly not because we like the poor interface or lackluster usability.  Rather, the value comes from the accessibility of the important information it contains.  Who gives a damn about a fancy calendar interface if it forgets your wife’s birthday!  What’s more, we want access to our data.  We want it to be available and accessible when we need it, in a format most appropriate for the access mechanism.  Whether we’re scheduling our next haircut on the iPhone, planning a trip home on our PC, or booking a meeting room at work, it has to be accessible any place and any time.  A calendar in the cloud does that.

    And it is easily shared with people you know and other systems you use.  Metcalfe’s Law predicts that the value of our individual applications grows with the square of the number of people and systems we can share with.  The accessibility of the information gives a crappy interface connected to the web greater value than a fantastic – but lonely – user interface.
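    Metcalfe’s intuition is easy to make concrete: with n connected participants there are n(n-1)/2 possible pairwise connections, so doubling the participants roughly quadruples the potential for sharing.  A few lines of Python (purely illustrative, not from the original post):

    ```python
    def metcalfe_value(n: int) -> int:
        """Number of distinct pairwise connections among n participants,
        the quantity Metcalfe's Law takes as a proxy for network value."""
        return n * (n - 1) // 2

    # Doubling the participants roughly quadruples the connections:
    print(metcalfe_value(10))  # 45
    print(metcalfe_value(20))  # 190
    ```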

    If we do value the connectedness of our data more than the interface in which it’s presented, then Google’s success with products like Docs, GMail, and Calendar is easily explained.  That their interfaces happened to suck less than those of competing web applications merely gave them the leg up needed to take the majority of the market.  So far.

    Having And Eating Our Cake

    Twitter shows us the future of the Web.  The user interface on Twitter’s home page is as technologically up-to-date as any of Google’s applications: it’s a full-on CSS-styled, HTML-structured, JavaScript-driven, AJAX-enhanced web application.  And it looks just as lackluster as GMail or Google Calendar.  But Twitter isn’t about HTML and CSS – it’s about data and the APIs to access and manipulate it.

    More than 70% of users on Twitter post from third-party applications that aren’t controlled by Twitter.  Some of those applications are other services – sites like TwitterFeed that syndicate information pulled from other places on the web (this blog included).  Others are robots like JackBot, my Java IRC bot, which tweets the topics of conversation for a channel I frequent.
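    What made that third-party ecosystem possible was how little the API demanded: posting a tweet was one authenticated HTTP POST.  A minimal sketch (the endpoint and Basic-auth scheme below are the ones Twitter documented circa 2009; the credentials are made up, and the modern API uses OAuth and different URLs):

    ```python
    import base64
    import urllib.parse
    import urllib.request

    def build_update_request(username, password, status):
        """Build (but don't send) the 2009-era status-update request."""
        body = urllib.parse.urlencode({"status": status}).encode("ascii")
        req = urllib.request.Request(
            "http://twitter.com/statuses/update.json", data=body)
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + token)
        return req

    req = build_update_request("jackbot", "secret", "Topic: web paradigms")
    print(req.get_method())  # POST
    ```

    Everything a client needed – identity, payload, endpoint – fit in one request, which is why everything from IRC bots to desktop clients could speak it.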

    Most, however, are specialized user interfaces, designed for humans to read, write, respond, dm, link, post pictures, and otherwise poke at their Twitter accounts.  Each one is unique, and each one has specific features that particular users find the most useful for their purposes.  Clients like TweetDeck target the power-tweeter with multiple columns and advanced features for multiple accounts.  Other clients, like Tweetie, aim to provide a full-featured interface within the limits of a mobile device.  Still other clients, like Blu (my personal choice), are full of fancy graphics and animations.

    These applications successfully meld the web and the desktop.  They harness the value of Web-connected data while delivering rich, interactive experiences.  And it’s not just flash and bling.  By leveraging their platform’s capabilities, each application can be tailored to the needs of its users, making it possible for each person to extract the most value from their data.

    So if Twitter is the model for how Web applications should be written, then why aren’t we there yet?  In the next post, I’ll discuss why we’re so far behind, and why I see Chromium OS as a step in the wrong direction for web-centric applications.

  • The Web Paradigm Four Years Later: Where We Started

    Posted on December 2nd, 2009 Brian 1 comment

    Everything old is new again.  The advent of Chromium OS, and discussions at work with David, have prompted me to dust off these old posts and revisit my positions, arguments, and examples.  This is the first of a multi-part series, and is intended as a refresher (mostly for myself) on my past posts on the topic.

    Long-time consumers of my site might remember a four-part series from 2006 entitled “What Does ‘Web Paradigm’ Mean, Anyway?”  In Part 1, I described how the web browser has been struggling – and mostly failing – to replicate the desktop user’s experience.  That’s not what the Web was designed for.

    Notice that web applications have always striven to behave more like desktop applications. Since the very beginning, any web application of any complexity yearned to present a stateful, responsive, user-driven application flow. Sessions, cookies, and JavaScript were all created with this in mind. Witness the advent of Ajax as the latest effort in this campaign to make the web more like the desktop. It’s the next logical step in a path that began with the <form> element all those years ago.

    In Part 2, I took the path of the web browser as the canonical application platform to its logical conclusion – complete reinvention of the operating system – and discussed the folly of re-implementing decades of established engineering.

    In the short-term, it allows for a quick path to closer web integration. In the long-term, however, it leads you down a slippery slope, raising the question, “How far do you take it?” An IRC plugin may as well connect to AIM, Y!, and MSN. It’s only a short step from a peer-to-peer application to an Apache extension. After that, why not integrate file-level security and access control into your browser? With all of these different plugins, though, we can’t allow one malicious or buggy extension to monopolize the browser’s resources; so we’ll need to write a fair preemptive scheduler and memory manager.

    In Part 3, I charted a path that allows us to combine the Web’s strengths with the Desktop’s strengths.

    There must be more effective ways to use the web in our daily lives than Firefox and Internet Explorer, right? We’ve taken pretty good advantage of the hyperlink, but can we finally take full advantage of the common languages and ubiquitous protocols, the final two things the web offers to us?

    Finally, in Part 4, I gave some examples of how that might work, and what kind of platform and tools we need to make it happen.

    We need a platform, independent of any particular application, that makes obtaining, parsing, transforming, aggregating, and forwarding XML as natural as displaying a window. Web integration needs to be a first-class citizen.
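    To make the ask concrete: “as natural as displaying a window” means feed aggregation should take a handful of lines, not a parsing project.  A sketch of that primitive using only the standard library (the feed snippets are invented for illustration):

    ```python
    import xml.etree.ElementTree as ET

    # Two made-up RSS-style feed documents, as a platform might fetch them.
    feeds = [
        "<rss><channel><item><title>Post A</title></item></channel></rss>",
        "<rss><channel><item><title>Post B</title></item>"
        "<item><title>Post C</title></item></channel></rss>",
    ]

    def aggregate_titles(xml_docs):
        """Parse each feed and merge the item titles into one list."""
        titles = []
        for doc in xml_docs:
            root = ET.fromstring(doc)
            titles.extend(t.text for t in root.iter("title"))
        return titles

    print(aggregate_titles(feeds))  # ['Post A', 'Post B', 'Post C']
    ```

    The point of the quoted passage is that obtaining, transforming, and forwarding such data should be equally effortless, and built into the platform rather than reinvented by every application.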

    Projects like XULRunner are very strong steps in that direction. Microsoft is getting awfully close with .NET 2.0, and maybe we’ll see the first real platform in this space with Vista and WinFX. Finally, the ideas behind Smart Clients and Java Web Start, like Firefox itself, are a good intermediate step, but they are not the future.

    So, four years later, how are we doing?  We’ve made some bold steps in the right direction, but we have yet to fully harness the potential of the web in our daily experience.  In my next post, I’ll talk about the limitations of the status quo, and the model we should strive to emulate.