Life and code.
  • The Real Reason We’re Disappointed About Google Acquiring Nest

    Posted on January 16th, 2014 Brian No comments

    Leave aside the privacy concerns for a moment. The issues there are real enough: The non-answers in their privacy statement leave enough room to build a data center in, and both the words and the actions of the leadership at Google can leave no doubt that they intend to push the boundary of what is creepy in pursuit of more, better data about you. They need it to target you with ads.

    But that isn’t enough for disappointment. Paranoia and cynicism? Sure. But not disappointment. So why did the Internet react so strongly at the news?

    Nest is in the thermostat and smoke detector business. They make money in one way: Selling you thermostats and smoke detectors. The economic incentives drive them to make better thermostats and smoke detectors. The best damn smoke detectors and thermostats we have ever seen, in fact. So good that we were willing to pay a 10x premium for what is normally a $15 forgotten-about white box. One could only imagine all the cool things that were coming – and Nest had every reason to keep making them better: To sell us more thermostats and smoke detectors.

    Google is in the advertising business. They make money in one way: Selling advertisements to be shown to you. The economic incentives drive everything in their business to that end. Everything else is secondary, and must at least provide some benefit to their primary business. Android is about controlling a mobile platform so they can mine it for data and show you ads. Gmail is about mining your email for context and showing you ads. Web search is about understanding you from your web activity and showing you ads based on it. And Google has every reason to find new ways to learn more about you: To sell more advertisements.

    And there’s the rub: All of us could see the potential of Nest to make our lives better in real, concrete, visceral ways; and they had the economic incentives to make it happen. Under Google, the economic incentives have changed, and with them the goal. Instead of making great products, the goal is now to push as close to the Creepy Line as possible. It won’t start out that way, but it will end up that way. You can’t fight economics.

    The products will continue to improve, of course. A secondary goal must be to make them good enough to compete in the market, but it will always be secondary. It’s easy to see how much potential has been lost, and that is ever so disappointing.

    “The best minds of my generation are thinking about how to make people click ads. That sucks.” – Jeff Hammerbacher

  • How Twitter Ruined Their API And What They Can Do To Fix It

    Posted on December 3rd, 2012 Brian No comments

    During the question and answer section of the panel I recently spoke on at DCWeek 2012, one questioner asked the panel to describe an API that had “disappointed” us at some point. I replied: Twitter. Though he was angling for technical reasons  – poor design, bad documentation, or insufficient abstraction – I had different reasons in mind.

    Twitter’s Successful API

    Twitter’s primary API is without a doubt a hallmark of good design. It is singularly focused on one particular task: broadcasting small messages from one account to all following accounts as close to real-time as possible. That extreme focus led to simplicity, and that simplicity made it easy for developers to code and test their applications. Interactions with the API’s abstraction are straightforward and often self-documenting. Coupled with Twitter’s excellent documentation and sense of community, this meant that in the early years developers were free to explore and experiment, leading to a plethora of interesting – and sometimes terrible – Twitter clients (including my own Java IRC bot JackBot).
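    To illustrate just how little ceremony the API demanded, here is a rough sketch of posting a status against the 1.1 REST API. It assumes the third-party requests and requests-oauthlib packages and uses placeholder credentials; treat it as an illustration of the shape of the API rather than a production-ready client.

    ```python
    # A minimal sketch, not an official example: post a tweet via the 1.1 REST API.
    import requests
    from requests_oauthlib import OAuth1

    # Placeholder credentials -- you would get real ones by registering an app.
    auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                  "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    resp = requests.post("https://api.twitter.com/1.1/statuses/update.json",
                         data={"status": "Hello from a third-party client!"},
                         auth=auth)
    resp.raise_for_status()
    print(resp.json()["id_str"])  # the id of the freshly created tweet
    ```

    Everything interesting happens in a single authenticated HTTP call – and that simplicity is exactly what let hobbyists build clients and bots in an afternoon.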

    Coincidentally, the explosion of smart phones, social networking, and always-on Internet connectivity meant Twitter’s raison d’être was also a means to explosive growth. The Fail Whale was an all-too-familiar sight during those early growing pains, but the same focus and simplicity that made it an easy API for developers to use also made it possible for Twitter to dramatically improve the implementation. Today, Twitter serves over 300 million messages daily – up several orders of magnitude from when I joined – yet our favorite marine mammal surfaces rarely.

    Business Decisions

    Twitter’s early business model is a familiar story. A cool idea formed the basis of a company, funded by venture capital and outside investment, with little thought given to how to turn a profit. Seeing itself in competition with the already-huge Facebook, the company’s only real concern was growing the user base. For many years, Twitter continued to foster its community: In a symbiotic relationship with developers and users – who were often the same – Twitter expanded and modified the API, improved the implementation, and actively encouraged new developers to explore new and different ways of interacting with the Twitter systems. So important was this relationship that the term “tweet”, the concept of the retweet, and even Twitter’s trademarked blue bird logo all originated with third parties.

    But the good times can’t roll forever; eventually the investors wanted a return, and the company began seeking a way to make money. Since Twitter saw itself as a social network, advertising was the obvious choice. But there was a problem: the company’s own policy and openness had made advertising difficult to implement. Here’s what I wrote in December 2009:

    Twitter shows us the future of the Web. The user interface on Twitter’s home page is as technologically up-to-date as any of Google’s applications: it’s a full-on CSS-styled, HTML-structured, JavaScript-driven, AJAX-enhanced web application. And it looks just as lackluster as GMail or Google Calendar.  But Twitter isn’t about HTML and CSS – it’s about data and the APIs to access and manipulate it.

    More than 70% of users on Twitter post from third-party applications that aren’t controlled by Twitter. Some of those applications are other services – sites like TwitterFeed that syndicate information pulled from other places on the web (this blog, included). Others are robots like JackBot, my Java IRC bot which tweets the topics of conversation for a channel I frequent.

    Advertisers purchase users’ attention, and if you can’t guarantee that attention, you can’t sell ads. But what third-party client is going to show ads on behalf of Twitter? Users – particularly the developers creating those third-party apps – don’t want to see ads if they can avoid it. You won’t make much money selling ads to only 30% of your users (who are also likely the least savvy 30%). What’s a little blue birdie to do?

    The chosen path was to limit – and perhaps eliminate entirely – third-party clients. The recent 100,000-user-token cap on client applications is an obvious technological step, and they are already completely cutting off access for some developers. Additionally, where technological restrictions are difficult, changes to the terms of service have placed legal restrictions on how clients may interact with the API, display tweets, and even how they may interact with other similar services. (Twitter clients are not allowed to “intermingle” messages from Twitter’s API with messages from other services.) It seems likely that the screws will continue to tighten.

    A Way Forward: Get On The Bus

    Twitter has built the first ubiquitous, Internet-wide, scalable pub-sub messaging bus. Today that bus primarily carries human-language messages from person to person, but there are no technical limitations preventing its broader use. The system could be enhanced and expanded to provide additional features – security, reliability, burstiness, message volume, and follower counts, to name just a few – and Twitter could charge companies for access to those features. Industrial control and just-in-time manufacturing, stock quotes and financial data, and broadcast and media conglomerates would all benefit from a general-purpose, simple message-exchange API.
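    To make the idea concrete, here is a toy, in-process sketch of the core abstraction: publishers push small messages to a topic, and every subscriber to that topic receives them immediately. The class, the topic names, and the message format are all hypothetical – a real service would layer authentication, persistence, distribution, and delivery guarantees on top.

    ```python
    from collections import defaultdict
    from typing import Callable

    class MessageBus:
        """A toy topic-based pub-sub bus (hypothetical API, for illustration only)."""

        def __init__(self) -> None:
            # topic name -> list of subscriber callbacks
            self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
            """Register a callback to receive every message published to a topic."""
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, message: str) -> None:
            """Deliver a message to every current subscriber of the topic."""
            for handler in self._subscribers[topic]:
                handler(message)

    # Example: a stock-quote feed, one of the use cases mentioned above.
    bus = MessageBus()
    bus.subscribe("quotes.AAPL", lambda msg: print("tick:", msg))
    bus.publish("quotes.AAPL", "527.68 +1.2%")
    ```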

    Such a generalized service would be far more useful to the world at large than just another mechanism for shoving ads in my face, and I would bet that the potential profits from becoming the de facto worldwide messaging bus would dwarf even the wildest projections for ad revenues. It wouldn’t be easy: highly available, super-scalable systems are fraught with difficulty – just ask Amazon – but Twitter is closer to it than anyone else, and their lead and mindshare would give them a huge network-effect advantage in the marketplace.

    With this new model replacing the advertising money, third-party clients would no longer be an existential threat. Twitter could remove the pillow from the face of their ecosystem and breathe new life back into their slowly-suffocating community.

    Will they take this path? I doubt it. The company’s actions in the past several months clearly telegraph their intentions. Twitter’s API teaches us an important lesson: no matter how well designed, documented, and supported a platform is, there will always be people behind it making business decisions. Those decisions can affect the usability of the API just as deeply as bad design, and often much more suddenly. Caveat programmer!

    Busses from per•spec•tive by Alan Smythee under a CC Attribution 2.0 Generic license.

  • The One API You Should Know

    Posted on November 10th, 2012 Brian No comments

    I had the opportunity to speak on a panel at DCWeek 2012 this past week: “Five Crucial APIs to Know About”. (I am not listed on the speakers page, as I was a rather last-minute addition.) Conversation ranged from what goes into making a good API – dogfooding, documentation, focus – to pitfalls to be aware of when building your business on an external API. It was a fun and informative discussion, and I walked away with plenty to chew on.

    An API is all about two things: Abstraction and Interaction. It takes something messy, abstracts away some of the details, and then you, as a programmer, interact with that abstraction. That interaction causes the underlying code to do something (and hopefully makes your life easier). If you interact with it differently, you’ll get different results. Understanding an API, then, requires understanding both the abstraction and how you are meant to interact with it.

    Now, DCWeek focuses primarily on the startup scene. As such, I expected that most of my fellow panelists would be focusing on web-exposed APIs. Sure enough, there was plenty of talk about Facebook, Twilio, Twitter, and a laundry list of other HTTP-accessible APIs. All of which are great! Note, though, that these APIs share one thing in common: they are all network-reliant. As such, they are built on a whole bunch of other APIs, but at the end of the day, they all route through one specific API (or a clone of it): Berkeley Sockets.

    Why should you care about a 30-year-old API when you care about tweets and friends and phone calls? Stop for a moment and think about what those high-level APIs are built on: a network. Worse – the Internet. A series of tubes. Leaky, lossy, variable-bandwidth tubes. And it’s only getting worse – sometimes you’re on a high-bandwidth wifi connection; other times you’re on a crappy, intermittent cellular connection in a subway tunnel.

    The user’s experience with a high-level network API is going to be directly impacted by socket options chosen several layers down – often just by default – but different experiences require different expectations from the network. Do you have a low-latency API that provides immediate, interactive feedback in super-short bursts? Then you might want to learn about Nagle’s algorithm and TCP_NODELAY. Does your app require a user to sit and stare at a throbber while you make a network call? You might want to consider adjusting your connection, send, and receive timeouts to provide more prompt feedback when the network fails.
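    As a concrete example, here is a minimal Python sketch of tuning those options on a plain Berkeley socket. The host name and the three-second timeout are arbitrary placeholders; the point is that both knobs live several layers below your shiny web API.

    ```python
    import socket

    # A latency-sensitive connection: small, bursty request/response exchanges.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Disable Nagle's algorithm so tiny writes go out immediately instead of
    # being coalesced into larger segments.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    # Fail fast: bound connect/send/recv at three seconds so the user is not
    # left staring at a throbber when the network goes sideways.
    sock.settimeout(3.0)

    sock.connect(("api.example.com", 443))  # placeholder host
    ```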

    And believe me: the network will fail. But how do you handle it? As programmers, we tend to focus on the so-called “happy path”, relegating failure handling to second-class status. Unfortunately, treating failure as unlikely is simply not acceptable in a world of ubiquitous networking and web services. Not all network failures are the same, and providing the best user experience requires understanding the difference between the various types of failures in the specific context of what you were attempting to accomplish.
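    For instance, a failed DNS lookup, a timeout, and a refused connection each tell the user something different. Here is a hedged Python sketch – the function, port, and messages are hypothetical – of mapping those low-level failures to distinct outcomes:

    ```python
    import socket

    def fetch(host: str, port: int, payload: bytes) -> bytes:
        """Send a request and return the response, translating common network
        failures into distinct, user-meaningful errors (illustrative only)."""
        try:
            with socket.create_connection((host, port), timeout=3.0) as sock:
                sock.sendall(payload)
                return sock.recv(4096)
        except socket.gaierror:
            # DNS lookup failed -- most likely no connectivity at all.
            raise RuntimeError("You appear to be offline.")
        except socket.timeout:
            # The network is there, but slow or silently dropping packets.
            raise RuntimeError("The server is taking too long; try again.")
        except ConnectionRefusedError:
            # We reached the host, but nothing is listening on that port.
            raise RuntimeError("The service is currently unavailable.")
    ```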

    So take a moment and do some research. If you’re using a networked API that exposes network details, learn about them and tweak them for the specific task at hand. If you’re writing an API, consider how users will be accessing it, and provide them guidance with how to achieve the best possible experience over the network. The people using your apps will thank you.

    I’d like to thank my fellow panelists – Greg Cypes, Sasha Laundy, and Evgeny Popov – for such an interesting discussion, as well as our moderator Matt Hunckler for keeping us on track. I hope we can do it again in the future.

  • The Steve Jobs Book Pissed Me Off

    Posted on February 25th, 2012 Brian No comments

    I have always had a thing for books. I started reading when I was a kid, and I never stopped. I oscillate regularly between fiction and nonfiction, binging for a while on science fiction and epic fantasy before devouring title after title on politics, economics, science, and philosophy. Packing for every trip included at least one hardcover – preferred over paperbacks for their sturdiness and aesthetics on my bookshelf – and sometimes two or three.

    I purchased an iPad in May 2010.

    Since then, every single book I have read has been on the iPad. This wasn’t because I had some idealistic desire to switch to eBooks. It just sort of happened, in retrospect I think because they were both cheaper and just easier to get. I have bought and read nearly two dozen books on my iPad in the past year-and-three-quarters, but I planned on purchasing the hardcover for books that I really wanted on my shelf, that I really thought were special – the books I really wanted to read, the way you only can with a hardcover.

    Ironically, the first book to pass that bar was Walter Isaacson’s biography Steve Jobs.

    I had pre-ordered it, and also received a copy as a gift, so I had two copies sitting on my shelf while I churned through the reading list on my iPad. Finally, a couple of weeks back, I located the large, white cover, pulled it off the shelf, and dove in.

    It immediately pissed me off.

    Not the content of the book, what Walter Isaacson wrote, but the book itself – the actual physical thing. I had had no problems reading from wood pulp and ink for three decades before the iPad, yet suddenly I found myself constantly annoyed by its characteristics. I was shocked at all the annoying things about a paper book that I had never noticed before.

    The paper book is bigger than my iPad, and it’s heavier, too. The book takes up four or five times the room in my backpack. Before, I had sometimes carried around two or three of these things at once! How the hell did I ever do that?

    It’s more cumbersome and awkward, too – a lot more. My iPad is evenly balanced and always the same shape, making it a simple matter to re-orient it, hand it off to someone, or just lean it against something for support. The physical construction of the book – there’s a hinge in the middle! – makes it lopsided most of the time, which makes it difficult to re-orient. And don’t you dare try laying it flat or propping it up without something holding down the pages – you’ll lose your place in no time as it unhelpfully flips pages around.

    What is a book for, if not reading? And yet, though I had never noticed it before, a book on paper is actively hostile to reading. The pages curve inwards towards the binding, distorting and hiding the words. It isn’t a constant thing, either, because the curvature and distortion becomes worse as you get closer to the binding. I regularly find myself physically reorienting the book – made more difficult by its aforementioned cumbersome nature – just to read the words on the page! In contrast, the words on my iPad are always flat and never distorted.

    The final nail in the coffin is the paltry lack of features in the paper book. There is no built-in dictionary, so I can’t just tap a word and check its definition, nor is it possible to search for all occurrences of a particular word or phrase. As a little bonus, if I have a few moments to spare, I can pull out my iPhone and pick up where I left off; my current spot is synchronized automatically. Though they are not essential, I have become used to such niceties.

    Which is not to say that reading on the iPad is perfect. Glare can be an issue, especially if I am outside. A paper book isn’t going to run out of battery any time soon, and it isn’t prone to breaking if I drop it from a height. Nor is it a tempting target for thieves. And as a long-term archival mechanism, or just a pretty thing to show off on a shelf, the paper book wins hands-down.

    But for reading, the actual process of reading: I’m a convert. Give me an eBook.

  • How To Bypass Wikipedia’s Stupid SOPA Blackout

    Posted on January 18th, 2012 Brian No comments

    So if you’ve been to Wikipedia at all today, you have no doubt noticed that instead of your desired web page, you’re instead being shown a big, black page directing you to take action to prevent Congress from passing a really stupid piece of legislation called SOPA. It’s a really bad law which will infringe free speech and basically break the Internet all at the behest of some already-stupidly-wealthy special interests. I definitely encourage you to take action against SOPA if you haven’t already.

    Some of us have work to do, though, and besides being a great resource on Justin Bieber’s hair, Wikipedia also has a plethora of important and useful technical information. Fortunately for us, Wikipedia chose about the most brain-dead way possible to implement their blackout: a script. So, if you would like to bypass their blackout, simply block the following URL using your favorite ad blocker in your browser:

    http://en.wikipedia.org/w/index.php?title=Special:BannerController&cache=/cn.js&303-4


  • Windows 7 Chkdsk Prompt Hangs At 1 Second: How I Fixed It

    Posted on January 5th, 2010 Brian No comments

    A few weeks back, something bad happened to my computer.  I’m not sure what, but my nightly backup reported a failure to run due to corrupted folders.  So I immediately pulled out the toolbox and scheduled a chkdsk for the next reboot.  And then I rebooted.

    When Windows rebooted, I was greeted with the familiar notice, “A disk check has been scheduled.”  As anyone familiar with Windows knows, you then get a ten second countdown to abort the disk check.  I waited (10…9…8…) patiently (7…6…) while it ticked (5…) off (4…) every (3…) excruciating (2…) number (1…), and then … nothing.  I had one second left, permanently.  The computer had frozen, and hitting the any key did nothing.  Hitting CTRL+ALT+DEL did nothing, either, and so I was forced to hard-off the machine.

    On reboot, I received the same prompt, and once again it hung at one second.  I couldn’t even get it to abort the disk check, the very purpose for the countdown prompt!  Woe is me!  I was stuck in a reboot loop.  At this point, I am going to fast forward over the gory details of booting the rescue tools off my install CD, unlocking my encrypted drive, fixing the disk, resetting my TPM state, and all that.  Trust me, it’s for the best.  But in the end, I had a working machine again.

    Last night, and right before bed no less, I encountered the same problem.  Googling led me to the same results I had seen before.  But this time, reading through the top result, there was a new post.  The comment by Tayloradical on December 10 recommended removing all the peripherals, including any SD card.  I have lots of peripherals, including an SD card I use for ReadyBoost.  After a somewhat systematic cycle of decoupling and rebooting, the chkdsk finally kicked off normally after removing the SD card!

    So, if you encounter this problem, try removing your SD card, and maybe some other peripherals as well.

    2010-02-02 update: This is a known issue, and Microsoft has issued a hotfix.  Also, for my machine at least, the fix has been rolled into an update (sp46718) from HP.

  • Living 64-bit: Why Windows Thumbnails Show Up Randomly

    Posted on December 17th, 2009 Brian No comments

    Windows Explorer has had support for showing thumbnails instead of icons for many years now.  Support is built in for many common formats, like images or rich text, but it doesn’t know how to deal with more complicated formats like PDFs and ODTs.  To compensate for this lack of support, there is an extension mechanism that third-party applications can use to teach Explorer how to render thumbnails.  Then, whenever you create or modify a file, Explorer notices that it has changed, recreates the thumbnail, and saves it to a cache file.  This behavior is very evident if you save files on your desktop, since the desktop is an Explorer window that is always visible.

    Unfortunately, in a 64-bit world, this approach often fails for the same reasons that Windows Search Filters often fail.  In 64-bit Windows, Explorer is a 64-bit process, but most third-party application programmers only provide 32-bit extension DLLs.  Since Explorer is unable to load the 32-bit DLLs, it cannot render the thumbnail – and you’re left with an ugly icon.  If you’re living 64-bit, though, you’ve no doubt noticed that many of the thumbnails do get generated, but seemingly at random.  You might have a PDF sitting on your desktop for weeks, and suddenly one day it will switch from an ugly icon to a thumbnail.  What gives?

    The secret is that Explorer is not just a running process for viewing your files and folders.  It also exists as a series of libraries and common controls that third-party applications use to provide common functionality with a familiar interface.  For example, almost all “Save” and “Open” dialogs either use or extend the built-in Windows versions, and those Windows versions use the same libraries as Windows Explorer to hoist some of that familiar functionality into the applications.  In a very real sense, Explorer is being embedded in these third-party applications.

    But remember: These third-party applications are 32-bit.  That means that there are 32-bit versions of the Explorer libraries hanging around Windows, in case these applications need to use them.  So when a 32-bit application opens a “Save” dialog and you navigate to a folder, you’re essentially pointing a 32-bit version of Explorer at that folder.  As usual, Explorer notices there are thumbnails that have not been generated, but now it can properly load the 32-bit third-party thumbnail extensions.  It renders the thumbnails and writes them out to the cache file.

    Surprise!  Thumbnails for a file you weren’t even working on have suddenly been updated.  It’s not random at all, but because you were working on a different file, it just seems random.
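    As an aside, if you are curious which flavor a particular extension DLL is, the PE headers will tell you directly. Here is a minimal Python sketch – the path is hypothetical, and the machine-type constants come from the PE/COFF specification:

    ```python
    import struct

    # Machine-type constants from the PE/COFF specification.
    IMAGE_FILE_MACHINE_I386  = 0x014C   # 32-bit x86
    IMAGE_FILE_MACHINE_AMD64 = 0x8664   # 64-bit x64

    def dll_bitness(path: str) -> str:
        """Read the PE headers of a DLL/EXE and report whether it is 32- or 64-bit."""
        with open(path, "rb") as f:
            if f.read(2) != b"MZ":
                raise ValueError("not a PE file")
            f.seek(0x3C)                                  # e_lfanew: offset of the PE header
            (pe_offset,) = struct.unpack("<I", f.read(4))
            f.seek(pe_offset)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("missing PE signature")
            (machine,) = struct.unpack("<H", f.read(2))   # IMAGE_FILE_HEADER.Machine
        return {IMAGE_FILE_MACHINE_I386: "32-bit",
                IMAGE_FILE_MACHINE_AMD64: "64-bit"}.get(machine, hex(machine))

    # Hypothetical path: a "32-bit" result means 64-bit Explorer cannot load it directly.
    print(dll_bitness(r"C:\Program Files (x86)\SomeApp\ThumbProvider.dll"))
    ```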

  • The Web Paradigm Four Years Later: King Browser

    Posted on December 4th, 2009 Brian No comments

    In my first post in this series, we took a quick look at where we were at the start of 2006 in re-defining the Web, and then asked, “How are we doing?  Have we made progress during the intervening fourteen-hundred days?”  The answer is, “Yes, we’ve made a lot of progress on the web, but we have yet to take the big leap.  And we are in danger of taking some serious steps backwards.”

    (Too Bad) The Browser Is Still King

    When you think of the applications with the most impact on people’s day-to-day lives, chances are many of them will start with the letter ‘G’.  Google has done an amazing job pushing the limits of what applications in a browser can do.  They have pioneered new frontiers in web standards, compatibility, scripting, and browser user-interface capabilities.  All of this has taken place inside of a web browser which is essentially unchanged since its inception.  The browser is still king of the Web.

    And yet, for all the advances in web user interfaces, they still suck.  Take Google Calendar, for example: The CSS-styled interface is flat and ugly.  Your options for different views are severely limited.  Right-clicking brings up a context menu devoid of any calendar-specific context.  Printing is a crapshoot, at best.  When you have a meeting in five minutes, Google Calendar can’t interface with your desktop to provide a nice notification (like Growl or the Balloons); instead you get an ugly JavaScript pop-up and the default system sound.  (And that’s not to mention more complicated issues like process isolation and window sizes and task switching!)

    Need I go on?

    Where Does Value on the Web Come From?

    So why bother to use a web application like Google Calendar at all?  It’s certainly not because we like the poor interface or lackluster usability.  Rather, the value comes from the accessibility of the important information it contains.  Who gives a damn about a fancy calendar interface if it forgets your wife’s birthday!  What’s more, we want access to our data.  We want it to be available and accessible when we need it, in a format most appropriate for the access mechanism.  Whether we’re scheduling our next haircut on the iPhone, planning a trip home on our PC, or booking a meeting room at work, it has to be accessible any place and any time. A calendar in the cloud does that.

    And it is easily shared with people you know and other systems you use.  Metcalfe’s Law predicts that the value of our individual applications grows with the square of the number of people and systems we can share with.  The accessibility of the information gives a crappy interface connected to the web greater value than a fantastic – but lonely – user interface.

    If we do value the connectedness of our data more than the interface in which it’s presented, then Google’s success with products like Docs, GMail, and Calendar is easily explained.  That their interfaces happened to suck less than competing web applications merely gave them the leg up needed to take the majority of the market.  So far.

    Having And Eating Our Cake

    Twitter shows us the future of the Web.  The user interface on Twitter’s home page is as technologically up-to-date as any of Google’s applications: it’s a full-on CSS-styled, HTML-structured, JavaScript-driven, AJAX-enhanced web application.  And it looks just as lackluster as GMail or Google Calendar.  But Twitter isn’t about HTML and CSS – it’s about data and the APIs to access and manipulate it.

    More than 70% of users on Twitter post from third-party applications that aren’t controlled by Twitter. Some of those applications are other services – sites like TwitterFeed that syndicate information pulled from other places on the web (this blog, included).  Others are robots like JackBot, my Java IRC bot which tweets the topics of conversation for a channel I frequent.

    Most, however, are specialized user interfaces, designed for humans to read, write, respond, dm, link, post pictures, and otherwise poke at their Twitter accounts.  Each one is unique, and each one has specific features that particular users find the most useful for their purposes.  Clients like TweetDeck target the power-tweeter with multiple columns and advanced features for multiple accounts.  Other clients, like Tweetie, aim to provide a full-featured interface within the limits of a mobile device.  Still other clients, like Blu (my personal choice), are full of fancy graphics and animations.

    These applications successfully meld the web and the desktop.  They harness the value of Web-connected data within rich, interactive experiences.  And it’s not just flash and bling.  By leveraging their platform’s capabilities, each application can be tailored to the needs of its users, making it possible for each person to extract the most value from their data.

    So if Twitter is the model for how Web applications should be written, then why aren’t we there yet?  In the next post, I’ll discuss why we’re so far behind, and why I see Chromium OS as a step in the wrong direction for web-centric applications.

  • The Web Paradigm Four Years Later: Where We Started

    Posted on December 2nd, 2009 Brian 1 comment

    Everything old is new again.  The advent of Chromium OS, and discussions at work with David, have prompted me to dust off these old posts and revisit my positions, arguments, and examples.  This is the first of a multi-part series, and is intended as a refresher (mostly for myself) on my past posts on the topic.

    Long-time consumers of my site might remember a four-part series from 2006 entitled “What Does ‘Web Paradigm’ Mean, Anyway?”  In Part 1, I described how the web browser has been struggling – and mostly failing – to replicate the desktop user’s experience.  That’s not what the Web was designed for.

    Notice that web applications have always striven to behave more like desktop applications. Since the very beginning, any web application of any complexity yearned to present a stateful, responsive, user-driven application flow. Sessions, cookies, and Javascript were all created with this in mind. Witness the advent of Ajax as the latest effort in this campaign to make the web more like the desktop. It’s the next logical step in a path that began with the <form> element all those years ago.

    In Part 2, I took the path of the web browser as the canonical application platform to its logical conclusion – complete reinvention of the operating system – and discussed the folly of re-implementing decades of established engineering.

    In the short-term, it allows for a quick path to closer web integration. In the long-term, however, it leads you down a slippery slope, demanding the question, “How far do you take it?” An IRC plugin may as well connect to AIM, Y!, and MSN. It’s only a short step from a peer-to-peer application to an Apache extension. After that, why not integrate file-level security and access control into your browser? With all of these different plugins, though, we can’t allow one malicious or buggy extension to monopolize the browser’s resources; so we’ll need to write a fair preemptive scheduler and memory manager.

    In Part 3, I charted a path that allows us to combine the Web’s strengths with the Desktop’s strengths.

    There must be things we can do to more effectively use the web in our daily lives than Firefox and Internet Explorer, right? We’ve taken pretty good advantage of the hyperlink, but can we finally take full advantage of the common languages and ubiquitous protocols, the final two things the web offers to us?

    Finally, in Part 4, I gave some examples of how that might work, and what kind of platform and tools we need to make it happen.

    We need a platform, independent of any particular application, that makes obtaining, parsing, transforming, aggregating, and forwarding XML as natural as displaying a window. Web integration needs to be a first-class citizen.

    Projects like XULRunner are very strong steps in that direction. Microsoft is getting awfully close with .NET 2.0, and maybe we’ll see the first real platform in this space with Vista and WinFX. Finally, like Firefox, the ideas behind Smart Clients and Java Web Start are a good intermediate step, but they are not the future.

    So, four years later, how are we doing?  We’ve made some bold steps in the right direction, but we have yet to fully harness the potential of the web in our daily experience.  In my next post, I’ll talk about the limitations of the status quo, and the model we should strive to emulate.

  • Living 64-bit: Search Filters for Windows

    Posted on November 19th, 2009 Brian No comments

    One of the greatest features in Windows Vista that carries forward to Windows 7 is the Windows Search-In-The-Start-Menu.  Just hit the Windows key and start typing, and voila! you are instantly graced with search results.  Suddenly desktop search is useful!

    Unfortunately, the utility of the search is greatly limited by whether or not an appropriate filter exists for a particular file type.  Windows ships with filters for various barebones formats, such as text files and web pages, as well as Microsoft Office documents (of course).  Though filters for some formats can be found on the web, normally it is the job of the installer to properly configure filters to handle the application’s file types.

    And herein lies the problem.

    You see, when you’re running a 64-bit OS, most application programs you have are actually running in 32-bit mode.  Why?  Well, from an end-user’s perspective of the application, there is usually no difference between 32-bit mode and 64-bit mode.  There are virtually no performance differences, no look-and-feel differences, and no functional differences.

    But from an application vendor’s perspective, 64-bit support requires often-drastic API changes, as well as compiling, testing, and releasing a 64-bit version.  It’s a lot of work to support something that your customer probably won’t even notice, and that’s not to mention having to explain to a confused grandmother that she downloaded the 64-bit version for her 32-bit machine and could she please try again.  So for most application vendors, 64-bit is something only done when absolutely necessary, and thus most applications get released in 32-bit versions only.

    So back to search filters:  One of the gotchas of 64-bit is that you cannot load 32-bit libraries into a 64-bit process, and on a 64-bit machine, the Windows Indexing Engine is a 64-bit process.  Thus most 32-bit applications will be unable to provide working search filters on 64-bit Windows unless they go out of their way to ship 64-bit versions.  OpenOffice currently suffers from this problem, as does Adobe’s PDF Reader.
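    If you want to check whether anything is registered for a given extension at all, the persistent-handler key is where the indexer starts looking. Here is a small, Windows-only Python sketch; the registry layout follows the standard IFilter registration scheme, but verify the deeper CLSID chain on your own system before relying on it.

    ```python
    import winreg

    def has_search_filter(extension: str) -> bool:
        """Return True if a persistent handler -- the hook the Windows indexer uses
        to locate an IFilter -- is registered for the given file extension.
        (Assumes the standard HKCR\\.ext\\PersistentHandler registration.)"""
        try:
            clsid = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT,
                                      rf"{extension}\PersistentHandler")
            return bool(clsid)
        except FileNotFoundError:
            return False

    print(has_search_filter(".txt"))   # typically True: a plain-text filter ships with Windows
    print(has_search_filter(".xyz"))   # almost certainly False
    ```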

    Fortunately, it has been recognized as a problem, and applications are fixing it.  OpenOffice is supposed to have it fixed in version 3.2, and Adobe offers a free 64-bit version of their PDF filter.  In the meantime, you can often find good free filters on IFilter.org, and both free and paid ones on IFilterShop.com.