Posted on February 4th, 2014
Just south of San Francisco, shortly after you merge onto Interstate 280 heading down to the South Bay, there is this massive red stain in the center lane of the freeway. At night, flashing by so quickly under the headlights before disappearing underneath your car at seventy-five miles per hour, it looks for all the world like an eighteen-wheeler tractor trailer decimated a full-grown deer mere moments before.
You instinctually look around when faced with that much carnage; you can’t help it: Where is the cab of the truck? Is the driver okay? The deer certainly isn’t, but … Why is there no body? Where are the guts?
It wasn’t until the third or fourth time I made the drive that I finally realized it was just paint. But it still freaks me out four years later.
Posted on January 16th, 2014
Leave aside the privacy concerns for a moment. The issues there are real enough: The non-answers in their privacy statement leave enough room to build a data center in, and both the words and the actions of the leadership at Google can leave no doubt that they intend to push the boundary of what is creepy in pursuit of more, better data about you. They need it to target you with ads.
But that isn’t enough for disappointment. Paranoia and cynicism? Sure. But not disappointment. So why did the Internet react so strongly at the news?
Nest is in the thermostat and smoke detector business. They make money in one way: Selling you thermostats and smoke detectors. The economic incentives drive them to make better thermostats and smoke detectors. The best damn smoke detectors and thermostats we have ever seen, in fact. So good that we were willing to pay a 10x premium for what is normally a $15 forgotten-about white box. One could only imagine all the cool things that were coming – and Nest had every reason to keep making them better: To sell us more thermostats and smoke detectors.
Google is in the advertising business. They make money in one way: Selling advertisements to be shown to you. The economic incentives drive everything in their business to that end. Everything else is secondary, and must at least provide some benefit to their primary business. Android is about controlling a mobile platform so they can mine it for data and show you ads. Gmail is about mining your email for context and showing you ads. Web search is about understanding you from your web activity and showing you ads based on it. And Google has every reason to find new ways to learn more about you: To sell more advertisements.
And there’s the rub: All of us could see the potential of Nest to make our lives better in real, concrete, visceral ways; and they had the economic incentives to make it happen. Under Google, the economic incentives have changed, and with them the goal. Instead of making great products, the goal is now to push as close to the Creepy Line as possible. It won’t start out that way, but it will end up that way. You can’t fight economics.
The products will continue to improve, of course. A secondary goal must be to make them good enough to compete in the market, but it will always be secondary. It’s easy to see how much potential has been lost, and that is ever so disappointing.
“The best minds of my generation are thinking about how to make people click ads. That sucks.” – Jeff Hammerbacher
Posted on December 19th, 2013
The world seems to be going crazy this morning with news that some 40 million credit cards were stolen from Target’s databases. My Twitter feed is aflame, it’s all over my news sites, and I’ve even received email from family members warning me to get all my cards reissued. I’m not going to, though. I only use credit cards, so it’s not my problem.
If my number just happens to be one of the 40 million stolen, and if a thief happens to actually try and use it, and if the credit card companies’ powerful automated anti-fraud systems don’t happen to notice that I couldn’t possibly be buying a new television from Amazon in Kazakhstan while also paying for dinner at Chez Billy, then in that unlikely scenario my total projected costs will be a grand total of $0, plus a minor inconvenience while my card gets re-issued.
How can this be?
In the United States, the law limits consumer liability for credit card fraud to the first $50, so no matter what happens, I’m only out $50. As a bonus, credit card companies nearly always waive that $50 charge, as they would rather not piss me off and lose me as a customer over another person’s crime.
But if you use debit cards, the story is different. The consumer protection laws are far more lax, with a potential maximum liability of up to $500 – 10x more! And even though most cases of fraud will probably eventually be resolved, in the meantime, the missing money is your money. The bank can take up to 10 days to investigate the fraud, and during that time your checking account will sit empty. When you use a credit card, you’re buying things with somebody else’s money. While they investigate the fraud, you’re not out a nickel.
The moral of the story is: Use credit cards instead of debit cards. Or use cash. But either way, cut up your debit cards.
[You can find more information on the legal difference between credit cards and debit cards at US PIRG.]
Posted on August 23rd, 2013
[I wrote this post several months ago, shortly after the divorce was finalized. I have held off publishing it until all the final details were completed. Now they are.]
“How long were you married?” was always the first question. “Nine years,” I would reply. It was only a slight exaggeration. Our anniversary was only a few months away, and the mandatory waiting period meant we wouldn’t be divorced for six months, at least. “That’s a long time. You must have gotten married young.”
It didn’t feel like it at the time. We were both out of college – she had a master’s degree already! – when I proposed. We had career paths and car payments, student loans and credit cards, rent checks and insurance premiums. We felt like adults, though still struggling to learn who we really were and where we really fit and what was really important. Several of our other friends had already said their vows a year or more before, and we loved each other. She agreed, and the ceremony was a year later. We were twenty-three.
The years seem to have rushed by now, looking back. Our career paths detoured, and we sold our cars. A new city meant new friends gained, and old friends grown distant. The credit cards and loan payments were still there, but we exchanged the rent check for a mortgage payment, and put down roots. There were no children – a fact for which I am ever so grateful now – and as we discovered more of who we were, our love was changed and redefined, but was still a constant to me.
When I uncovered her affair, my world view shattered. I left her. I told our mutual friends what was happening, moved out of the house, and spent the next four-and-a-half months hurting. I posted to a secret, anonymous blog, dumping to it anything that came into my head: I wrote angry epitaphs, aimed at her. I wrote of the unexpected sadness I felt at the family we would never now have. I transcribed the dreams from which I awoke crying at three o’clock in the morning.
And then, without warning, I got over her. I moved on.
“That’s too fast,” people would say. “Ten years is a long time. You can’t be over her that quickly. Give yourself time to heal.” Know thyself, said somebody or another. And I guess I do. I had set myself up to succeed the moment I decided the marriage was over. With the help of my friends and family, I had made it through to the other side. For the first time in a decade, I was ready to live only for myself. I could do anything I wanted, and I did. I lost forty-five pounds. I went on trips and to concerts. I ate goat brains and learned to love seafood. I started dating.
I threw a big party this past New Year’s Eve. As I mingled with friends new and old, I found myself telling them that 2012 had been a really good year for me. Sure, a terrible thing had happened, but I came through it happier and healthier and more fulfilled than ever before. Loss and pain have acted like a lens, focusing me on what I want and what is important to me. My future is clearer to me now than ever before.
Posted on December 11th, 2012
Everything old is new again, and that includes Chromatic Coffee in Santa Clara, California. Longtime visitors to the shop, formerly Barefoot Coffee, were surprised by the sudden change in name and signage. But concerns that the hangout would lose its charm or passion for great joe were quickly allayed: The staff tells me that the owner simply decided to open his own roastery to go with the shop – which only means even more interesting micro-roasts and single-origin coffees to experience! And the warm interior, friendly and professional staff, funky playlist, and ever-changing gallery of local art all remain, as welcoming as ever.
Chromatic Coffee runs two primary coffee stations. The first is a three-head La Marzocco rotary espresso machine, providing shots for the usual selection of lattes, macchiatos, cappuccinos, and doppios. Two time-modded grinders provide the day’s pair of espresso selections. The baristas are well-trained and know their product. When I arrived shortly after opening one morning, the barista Christine was still dialing in the machine to provide the optimum extraction for that day’s beans: grind-pull-sip-toss-adjust, grind-pull-sip-toss-adjust, grind-pull-sip-toss-adjust. Her extra few minutes of effort paid off in the end, though, with an excellent cafe latte: The Emperor blend she used held an excellent sweet chocolate, but with enough brightness to kick over the perfectly micro-foamed milk. And all topped with a lovely bit of art that it seems we’ve all come to take for granted. The Guatemala Eucaliptos in the second grinder provided me with a doppio that landed on my tongue with a mild start, but quickly evolved into a swath of fruit and earth (the packaging describes it as “cherry cola”).
The second of Chromatic’s stations is a four-funnel manual pour-over stand, used to highlight the company’s selection of single-origin beans and artisan roasting. Each order is individually ground, filtered, and poured by hand into a single cup – a meticulous process, but one that brings out the flavor characteristics of each coffee. If you’re looking to experience the aroma and flavor of a particular region of coffee, this is how you want to do it. At the recommendation of Kyle, I sampled the El Cerro from El Salvador, and was thoroughly surprised by the different ends of the flavor spectrum it yielded: a tart fruitiness on the top with a deep base of chocolate underlying.
Chromatic Coffee’s ample seating and medium-to-large tables make it a great place to meet friends for a chat, or gather with larger groups. Coupled with the free wifi, they also make Chromatic an excellent spot for a laptop-bug to work while staying caffeinated. But please, no mooching! And a typical assortment of pastries is delivered fresh each morning, if you need more than just caffeine to function.
Chromatic Coffee is located at 5237 Stevens Creek Blvd. in Santa Clara, California. Follow @CHROMATICCOFFEE on Twitter, @chromaticcoffee on Instagram, and find them on Facebook. For more photos, be sure to check out my Chromatic Coffee set on Flickr.
Laptop Friendly: Yes
Posted on December 3rd, 2012
During the question and answer section of the panel I recently spoke on at DCWeek 2012, one questioner asked the panel to describe an API that had “disappointed” us at some point. I replied: Twitter. Though he was angling for technical reasons – poor design, bad documentation, or insufficient abstraction – I had different reasons in mind.
Twitter’s Successful API
Twitter’s primary API is without a doubt a hallmark of good design. It is singularly focused on one particular task: broadcasting small messages from one account to all following accounts as close to real-time as possible. That extreme focus led to simplicity, and that simplicity meant it is easy for developers to code and test their applications. Interactions with the API’s abstraction are straight-forward and often self-documenting. When coupled with Twitter’s excellent documentation and sense of community, the early years meant that developers were free to explore and experiment, leading to a plethora of interesting – and sometimes terrible – Twitter clients (including my own Java IRC bot JackBot).
Coincidentally, the explosion of smart phones, social networking, and always-on Internet connectivity meant Twitter’s raison d’être was also a means to explosive growth. The Fail Whale was an all-too-familiar sight during those early growing pains, but the same focus and simplicity that made it an easy API for developers to use also made it possible for Twitter to dramatically improve the implementation. Today, Twitter serves over 300 million messages daily – up several orders of magnitude from when I joined – yet our favorite marine mammal surfaces rarely.
Twitter’s early business model is a familiar story. A cool idea formed the basis of a company, funded by venture-capital and outside investment. There was little thought given to how to turn a profit. Seeing themselves in competition with the already-huge Facebook, growing the user-base was the only real concern. For many years, Twitter continued to foster its community: In a symbiotic relationship with developers and users – who were often the same – Twitter expanded and modified the API, improved the implementation, and actively encouraged new developers to explore new and different ways of interacting with the Twitter systems. So important was this relationship that things like the term “tweet”, the concept of the re-tweet, and even Twitter’s trademarked blue bird logo all originated with third-parties.
But the good times can’t roll forever; eventually the investors want a return, and the company began seeking a method to make money. Seeing itself as a social network, advertising was the obvious choice. But there was a problem: the company’s own policy and openness had made advertising difficult to implement. Here’s what I wrote in December 2009:
More than 70% of users on Twitter post from third-party applications that aren’t controlled by Twitter. Some of those applications are other services – sites like TwitterFeed that syndicate information pulled from other places on the web (this blog, included). Others are robots like JackBot, my Java IRC bot which tweets the topics of conversation for a channel I frequent.
Advertisers purchase users’ attention, and if you can’t guarantee that access, you can’t sell ads. But what third-party client is going to show ads on behalf of Twitter? Users – particularly the developers creating those third-party apps – don’t want to see ads if they can avoid it. You won’t make much money selling ads to only 30% of your users (who are also likely the least savvy 30%). What’s a little blue birdie to do?
The chosen path was to limit – and perhaps eliminate entirely – third-party clients. The recent 100,000 limit on client tokens is an obvious technological step, and they are already completely cutting off access for some developers. Additionally, where technological restrictions are difficult, changes to the terms of service have placed legal restrictions on how clients may interact with the API, display tweets, and even in how they may interact with other similar services. (Twitter clients are not allowed to “intermingle” messages from Twitter’s API with messages from other services.) It seems likely that the screws will continue to tighten.
A Way Forward: Get On The Bus
Twitter has built the first ubiquitous, Internet-wide, scalable pub-sub messaging bus. Today that bus primarily carries human-language messages from person to person, but there are no technical limitations preventing its broader use. The system could be enhanced and expanded to provide additional features – security, reliability, bursty-ness, quantity of messages, quantity of followers, to name just a few – and then Twitter could charge companies for access to those features. Industrial control and just-in-time manufacturing, stock quotes and financial data, and broadcast and media conglomerates would all benefit from a general-purpose, simple message exchange API.
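Twitter never built such a service, so any code here is purely hypothetical – the class and method names below are my own invention. But the core publish/subscribe model is small enough to sketch in a few lines, as a toy, in-memory illustration:

```python
from collections import defaultdict

class MessageBus:
    """Toy in-memory sketch of a general-purpose pub-sub bus.

    A real service would put security, reliability, and burst
    handling behind this same tiny interface.
    """

    def __init__(self):
        # topic -> set of subscriber callbacks
        self._followers = defaultdict(set)

    def follow(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._followers[topic].add(callback)

    def publish(self, topic, message):
        """Fan the message out to every follower of the topic."""
        for callback in self._followers[topic]:
            callback(message)
```

A stock ticker, a factory floor, or a newsroom could sit on either end of follow() and publish() without caring who is on the other side – which is exactly the generality the person-to-person service already hints at.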
Such a generalized service would be far more useful to the world at large than just another mechanism for shoving ads in my face, and I would bet that the potential profits from becoming the de facto worldwide messaging bus would dwarf even the wildest projections for ad revenues. It wouldn’t be easy: highly available, super-scalable systems are fraught with difficulty – just ask Amazon – but Twitter is closer to it than anyone else, and their lead and mindshare would give them a huge network-effect advantage in the marketplace.
With this new model replacing the advertising money, third-party clients would no longer be an existential threat. Twitter could remove the pillow from the face of their ecosystem and breathe new life back into their slowly-suffocating community.
Will they take this path? I doubt it. The company’s actions in the past several months clearly telegraph their intentions. Twitter’s API teaches us an important lesson: no matter how well designed, documented, and supported a platform is, there will always be people behind it making business decisions. Those decisions can affect the usability of the API just as deeply as bad design, and often much more suddenly. Caveat programmer!
Posted on November 29th, 2012
Little pockets of downtime pepper our lives: waiting for the bus, waiting at a crosswalk, waiting for that one person in the group to come back from the bathroom again. We make smalltalk, we look at our watches, we check our phones. These moments flit away like motes of dust passing through a sunbeam, a few seconds at a time.
Win, Lose, Banana is a game for these moments. A typical game lasts less than ten seconds, turning that awkward silence where everyone would normally be feigning sudden extreme interest in the patterns on the tin ceiling into an awkward argument over which player has the banana.
Win, Lose, Banana is a so-called convincing game for three players. It consists of merely three cards: Win, Lose, and Banana. Each player randomly chooses a card, and the player with the Win card simply shows it and announces victory. Congratulations! The winner is then entitled to the banana.
Ah, but who has it? The two remaining players must then convince the winner that they have the banana – and that the other player is the loser. If the winner chooses correctly, both she and the banana may smugly gloat over the loser. But if the loser manages to be more convincing, he must mercilessly mock the other two players. The only real rule is that you may not simply show the winner your card; everything else is fair game. The lengths to which players can go to be convincing are otherwise bounded only by decorum and your imagination. And perhaps the length of time it takes that one friend to pee.
It’s hard to describe how much fun I’ve had with this game. Passing moments on the street with friends would suddenly turn rowdy as we argued over possession of the banana. And at only $1, it’s a no-brainer that you ought to buy it. In fact, buy a dozen and give them away to friends. This may be the single best value in gaming – ever – and quite possibly the best $1 you’ve ever spent.
Posted on November 10th, 2012
I had the opportunity to speak on a panel at DCWeek 2012 this past week: “Five Crucial APIs to Know About”. (I am not listed on the speakers page, as I was a rather last-minute addition.) Conversation ranged from what goes into making a good API – dogfooding, documentation, focus – to pitfalls to be aware of when building your business on an external API. It was a fun and informative discussion, and I walked away with plenty to chew on.
An API is all about two things: Abstraction and Interaction. It takes something messy, abstracts away some of the details, and then you, as a programmer, interact with that abstraction. That interaction causes the underlying code to do something (hopefully making your life easier). If you interact with it differently, you’ll get different results. Understanding an API, then, requires understanding both the abstraction as well as how you are meant to interact with it.
Now, DCWeek focuses primarily on the startup scene. As such, I expected that most of my fellow panelists would be focusing on web-exposed APIs. Sure enough, there was plenty of talk on Facebook, Twilio, Twitter, and a laundry list of other HTTP-accessible APIs. All of which are great! Note, though, that these APIs share one common thing: They are all network-reliant APIs. As such, they are built on a whole bunch of other APIs, but at the end of the day, they all route through one specific API (or a clone): Berkeley Sockets.
Why should you care about a 30-year-old API when you care about tweets and friends and phone calls? Stop for a moment and think about what those high-level APIs are built on: a network. Worse – the Internet. A series of tubes. Leaky, lossy, variable-bandwidth tubes. And it’s only getting worse – sometimes you’re on a high-bandwidth wifi connection; other times you’re on a crappy, intermittent cellular connection in a subway tunnel.
The user’s experience with a high-level network API is going to be directly impacted by socket options chosen several layers down – often just by default – but different experiences require different expectations from the network. Do you have a low-latency API that provides immediate user-interactive feedback in super-short bursts? Then you might want to learn about Nagle’s Algorithm and TCP_NODELAY. Does your app require a user to sit and stare at a throbber while you make a network call? You might want to consider adjusting your connection, send, and receive timeouts to provide more prompt feedback when the network fails.
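Those knobs sit a few layers below the HTTP library, but they are easy to reach. Here is a minimal Python sketch – no real endpoint involved, and the values are illustrative, not recommendations – showing the two adjustments just mentioned:

```python
import socket

# Create a TCP socket for short, latency-sensitive request/response bursts.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Fail fast instead of leaving the user staring at a throbber:
# this timeout applies to subsequent connect(), send(), and recv() calls.
sock.settimeout(5.0)

# Disable Nagle's Algorithm so small writes are sent immediately,
# rather than being coalesced while the stack waits for an ACK.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

Whether these particular values are right depends entirely on your app; the point is that the defaults were tuned for bulk transfer, not interactive feedback.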
And believe me: the network will fail. But how do you handle it? As programmers, we tend to focus on the so-called “happy path”, relegating failure handling to second-class status. Unfortunately, treating failure as unlikely is simply not acceptable in a world of ubiquitous networking and web services. Not all network failures are the same, and providing the best user experience requires understanding the difference between the various types of failures in the specific context of what you were attempting to accomplish.
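One way to make that distinction concrete is to map each kind of exception to a category the UI can act on. A minimal sketch – the category names are my own, not from any library:

```python
import socket

def classify_failure(host, port, timeout=3.0):
    """Try a TCP connection and report which kind of failure occurred."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok"
    except socket.gaierror:
        return "dns-failure"      # name didn't resolve: connectivity or config
    except socket.timeout:
        return "timeout"          # slow or unreachable network: a retry may help
    except ConnectionRefusedError:
        return "refused"          # host is up, but nothing is listening there
    except OSError:
        return "other"            # reset, unreachable network, and so on
```

A timeout might warrant an automatic retry with backoff; a refused connection or a DNS failure usually deserves an immediate, honest error message instead of a spinner.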
So take a moment and do some research. If you’re using a networked API that exposes network details, learn about them and tweak them for the specific task at hand. If you’re writing an API, consider how users will be accessing it, and provide them guidance on how to achieve the best possible experience over the network. The people using your apps will thank you.
I’d like to thank my fellow panelists – Greg Cypes, Sasha Laundy, and Evgeny Popov – for such an interesting discussion, as well as to thank our moderator Matt Hunckler for keeping us on track. I hope we can do it again in the future.
Posted on March 20th, 2012
I don’t have anything to say publicly about Mike Daisey’s lies. What I will say publicly is that Heather and I saw The Agony and The Ecstasy of Steve Jobs at Woolly Mammoth Theater on April 14, 2011. It was a powerful piece, and – unusually for me – I saved the program. After reading this essay by the former marketing director of the Woolly Mammoth Theatre Company, I have scanned page three and page four and placed them here. I have slightly edited the image of page three in an obvious manner. You may click on the images to view full-size versions.
Posted on February 25th, 2012
I have always had a thing for books. I started reading when I was a kid, and I never stopped. I oscillate regularly between fiction and nonfiction, binging for a while on science fiction and epic fantasy before devouring title after title on politics, economics, science, and philosophy. Packing for every trip included at least one hardcover – preferred over paperbacks for their sturdiness and aesthetics on my bookshelf – and sometimes two or three.
I purchased an iPad in May 2010.
Since then, every single book I have read has been on the iPad. This wasn’t because I had some idealistic desire to switch to eBooks. It just sort of happened, in retrospect I think because they were both cheaper and just easier to get. I have bought and read nearly two dozen books on my iPad in the past year-and-three-quarters, but I planned on purchasing the hardcover for books that I really wanted on my shelf, that I really thought were special – the books I really wanted to read, the way you only can with a hardcover.
Ironically, the first book to pass that bar was Walter Isaacson’s biography Steve Jobs.
I had pre-ordered it, and also received a copy as a gift, so I had two copies sitting on my shelf while I churned through the reading list on my iPad. Finally, a couple of weeks back, I located the large, white cover, pulled it off the shelf, and dove in.
It immediately pissed me off.
Not the content of the book, what Walter Isaacson wrote, but the book itself – the actual physical thing. I had had no problems reading from wood pulp and ink for three decades before the iPad, yet suddenly I found myself constantly annoyed by its characteristics. I was shocked at all the annoying things about a paper book that I had never noticed before.
The paper book is bigger than my iPad, and it’s heavier, too. The book takes up four or five times the room in my backpack. Before, I had sometimes carried around two or three of these things at once! How the hell did I ever do that?
It’s more cumbersome and awkward, too – a lot more. My iPad is evenly balanced and always the same shape, making it a simple matter to re-orient it, hand it off to someone, or just lean it against something for support. The physical construction of the book – there’s a hinge in the middle! – makes it lopsided most of the time, which makes it difficult to re-orient. And don’t you dare try laying it flat or propping it up without something holding down the pages – you’ll lose your place in no time as it unhelpfully flips pages around.
What is a book for, if not reading? And yet, though I had never noticed it before, a book on paper is actively hostile to reading. The pages curve inwards towards the binding, distorting and hiding the words. It isn’t a constant thing, either, because the curvature and distortion become worse as you get closer to the binding. I regularly find myself physically reorienting the book – made more difficult by its aforementioned cumbersome nature – just to read the words on the page! In contrast, the words on my iPad are always flat and never distorted.
The final nail in the coffin is the paltry lack of features in the paper book. There is no built-in dictionary, so I can’t just tap a word and check its definition, nor is it possible to search for all occurrences of a particular word or phrase. As a little bonus, if I have a few moments to spare, I can pull out my iPhone and pick up where I left off; my current spot is synchronized automatically. Though they are not essential, I have become used to such niceties.
Which is not to say that reading on the iPad is perfect. Glare can be an issue, especially if I am outside. A paper book isn’t going to run out of battery any time soon, and it isn’t prone to breaking if I drop it from a height. Nor is it a tempting target for thieves. And as a long-term archival mechanism, or just a pretty thing to show off on a shelf, the paper book wins hands-down.
But for reading, the actual process of reading: I’m a convert. Give me an eBook.