There’s been a lot of hoopla over the robots.txt file published by the official White House web site. I’ve heard a few different theories from different sources, ranging from Big Brother to mere ineptitude on the part of the domain’s sysadmins.
(For those who are unaware, a robots.txt file tells search engines such as Google not to index certain pages on a web site. The whitehouse.gov robots.txt was recently modified to disallow indexing inside of directories named iraq throughout the White House web site.)
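To illustrate the mechanism, a robots.txt file is just a list of path prefixes that well-behaved crawlers agree to skip. The entries in question looked roughly like this (an abbreviated sketch of the pattern, not a verbatim copy of the actual file):

```
# A crawler matching "User-agent: *" (i.e., all of them) should
# skip any URL starting with the paths below.
User-agent: *
Disallow: /iraq
Disallow: /kids/baseball/teeball-20020923/iraq
```

Note that robots.txt is purely advisory: it relies on crawlers voluntarily honoring it, and it does nothing to block a person (or a misbehaving bot) from fetching those URLs directly.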
I would usually chalk up stuff like this to stupidity by the system administrators. After all, if you look at the file, it disallows things such as
/kids/baseball/teeball-20020923/iraq. Perhaps the White House T-Ball game was held in Iraq? Or maybe Uday and Qusay were playing that day. Really, this looks like some kind of botch. However, I have to be honest that the current administration’s sketchy history with the truth makes me wonder about their motives.
Taken on its own, I see no evidence of any sort of plot to hide the past forms of the site from the public. Things such as the Wayback Machine would make such attempts pointless. Tracking who is searching for what on the White House web site seems equally pointless, since obtaining personally identifiable information from an IP address is an annoying and arduous process; and while I do not put the capability to do so beyond the reach of the executive branch, it seems unlikely they would waste their time on something of such limited value. So in the end, we come back to plain old ineptitude. Seems to me like a theme common throughout the entire administration.