I’m back on the command line…look out…

So, I am still poking away at various ways to interface between bash and various web components. This stuff is still shaking out of my Twitter to CLI to LCD project. I have been trying some of the usual suspects (wget, Python URL libraries, etc.) in an effort to find the one best suited to sniffing and parsing HTML. I have found that writing to serial is the most foolproof means of mashing data around quickly, so that is my inspiration.
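For reference, here is the kind of quick-and-dirty scraping I mean. A minimal sketch, assuming the page actually has a title tag to grab; curl pulls the page and grep fishes the title out:
swantron@Dell15:~$ curl -s swantron.com | grep -o '<title>[^<]*</title>'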
Anyhow, I was horsing around with curl on the command line. It turns out that spoofing user agents is pretty simple.
Take this two-line one-liner, for instance:
swantron@Dell15:~$ x=10
swantron@Dell15:~$ for (( y=1; y<=x; y++)) ; do curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" swantron.com ; done
In a nutshell, this snags this site ten times, with a user-agent string claiming the machine is running Windows 2000 (NT 5.0) and Internet Explorer 5.01. The success shows up in my server statistics.
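If you only care whether the requests succeeded, a variation on the same loop can print the HTTP status code instead of dumping the whole page. A minimal sketch using curl's -s, -o, and -w flags:
swantron@Dell15:~$ for (( y=1; y<=10; y++ )) ; do curl -s -o /dev/null -w "%{http_code}\n" --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" swantron.com ; done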
Not too bad. I am considering using this as a testing tool for my site. Messing around with PHP and CSS…I can put together a quick regression using user-agent strings from legacy operating systems and browsers to make certain those setups can still snag my content. Granted, the example setup may seem like gross overkill, but as those stats indicate, I do see quite a few requests from ancient machines. Makes you wonder…
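Something like the following is what I have in mind for that regression. A rough sketch; the user-agent strings in the array are just example stand-ins for whatever legacy setups you want to cover:
#!/bin/bash
# Quick user-agent regression sketch -- loops over a handful of
# spoofed agents and reports the HTTP status code for each.
agents=(
  "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)"
  "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
  "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/125.2 (KHTML, like Gecko) Safari/125.8"
)
for agent in "${agents[@]}" ; do
  # -s silences progress, -o /dev/null discards the body,
  # -w prints just the status code
  code=$(curl -s -o /dev/null -w "%{http_code}" --user-agent "$agent" swantron.com)
  echo "$code  $agent"
done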
Anyhow, this layout is in need of some serious testing. If anyone is interested in a copy, drop me a line. Cheers.