Power Tools for the Web

[limbo@hellonline]$ cat /var/log/httpd/access_log | grep technorati | cut -d " " -f4,7,11 | sort | cut -f2,3 -d " "

The above line extracts page and referrer information from my access logs sorted by request time. It really isn’t very impressive by itself; I’m simply using it as an example of how useful small and simple tools can be when used together. Wouldn’t it be wonderful if we could do the same thing on the Web? Here’s a quick example:
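If the cut incantation looks like line noise: in Apache’s combined log format, splitting on spaces, field 4 is the request timestamp, field 7 is the requested page, and field 11 is the referrer. A quick illustration with a made-up log line:

[limbo@hellonline]$ echo '127.0.0.1 - - [10/Oct/2005:13:55:36 -0700] "GET /blog/ HTTP/1.1" 200 2326 "http://technorati.com/search/hellonline" "Mozilla/4.08"' | cut -d " " -f4,7,11
[10/Oct/2005:13:55:36 /blog/ "http://technorati.com/search/hellonline"

sort then orders those lines by the leading timestamp, and the second cut throws the timestamp away, leaving just page and referrer.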

http://smallweb.cs.usfca.edu/dispatch.fcgi/rssmangler/?act=fromical&src=...
http://suda.co.uk/projects/X2V/get-vcal.php?uri=...
http://laughingsquid.com/squidlist/calendar/

This horribly complex URL (broken into three lines and un-urlencoded) uses Brian Suda’s hCal2iCal application to extract iCal information from Laughing Squid’s hCal-enabled events page. The resulting iCal file is then sent to RSS Mangler, where it’s converted to RSS. Why is this good? At the very least, it lets me read about those events in Bloglines. But why stop at one feed? Pull in as many feeds as you’d like, plug ’em into an Events directory, and there ya go! How about automatically creating a feed from an OPML list of URLs? Does that sound like a good way to keep up with everyone on your blogroll? Automated stalking just doesn’t get simpler than that.
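For the terminally curious, here’s the same chain written out longhand as a shell sketch. The endpoints and parameter names (act, src, uri) come straight from the URL above; the rest is illustrative. curl’s -G/--data-urlencode combination does the escaping that makes nesting one URL inside another bearable:

# hCalendar events page -> X2V (hCal to iCal) -> RSS Mangler (iCal to RSS)
CAL="http://laughingsquid.com/squidlist/calendar/"
ICS="http://suda.co.uk/projects/X2V/get-vcal.php?uri=$CAL"
# -G turns the --data-urlencode pairs into a percent-encoded query string,
# so the whole X2V URL survives intact as the value of src=
curl -sG "http://smallweb.cs.usfca.edu/dispatch.fcgi/rssmangler/" \
     --data-urlencode "act=fromical" \
     --data-urlencode "src=$ICS"

(Strictly speaking, the uri= value inside the X2V URL should be encoded too; it just happens not to contain anything troublesome here.)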

The “glue” that lets you do that on Unix is pipes; on the Web, things are more complex. XML is a good start, and XSLT helps quite a lot, especially with some XPath-based filtering. The problem with these two is that they’re much too complex to just use on the fly. I think a set of small power tools like the RSS Mangler can evolve into something very useful. I don’t have a clear plan yet; I’m thinking smart but not too smart, general but not abstract, using RSS as the glue and microformats for the actual data.
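To make the “too complex” complaint concrete, here’s roughly what even a trivial XPath filter over a feed costs in XSLT. Everything below is a made-up example, not part of any of the tools above: it keeps only the items whose title mentions “technorati”, assuming xsltproc and a local feed.xml.

cat > filter.xsl <<'EOF'
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity transform: copy everything through untouched... -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- ...except items whose title lacks the keyword: drop them -->
  <xsl:template match="item[not(contains(title, 'technorati'))]"/>
</xsl:stylesheet>
EOF
xsltproc filter.xsl feed.xml

Correct and reusable, but exactly the sort of thing nobody wants to write just to skim a feed; that’s the gap the small power tools are meant to fill.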

Filed under: Projects, The Net
