Yeah, TechCrunch got the "big sites" wrong, as well as the importance of being an issuing party rather than just a relying party. Being a relying party is useless to FB's 200+ million users.
Unfortunately I'm unable to share the code, as I sold the intellectual property.
However, I can share that it used Hpricot and had a number of XPath rules for each discrete element of both the general feed and the "items" that needed to be extracted (title, link, description, time, etc.). The rules for each element lived in an array, so rules for Atom and RSS could be mixed; the first rule to match dictated the format. This is a pretty quick and dirty (but ever so effective) way of doing it - parsing feeds as XML in the "technically correct" way is an absolute nightmare given how much invalid XML is out there ;-)
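To make that concrete: here's a minimal sketch of the rules-in-an-array idea - not the original code (which I can't share), just an illustration with hypothetical XPath rules and method names:

    require 'rubygems'
    require 'hpricot'

    # Illustrative only: candidate XPaths per element, RSS 2.0 first, then Atom.
    # The first rule that matches wins and implicitly tells you the format.
    TITLE_RULES = ['//channel/title', '//feed/title']
    ITEM_RULES  = ['//item', '//entry']

    # Try each rule in order; return the text of the first node that matches.
    def first_match(doc, rules)
      rules.each do |xpath|
        node = doc.at(xpath)
        return node.inner_text.strip if node
      end
      nil
    end

    doc = Hpricot.XML(File.read('feed.xml'))
    puts first_match(doc, TITLE_RULES)  # feed title, whatever the format

    # Same trick for the item list: whichever rule family matches first
    # also tells you whether you're looking at RSS or Atom.
    items = ITEM_RULES.map { |xpath| doc.search(xpath) }.find { |set| !set.empty? }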
All that said, one thing worth looking at is http://rfeedparser.rubyforge.org/ - it's based on the Python Universal Feed Parser, which is generally considered the most awesome feed parser out there :)
Either force-install Hpricot 0.6 (your usual stuff will still use the newer one; rfeedparser will use 0.6), or go tinker with rfeedparser to remove the lock on that version (which, of course, removes any guarantee of it working 100%).
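If you take the first route, here's a minimal sketch of the pinning - the `gem` activation call is RubyGems' own API, but the rest is just illustrative:

    # First, install the old version alongside the new one:
    #   gem install hpricot --version 0.6
    require 'rubygems'

    # Activating 0.6 here means *this process* uses the old Hpricot that
    # rfeedparser is locked to; your other scripts still get the newer gem.
    gem 'hpricot', '= 0.6'
    require 'hpricot'
    require 'rfeedparser'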
BTW, today is an awesome day to write a new feed parser. Here's why:
I always use Facebook Connect (via the Facebooker plugin) over OpenID for authentication. The adoption rate and viral advantages of Facebook are unmatched.
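For anyone curious what that looks like in a Rails app, here's a rough sketch using Facebooker's helpers (fb_connect_javascript_tag, init_fb_connect, fb_login_button); take the exact wiring as illustrative rather than gospel:

    # app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      # Facebooker's filter populates facebook_session from the Connect cookies
      before_filter :set_facebook_session
      helper_method :facebook_session
    end

    # In a view (ERB), render the Connect login button:
    #   <%= fb_connect_javascript_tag %>
    #   <%= init_fb_connect "XFBML" %>
    #   <%= fb_login_button %>

    # Then gate actions on a live Facebook session:
    class DashboardController < ApplicationController
      def show
        redirect_to root_url unless facebook_session
        @fb_user = facebook_session.user  # Facebooker::User for the visitor
      end
    end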