About a week ago I got a note from Jury Gerasimov, a developer on Surfpack. Gerasimov was excited by my wistful dreams of an "aggregator as platform". Apparently Surfpack is aiming to fit that bill.
There are two elements that I think are crucial for extensibility, and missing in just about every aggregator. The first is the ability to hook into the fetching/crawling mechanism. Inside every aggregator is a little personal Web crawler. As far as I know, no aggregator makes it easy to monitor and extend what that crawler does.
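To make the crawler-hook idea concrete, here's a minimal sketch of the kind of extension point an aggregator could expose. Everything in it is hypothetical: the `Crawler`, `FetchEvent`, and `on_fetch` names are invented for illustration, not part of Surfpack or any real aggregator's API, and the fetch itself is faked rather than doing real HTTP.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class FetchEvent:
    """What the crawler reports after retrieving one feed URL."""
    url: str
    status: int
    body: str

@dataclass
class Crawler:
    # observers registered here get called after every fetch
    hooks: List[Callable[[FetchEvent], None]] = field(default_factory=list)

    def on_fetch(self, hook: Callable[[FetchEvent], None]) -> None:
        self.hooks.append(hook)

    def fetch(self, url: str) -> FetchEvent:
        # a real crawler would issue a conditional HTTP GET here;
        # we fake a response so the sketch is self-contained
        event = FetchEvent(url=url, status=200, body="<rss>...</rss>")
        for hook in self.hooks:
            hook(event)
        return event

crawler = Crawler()
seen: List[Tuple[str, int]] = []
# a user-supplied hook: log every URL the crawler touches
crawler.on_fetch(lambda e: seen.append((e.url, e.status)))
crawler.fetch("http://example.org/index.rss")
```

With a hook like this, monitoring (logging, bandwidth accounting) and extension (rewriting URLs, filtering feeds before they hit the store) both fall out of the same mechanism.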
Second, once the crawler has done its work retrieving all of that web content on the user's behalf, and presuming the aggregator persistently stores what it finds, you'd like a nice interface for interrogating that data. A little feed-item-specific query and manipulation language would be my ideal, but then again I'm a geek. Along with an extensible user interface, this is one of the key aspects of "aggregator as platform".
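What might interrogating that item store look like? Here's a hedged sketch: the item fields, the sample data, and the `query` helper are all assumptions made up for illustration, not any aggregator's actual schema or API.

```python
from datetime import date

# a toy item store; a real aggregator would persist far richer records
items = [
    {"feed": "Feed A", "title": "First post",  "date": date(2003, 3, 28)},
    {"feed": "Feed B", "title": "Second post", "date": date(2003, 4, 2)},
    {"feed": "Feed B", "title": "Third post",  "date": date(2003, 4, 9)},
]

def query(items, **criteria):
    """Return items whose fields satisfy every predicate in criteria."""
    return [
        item for item in items
        if all(pred(item[f]) for f, pred in criteria.items())
    ]

# "items from Feed B published in April or later"
recent = query(
    items,
    feed=lambda f: f == "Feed B",
    date=lambda d: d >= date(2003, 4, 1),
)
titles = [item["title"] for item in recent]
```

Even a tiny combinator-style interface like this one already expresses the obvious questions (by feed, by date, by keyword); a real query language for feed items would presumably add full-text search and joins across feeds.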
Again, making an analogy to Emacs: text navigation, querying, region marking, and so on are fabulously well supported in Emacs Lisp. What would be the appropriate language mechanisms for working with a database of webfeed items?