
Leung: Aggregators Pushing Performance?

Ted Leung notes that he interacts with the Web much more through his aggregator than his browser, and he subscribes to a lot (1000?) of feeds. Combine that with aggregator-side analysis, and he sees an opportunity to productively burn future CPU cycles.

I have to respectfully disagree. When we're talking machines in the GHz range, and I think 1 GHz is the low end of desktops these days, with half a gigabyte of memory, they should be able to analyze 4,000 RSS items a day easily. This is purely a gut feeling, but I'm betting aggregators are actually network bound, not CPU bound. There are relatively mature, optimized packages out there for the techniques Leung's thinking about. Just as an example, see how Steven Johnson uses DevonThink, which must have some similar machinery inside. Granted, Johnson's research material might not be growing at the same rate as a complete archive of Leung's blogroll, but this kind of analysis has been pounded on for a long time. It's really only at Google/Amazon/Yahoo!/MS scale problems, trying to do it for millions of users in real time, that things get hairy. Besides, Leung is probably an outlier, although I realize making things better for prosumers typically propagates to all users.
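That gut feeling is easy enough to sanity-check. Here's a rough back-of-the-envelope sketch in Python, nothing from Leung's actual setup: it does crude word-frequency counting over 4,000 synthetic feed items as a stand-in for the kind of analysis he's imagining. The item size (~2 KB) and the tokenization are my assumptions.

```python
# Back-of-envelope check: term-frequency analysis over 4,000 synthetic
# "feed items". Item count matches the figure above; the ~2 KB item
# size is an assumption.
import random
import string
import time
from collections import Counter

def make_item(words=300):
    """Generate a fake feed item of roughly 2 KB of word-like tokens."""
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 8)))
        for _ in range(words)
    )

items = [make_item() for _ in range(4000)]

start = time.perf_counter()
counts = Counter()
for item in items:
    # Crude whitespace tokenization stands in for real text analysis.
    counts.update(item.split())
elapsed = time.perf_counter() - start

print(f"Analyzed {len(items)} items in {elapsed:.2f}s")
```

Even on modest hardware that loop finishes in a fraction of a second, while actually fetching 4,000 items over the network takes far longer, which is the point: the bottleneck is the pipe, not the processor.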

Now, more cycles for rendering pretty pictures derived from that analysis, visualizations, charts, and graphs, especially interactive ones, might be a welcome driver for future CPU sales.
