
Python: 2.5 New Features

Figures. I've had my copy of Python Essential Reference (3rd edition) for a week, and it's immediately outdated as Python 2.5 goes to alpha. Sigh.

The new features in Python 2.5 look really interesting though. As I tell my scripting languages class, while enterprise neanderthals and programming semantics snobs may give these languages short shrift, there's actually a fair amount of language design that goes into their evolution. Perl might look all gnarled and mangy, but it looks that way on purpose! Ha, ha. Only serious.

Anyhoo, at a glance the coolest additions are the inclusion of the etree module, for XML processing, and sqlite3, a lightweight, embeddable RDBMS. If the Python junta is going that route, they should just include pycurl to replace the somewhat archaic urllib2. Core Python, pycurl, etree, sqlite3, and bsddb start to make a nice platform for webfeed aggregation straight out of the box, although a reimagining of Pilgrim's Universal Feed Parser in ElementTree would be nice.
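
Just for flavor, a minimal sketch of that out-of-the-box pipeline, assuming the 2.5 stdlib names (xml.etree.ElementTree and sqlite3) and an RSS 2.0-ish feed.xml sitting on disk:

    import sqlite3
    from xml.etree import ElementTree

    # Parse the feed with the newly blessed ElementTree...
    feed = ElementTree.parse('feed.xml')

    # ...and stash the entries in the newly blessed embedded RDBMS.
    conn = sqlite3.connect('entries.db')
    conn.execute('create table if not exists entries (title text, link text)')
    for item in feed.findall('.//item'):
        conn.execute('insert into entries values (?, ?)',
                     (item.findtext('title'), item.findtext('link')))
    conn.commit()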


NMH: SOS Sphere?

After all that fanfare, does anyone know if Sphere is still alive? Any plans to actually move out of closed beta? Zombied and we just don't know it? What's the scoop?

Enquiring minds want to know!


NMH: A Precis Engine

Thinking out loud. I'd like an engine that I could give a URL and that would return a precis of web documents related to the URL. Sort of like how link:url works on Google and other search engines, except with more smarts. Technorati is a step in the right direction for blog posts, at least when it actually works. But it doesn't tell you much more than what links to the post and some notion of related links.

This is hard for the entire web of URLs but would be really useful in some restricted domains. For example, what if you had an engine that simply looked for content related to every piece of legislation in the US House and Senate. Then I just plug in a URL for a particular bill (the engine would have an easily understood way to construct such a URL, e.g. http://example.com/congress/109/house/1234) and get back a page that summarizes related bills, information about the key politicians involved, analysis from government agencies and non-profit institutions, relevant news stories from across the country, and highly relevant content found on the web, including weblogs and other discussion.

Actually, the URLs don't even have to map to documents; they could map to relatively obvious concepts, stocks for example.

This precis engine concept relies on focusing the Web to a small, finite set of documents, easily mappable to URLs, and only crawling and searching for information about those documents. The technology is currently well within our reach and reminds me to take a look at what's been happening in the focused crawler community.

Of course, I've sort of reinvented Topix.net, but instead of straight news about a topic, the precis engine would tend towards a Wikipedia page's content, despite using alternative methods to build the encyclopedic page.


Barr: S3 Activity

Despite my attestation that S3 isn't a game changer, it looks like there's going to be some interesting activity around Amazon's cheap, network storage service. Jeff Barr has a roundup of some recent developments.

Apropos of Elle Driver: "You know I've always liked that word... attestation ... so rarely get to use it in a sentence."


NMH: Nickelodeon Programming

You know you still retain a bit of your PLDI roots when you see the following title in your aggregator:

Nickelodeon Continues Multiplatform Programming Experiments; Commits To Multi-million Dollar Development Slate
and think "Why the heck does Nickelodeon have a programming languages research group?"

;-/


Topix.net: Forums and Citizen's Journalism

Full disclosure, this post was mainly prompted by an e-mail from Rich Skrenta, Topix.net's CEO, although I had his posts bookmarked for further review.

There have been a couple of interesting effects around Topix's introduction of forums alongside their automated news gathering tools. First, they ditched forum registration, and participation went up along with quality. Second, the forum for Caruthersville, MO became a coordination spot for citizens after a series of deadly storms and a tornado ripped through the small town.

I'm not sure I'd call the forum citizen's journalism as Rich does, but I agree with the tail of his post, where a challenge of the modern news organization is to "enrich and enable communities of readers with voices of their own." Topix is demonstrating effective engineering of discussion forums in conjunction with news. As Vin Crosbie points out, the cesspool forum is a tired old story that still crops up (cf. the Washington Post) in the news industry. Any new paradigms to deal with the problem should be warmly greeted.

I wonder if Topix could use some of its blog mining techniques on forums to surface and highlight really useful forum posts. I'll leave unanswered how to measure utility, but at that point I'd really start to call the forums citizen's journalism.


NMH: Stupid Date Meme Study

I refuse to link to that stupid "counting date" meme that's running around the blogosphere, or even provide an example because the concept is so content free. Seriously, a moment in time, when formatted according to some arbitrary human standard, looks like an ordered string of numbers. AND?!

The mind boggles.

But! Some enterprising blog tracking company should do a study of where the dang thing emerged and how it spread, and present the results as some form of social network analysis. The tricky bit is trying to estimate a causal event that motivated a particular blogger to post the meme, but solving such problems is half the fun. I mean, where are those HP, Blogpulse, Technorati, and memetracking folks when you really need 'em!

Ha, ha. Only serious.


NewsGator: REST API

NewsGator is a company with a collection of webfeed aggregators. NewsGator's web based aggregator provides a Web API, principally for synchronizing reader status across the family of products. Note that the products, including NetNewsWire and FeedDemon, are actually pretty good, but the API, being SOAP based, wasn't particularly friendly, at least to these eyes.

Or at least it wasn't until now. The NewsGator API has been recast into REST (PDF). NewsGator online becomes one of the few (only?) web based aggregators with a lightweight Web API that supports modifying the shared state at a fine granularity. (Please!! Correct me if I'm wrong. Acknowledging UserLand based aggregators that support XML-RPC.)
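
The appeal is easiest to see in code. A sketch of what lightweight buys you, with a deliberately made-up endpoint and plain Basic auth standing in for whatever the real API specifies:

    import base64, urllib2

    # Hypothetical resource URL; the real paths live in the NewsGator REST docs.
    req = urllib2.Request('http://example.com/api/subscriptions')
    req.add_header('Authorization', 'Basic ' + base64.b64encode('user:password'))

    # One GET, no SOAP envelope, no WSDL toolchain.
    print urllib2.urlopen(req).read()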

The nut is that NewsGator becomes another interesting aggregation platform, like endo, where background processing stuff can usefully interrogate and drive really good clients. We're creeping closer to putting some hackability into the DailyMe.

[Via Kevin Burton]


ZoomClouds: Outsourced Tagcloud

Link parkin': ZoomClouds is an outsourced tag cloud service, derived from webfeeds + the Y! content analysis service. Apparently there's an API to boot.

[Via Jeff Barr]

P.S. I'm not dead yet. Light posting is due to not seeing much really interesting going on in the tech world and a lot of work locally.


NMH: Memetracker Transparency

With the addition of OPML seeding in aggregators like TailRank and Megite, memetrackers seem to be slowly drifting towards my vision of focused crawlers in aggregators. I like the direction things are going, but have an issue with transparency in these systems.

Maybe it's just the geek in me, but I'd like to have some way to investigate how such systems decide on what to serve up on my behalf. As it is, I can't tell whether they're any better than random. For example, in addition to a hidden algorithm, memeorandum doesn't even reveal its sources. Megite has generated a bunch of pages for me, presumably customized, but I have a hard time distinguishing anything different from what I would have seen in my aggregator.

To put it simply, how do we know these systems aren't just big old Wizard of Oz experiments: "Pay no attention to the man behind the curtain." I admit this is something of a power user/prosumer issue, but this camp can serve to generate confidence amongst the unquestioning masses.

And of course the next thing I'll be asking for is some knobs on the things so I can tweak them, but you can't get greedy.


Amazon: S3

Just a few thoughts on Amazon's new web service S3, which is cheap, reliable, plentiful Internet based storage.

S3 is not a game changer.

To me it seems appropriate for boutique applications or proofs of concept, but you wouldn't bet the ranch on it. I did a back of the envelope calculation on what 1 terabyte of data (a nice round size and the point at which this really matters) would cost on a monthly basis. Your first month, storage and transfer is roughly $360. Every month thereafter you're paying about $150. After about 5 months, you've forked over enough for one of LaCie's rackmount, network attached terabytes, which costs $899. That 5 months assumes you never transfer any of those stored bytes back out, so it's a lower bound on your cost. I know it's not quite apples to apples, since you'd really want to do RAID and possibly some clustering, but the gist is still the same: commercial, off the shelf, network attached storage is competitive with S3.
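
For the skeptical, the arithmetic, assuming the posted prices of $0.15/GB-month for storage and $0.20/GB for transfer in, and calling a terabyte 1024 GB:

    GB = 1024                  # 1 terabyte
    STORAGE = 0.15 * GB        # ~$154/month just to hold it
    UPLOAD = 0.20 * GB         # ~$205, one time, to transfer it in

    months, cost = 0, UPLOAD
    while cost < 899.00:       # LaCie's rackmount terabyte
        cost += STORAGE
        months += 1
    print months, round(cost, 2)   # 5 972.8: S3 passes the LaCie in month 5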

The upside of the LaCie is that you can park it as close as you want to your computation. If you're using S3 for anything more sophisticated than a dumping ground, and/or not planning to peer close to Amazon's routers, expect to do some serious cache engineering to deal with the Internet latency and congestion between you and S3. The LaCie also has standard network filesystem interfaces. To access your storage on S3 you've got REST and SOAP.

The upside of S3 is that they guarantee 99.99% availability and you don't have to deal with backups. And I am in no way pooh-poohing those cost savings.

So what you're paying for on an ongoing basis is the removal of a certain amount of engineering hassle, replaced with hassles of a different flavor. I don't think you can build the whizzy, large scale, interactive web applications that folks are fantasizing about on top of S3 without a lot of smarts. At which point you might as well build your own cheap knockoff of S3, and own your own wheel.

But it would be a gas to build a tuple space frontend to S3 and run with it.

By the by, I wonder if Alexa Web Search has generated anything interesting.


Pilgrim: Encoding Detector

Total geek post forthcoming. I've been doing a lot of webfeed crawling and parsing development recently. Due to the heinous abuse of character encoding declarations out there, I had to rip off some encoding detection Python code from Mark Pilgrim's Universal Feed Parser. (Don't worry, it's legal.) While grinding my teeth, I was wishing there was a way to just hand off a string to some function and have it guess what the encoding was.

Oops. Problem solved.
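
Presumably that's Pilgrim's Universal Encoding Detector, the detection code spun out as a standalone chardet library. If so, usage is about as simple as I was wishing for:

    import chardet  # Mark Pilgrim's Universal Encoding Detector

    raw = open('mystery_feed.xml', 'rb').read()
    guess = chardet.detect(raw)
    # guess looks like {'encoding': 'windows-1252', 'confidence': 0.84}
    text = raw.decode(guess['encoding'])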


NMH: Good Software Experiences

After my travails with hardware falling over last week, I've actually had a pretty good week with software. Herewith, three short anecdotes.

One bugaboo of desktop Linux back around the turn of the century was dealing with printers. That old UNIX chestnut, lpr, just didn't play well with modern printers. Today I ventured to use my network printer with my Ubuntu desktop to print out some technical papers. Suffice it to say, the printer control panel pleasantly surprised me with a two step process for configuration. In no time, I was printing out PDFs with one click. Duplex even!

Through an automated script I've been meticulously recording information regarding Flickr's interestingness for the past five weeks. The data is stored in a Berkeley DB and has grown into a 1.6 gigabyte file. Accessing the data was dog slow though. After a careful reading of the documentation, creating a new db tuned to the properties of the data, and converting the data over, I've got a db that's 1/3 the size and fast as all get out. This took the better part of an hour or two but was well worth it. Most of the time was spent redesigning the db schema and waiting for the conversion to complete.
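
Not my actual conversion script, but a sketch of the general move using bsddb's lower-level db interface: pick an access method suited to the data, crank up the cache, and bulk copy the old records across:

    from bsddb import db

    # New db: btree access method, and a much bigger cache than the default.
    new = db.DB()
    new.set_cachesize(0, 64 * 1024 * 1024)
    new.open('flickr-tuned.db', dbtype=db.DB_BTREE, flags=db.DB_CREATE)

    # Cursor through the old, untuned db and copy everything over.
    old = db.DB()
    old.open('flickr.db', dbtype=db.DB_HASH, flags=db.DB_RDONLY)
    cur = old.cursor()
    rec = cur.first()
    while rec:
        new.put(*rec)
        rec = cur.next()
    old.close()
    new.close()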

I have a webfeed crawler, mostly written in Python, that records all content that it retrieves. There's a fair amount of redundancy between most fetches. I wanted a way to calculate deltas between versions and simply store the deltas. After digging around on the web for various algorithms to do this, and dreading implementing any of them, turns out Python has a nice module, difflib, that fits the bill. The SequenceMatcher class is really handy. Talk about batteries included!!
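
The trick, roughly, assuming you keep each feed's previous fetch around: SequenceMatcher's opcodes say exactly which spans changed, and those spans are all you need to store.

    import difflib

    def delta(old, new):
        # Record only the non-equal spans between two fetches.
        sm = difflib.SequenceMatcher(None, old, new)
        return [(tag, i1, i2, new[j1:j2])
                for tag, i1, i2, j1, j2 in sm.get_opcodes()
                if tag != 'equal']

    def apply_delta(old, ops):
        # Rebuild the new version from the old one plus the stored delta.
        out, pos = [], 0
        for tag, i1, i2, chunk in ops:
            out.append(old[pos:i1])      # copy the unchanged run
            if tag in ('replace', 'insert'):
                out.append(chunk)        # splice in the new material
            pos = i2                     # skip whatever was replaced/deleted
        out.append(old[pos:])
        return ''.join(out)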


kula: endo

I don't know if Adriaan Tijsseling is the brains behind kula, but it's a good bet since kula seems to be the home of ecto and 1001.

In any event endo looks like a worthy competitor to NetNewsWire, which is the best of the RSS aggregators I've used. I'm really interested in the extensibility features. A clean, full-featured plug-in architecture combined with good AppleScript support means a wide latitude for experimentation. And the feed management tools look like a nice departure from the current "three pane/river of news" dichotomy.

endo's probably worth shelling out $18.00, at least if you're on a Mac.


Uckan: MonitorThis

Link parkin': Alp Uckan's MonitorThis takes a set of search terms and generates URLs for 30+ search engines. The results come back in the machine readable OPML format to munge as you please.
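
The generation side is simple enough to sketch with ElementTree; the engine list and URL templates here are invented, and MonitorThis's actual output surely differs in detail:

    from xml.etree import ElementTree as ET

    def search_opml(term, engines):
        # engines: name -> URL template with %s where the query goes
        root = ET.Element('opml', version='1.0')
        body = ET.SubElement(root, 'body')
        for name, template in engines.items():
            ET.SubElement(body, 'outline', text=name, type='link',
                          url=template % term)
        return ET.tostring(root)

    print search_opml('tagclouds', {
        'Technorati': 'http://technorati.com/search/%s',
        'del.icio.us': 'http://del.icio.us/tag/%s',
    })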

Apologies for not getting the accent correct in Alp's last name.

Via Marshall Kirkpatrick at The Social Software Blog


Webb: playsh

playsh seems very interesting according to Matt Webb's description of the MOO-like environment. Quote:

playsh is a MOO-like text environment that runs on your local computer. The basic object types and verbs are based on LambdaMOO. It's organised geographically, so you can walk north and south and so on. You have a player, so you can take and drop items. You can create new things and dig to new rooms, and there are verbs attached to all of these. There are ssh interfaces so you can connect to playsh from other computers, and other folks can connect to your playsh instance from their own.

playsh is implemented in Python and the game's interactive language is prototype based, à la Self and Io, with a couple of neat twists: per-player perspectives on objects, and web resources cleanly incorporated. The only downside, from my perspective, is the big inhale of other toolkits. The thought of installing Twisted always gives me the willies.
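
A toy model of those two twists, prototype delegation plus per-player views; nothing like playsh's actual implementation, just the idea:

    class Proto(object):
        # Prototype-style object: lookups fall back to a parent object.
        def __init__(self, parent=None):
            self.parent = parent
            self.slots = {}

        def get(self, name, player=None):
            # A per-player override wins over the shared slot...
            if (player, name) in self.slots:
                return self.slots[(player, name)]
            if (None, name) in self.slots:
                return self.slots[(None, name)]
            # ...and anything unresolved delegates up the prototype chain.
            if self.parent is not None:
                return self.parent.get(name, player)
            raise AttributeError(name)

        def set(self, name, value, player=None):
            self.slots[(player, name)] = value

    lamp = Proto()
    lamp.set('description', 'a dusty brass lamp')
    lamp.set('description', 'a lamp, glowing faintly', player='mw')
    print lamp.get('description')               # the shared view
    print lamp.get('description', player='mw')  # mw's perspective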


Bricklin: When The Long Tail Wags The Dog

Digesting Dan Bricklin's "When The Long Tail Wags The Dog" takes a long time, but is well worth it. Bricklin makes an argument for generalized tools being the big winners in hyperabundant, hyperdifferentiated economies, commonly referred to as Long Tail markets. If you don't track this stuff, read that as "a market with a ton of niche products, each of high value only to a small audience." The club DJ 12" vinyl dance record market might be an example. In any event, Bricklin's argument rests on general tools providing a wider array of options, including any unanticipated capabilities that you might want.

Extrapolating Bricklin's argument to webfeed aggregators, an extremely flexible aggregator that exhibited one or two compelling uses out of the box could be a big winner. So all the meme-tracking tools might be barking up the wrong tree in the long run. Then again, designing for extensibility in aggregators is still in its infancy, and designing for extensibility in general is hard.


Cal Students: Victoria Hack

My Cal brethren pulled a good one on a USC basketball player. Nothing like using IM to invent a fictitious babe named Victoria, chatting up the opposition, Gabe Pruitt, and rattling Pruitt mid-game to good effect.

Of course others might suggest the prank was a little on, if not past, the edge, but I put it in the Robin Ficker category of deviantly inspired mischief. Ah, the miracles of the modern Internet.


Buterbaugh & Dennis: matchr

Ryan Buterbaugh, a CS undergrad here at NU working on a combined BS/MS, has brought to life a couple of half-baked ideas I've had about leveraging some of the usable exhaust emanating from Flickr.

matchr is a Flickr puzzle game. The short story is we take a set of eight Flickr tags, randomly select photos labeled with those tags (two per tag), construct a 4 by 4 grid of thumbnails, and ask users to figure out which photos have the same tag. The puzzles are surprisingly difficult, especially depending on which tags are selected. Really generic tags have an amazingly broad range of attached visual imagery.
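
A hypothetical reconstruction of the puzzle construction from the rules as described, with the Flickr photo fetching stubbed out as a plain dict:

    import random

    def build_puzzle(tag_photos):
        # tag_photos: each of 8 tags mapped to candidate photo URLs.
        # Returns a shuffled 4x4 grid of (tag, photo) cells; players
        # must find the two cells sharing each tag.
        cells = []
        for tag, photos in tag_photos.items():
            for photo in random.sample(photos, 2):  # two photos per tag
                cells.append((tag, photo))
        random.shuffle(cells)
        return [cells[row * 4:(row + 1) * 4] for row in range(4)]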

What's the point from a research angle? Well, matchr allows users to form their own groups, pick their own tag sets, and essentially create new media artifacts from Flickr photos. Puzzles are scored based upon number of match guesses and time to completion. We're interested in understanding how such game construction and playing would fit into a system like Flickr, as an additional means to motivate and encourage participation. There are a few in-system Flickr games and social activities, like deleteme, Squared Circle, and "Day in the Life", but Flickr wasn't really designed to support social activities where folks can build new constructs. matchr is a small prototype to see what happens when such activities are more explicitly supported.

matchr is a bit more involved than other games like fastr, but we hope it's also a bit more engaging over the long term. That said, we know there's more work to do, and are open to constructive criticism. If you're a long term NMH reader and want to do us a favor, a link would be great.


NMH: E-mail & Web apps

Is it just me, or should a Web application that sends e-mail be ready to receive it? Blackboard was the first application to irritate me with this bug. It's bad enough I have to suffer through a horrid Web interface, and edit messages in a text area, but then people can't respond to the message!

I thought it might just be Blackboard, but Basecamp has the same dysfunction. Comment notifications can be e-mailed to you, but you can't e-mail back. If you'd like to respond, from within your nice comfy e-mail app, for which you have a ton of muscle memory, you have to click on a link, wait for your browser to come up, maybe log in, find the response area, and then edit your response in one of the world's worst editors (again, the text area).

Makes me want to participate in the conversation!!

This is a plea then, for someone to come up with a good Web app idiom where you can e-mail to the conversation, sort of like, gasp!, a mailing list. Besides, it might actually lead to more active and efficient conversations.


Ferrier: Ajax + REST

In his article Using REST with Ajax, Nic Ferrier highlights an interesting property of the XmlHttpRequest object used in browser side JavaScript. The object can use all of the HTTP verbs, which is impossible to do from straight HTML. So in some sense Ajax makes REST implementations of Web services easier and vice versa. If you need to use PUT and DELETE as part of your REST protocol, then a client side UI can take advantage of all the standard idioms.
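
Outside the browser, that full-verb freedom has been around forever; a sketch against a hypothetical resource, in Python since that's the lingua franca around here:

    import httplib

    conn = httplib.HTTPConnection('example.com')

    # PUT and DELETE: verbs an HTML form can't utter, but XmlHttpRequest
    # (and any decent HTTP client library) can.
    conn.request('PUT', '/notes/42', '<note>remember the milk</note>',
                 {'Content-Type': 'application/xml'})
    resp = conn.getresponse()
    print resp.status
    resp.read()   # drain the response before reusing the connection

    conn.request('DELETE', '/notes/42')
    print conn.getresponse().status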

Learn something new every day.


Steward: ListMixer

ListMixer is a meta-social bookmark service. You post bookmarks to it and can then easily forward them on to other services like del.icio.us (clean UI, very popular) or Furl (caches bookmarked pages). Another feature of ListMixer is timed existence for bookmarks. If you don't revisit a link within a set amount of time, it gets deleted.
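
The timed existence bit is easy to picture. A sketch with an assumed two-week window, since I don't know ListMixer's actual setting:

    import time

    TTL = 14 * 24 * 3600   # assumed: revisit within two weeks or lose it

    def sweep(bookmarks, now=None):
        # bookmarks: url -> last-visited timestamp; stale entries get dropped.
        now = now or time.time()
        return dict((url, seen) for url, seen in bookmarks.items()
                    if now - seen < TTL)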

A nice combination of some simple, but potentially quite effective, ideas courtesy of Sid Steward.

Via Marshall Kirkpatrick at The Social Software Blog


Chen: Megite

For whatever reason, Megite's Matthew Chen solicited me as a beta tester for the personalized news site. What the heck? So now I have a personalized news page based upon my Bloglines subscriptions. We'll see how it goes, although I have to admit, I went through the same process with TailRank and haven't revisited that site since registration. I think I'll have to subscribe to both of my personalized feeds in Bloglines.


NMH: Ubuntu Impressions

Me and computers haven't been getting along very well recently. My laptop croaked, my grad student's laptop croaked, my desktop motherboard keeled over, the Windows XP install on the desktop hard drive only boots on old hardware, and a power supply on an important research machine bought the farm. Sigh!

In any event, I rescued the Windows XP drive, and plugged it into an older unused machine. Meanwhile, having had enough of XP for a while, I installed Ubuntu Linux on the old machine's disk. Wow! Desktop Linux has come a long way since I made my last pass at living in front of UNIX about 5 years ago. Note that I use a lot of UNIX machines, just haven't sat in front of one on a daily basis for a long time.

The installation was smooth as silk and in under an hour I had a nice graphical desktop that recognized all of my hardware. OpenOffice was installed by default, so I could deal with the inevitable MS Office attachments. Even after I added a cannibalized graphics card and some memory post-install, the system was straightforwardly updated. There were even a couple of easy to install packages that supported accessing my iPod with no fuss.

Obviously, the true test will be continued use, so call me back in a month to see how things are going. And I still believe MacOS X is the best solution for us UNIX heads who don't want to think hard about a GUI and professional office applications. However, Ubuntu, and I have to believe a number of other Linux distributions, are well past the bleeding edge stage now. This is especially important for disadvantaged or underfunded groups who might be stretching their computing budget by buying or receiving two generation old equipment.


Carden & Schmidt: processing hacks

Somehow I get the feeling that there's this odd alternative universe of fun, artistic programming and hacking built around the processing environment. Tom Carden and Karsten Schmidt's processing hacks wiki is just further confirmation.


Lamantia: NextGen TagClouds

Now that tagging is all the rage, tag clouds, weighted lists of tags, are becoming a more frequently encountered user interface device. For a weblog visualization project I'm working on, I pondered for half a second using tag clouds liberally, but didn't have time to think through the details.

Fortunately, Joe Lamantia did have the time. In a two part series, he delves into the semantic underpinnings of tag clouds, and future directions for the design and application of tag clouds. There's a little too much background material for me, since I'm steeped in the stuff, but the thinking is definitely going down an interesting avenue. The main point is that variations of tag clouds will start to exhibit more interface elements to understand the context in which the cloud was generated, or even interactively control the cloud contents. I especially like consideration of incorporating temporal dynamics into tag clouds, à la Oliver Steele's Expialidocio.us.
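
For reference, the baseline device Lamantia starts from is just a weighted list; one common way to compute the weights, log-scaled so the megatags don't drown out everything else:

    import math

    def cloud_sizes(counts, min_pt=10, max_pt=32):
        # Map raw tag counts to font sizes for a tag cloud.
        logs = [math.log(c) for c in counts.values()]
        lo, hi = min(logs), max(logs)
        span = (hi - lo) or 1.0
        return dict((tag, int(min_pt + (max_pt - min_pt) * (math.log(c) - lo) / span))
                    for tag, c in counts.items())

    print cloud_sizes({'python': 120, 'ajax': 40, 'tagging': 7})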

Updated to correct the outbound links.


Sandler: FeedTree + Coral == Calcium

I've been lax in tracking what's going on with FeedTree, the Rice research project looking at P2P mechanisms for distributing webfeed content. A quick glance at the FeedTree weblog indicates healthy activity, including Calcium, a Python program which gateways FeedTree content into the Coral CDN (content distribution network, think Akamai). Projects like FeedTree can help alleviate bandwidth issues when a weblog publisher gets Slashdotted or otherwise stampeded.


Metcalfe: coComment Forking

Ben Metcalfe brings up something I hadn't thought of in relation to coComment, the weblog comment tracking service. coComment forks conversations, meaning it constructs a possibly alternative version of the comment stream. For example, on a site a comment might be moderated, edited, or even deleted by the site owner, but since comment "registration" with coComment is the purview of the comment author, coComment may still attribute the comment to the host site. Also, and I did pick up on this when I first heard about coComment, it only reflects the comments of those users registered with coComment and conscientious enough to use it regularly.

This arguably makes conversations on weblogs worse rather than better. I know I'm driving up VHS (blogs) vs Betamax (Usenet) Lane, but this is yet another example of reinventing Usenet, except worse. The only hope I really saw was in the comments to Metcalfe's posts, where someone proposed the use of a microformat standard to mark comments. Then a weblog search engine could intelligently aggregate the comments without circumventing the site owner's prerogative.

And just as an aside, where are all those folks who were indignant about not letting blog authors opt out of Google's AutoLink, Microsoft's Smart Tags, or Third Voice? It's not apples to apples, since coComment doesn't purport to display content alongside the original site (yet it's an easy toolbar button away), but you can still argue that this takes away some of the author's control over a blog's content.

It all boils down to who owns the comment: the author or the hoster.

Of course, coComment is probably going on to wild success given my ability to predict the future.


Steele: reAnimator

reAnimator is a Web based tool for exploring regular expressions. Oliver Steele built it out of Python and OpenLaszlo.

As Trinity said in Matrix Revolutions, with an appropriate amount of detached cool, "That's a neat trick."

Did ya miss me?


Cannasse: haXe

As a follow-up project to MTASC, the Motion-Twin ActionScript Compiler, Nicolas Cannasse has been working on haXe, a language that can be compiled to Flash, DHTML, or mod_neko, a VM embedded in an Apache plugin.

The mind boggles. A high-level, JavaScript-like language that compiles to virtual machines embedded within a browser or webserver. What? No Java virtual machine? Given haXe's heritage, it can't be that hard to implement translation of a simplified version of Python in a similar fashion. Don't know if it would be worth the trouble though.


Willison: Summit Recaps

Simon Willison is contributing to and collecting text summaries of speakers at The Future of Web Apps Summit, taking place across the pond in London, UK. Some notable speakers, from my perspective, already recorded: Joshua Schachter, Tom Coates, and Cal Henderson.


NMH: On coComment

I'm not much of a blog comment fan, viewing blog comments as a distant shadow of Usenet, which despite its faults worked amazingly well. So take the following with a grain of salt.

coComment is a comment tracking tool and community which purports to gather all the comments of its users across the blogosphere into one place. For each user, it gives a nice little AJAXy display of all the comments they've made. For a blog, you get a similar presentation of all comments from coComment participants.

I can't figure out whether to file this under solipsism or narcissism.

However, the developers have set themselves up with an interesting Google style problem. I may scoff at the utility, but I have to imagine "organizing the world's comments" is fairly difficult.

You know I've always liked that word... "solipsism" ..., so rarely have an opportunity to use it in a sentence.


MS: Feeds API

Embedded in Microsoft's next OS release, Vista, is a Feeds API. Looks like a centralized combination of feed retrieval tools and subscription information storage.

Upsides: less work for application developers, probably can be easily scripted, Outlook will sync with the subscription store, wide deployment if not adoption

Downsides: information leakage across applications, pre-determined data model with no apparent extension capabilities

Still, you might be able to build the Emacs of aggregators on top of this some day.


UCB SIMS: InfoSys 141

Webcast archives of the UCB SIMS course InfoSys 141? Good!!

Archive in evil Real format? Bad!

But the suffering just might be worth it to hear from Peter Norvig, Susan Dumais, Hal Varian, and Sergey Brin, amongst others, in discussion led by Marti Hearst.


Crosbie: Identity 2.0

I haven't been closely following many of the efforts to build new identity mechanisms into the web. I leave that to other folks and plan on riding their coattails when The Right Thing comes along.

In a quite entertaining parable, Vin Crosbie lays out why one should be keeping an eye on Dick Hardt, Sxip, and Identity 2.0.


MS: Live Labs Academic Grants

Microsoft recently announced that they're forming a couple of new lablets, Search and Live, the second under the direction of Gary Flake. Both labs are intended to better connect MS's research and business units focused on search and internet applications with the academic research community. They're even giving out some decent chunks of change, in the form of grants.

Here's one suggestion: pay someone to port the MSN Search SDK into a form usable by the open source scripting languages: Python, Perl, PHP, Ruby. As it is, you're stuck with a ridiculous inhale of MS dev tools if you just want to kick the tires and see how good or bad the dang thing is. Not good for adoption.

By the by, Live Labs looks like it has a nice collection of interesting researchers in place, including Susan Dumais, Eric Horvitz, and Doug Terry. Then again, the full roster gives off a whiff of one of those university centers/programs where there are 40 affiliated faculty, 10 participate at all, and 5 do all the work.


Hawk: 30 Boxes Review

30 Boxes has, or will have, the blogorati in a tizzy shortly. The web app is a calendaring tool that apparently will employ many of your favorite Meme Whose Name Shall Not Be Uttered features, slickly executed: tags, AJAX, social mechanisms. Pseudonymous tech chronicler Thomas Hawk has a canonical overview of 30 Boxes in its current pre-release state.

One interesting point of 30 Boxes is the mechanisms it uses to implement social boundaries. From my read of the review, access control is implemented using a combination of buddy lists and tags. I've been thinking that buddy lists are the only widely adopted and accessible tool for end user access control and should be applied in just about every piece of social software. The combination with tags, though, is just brilliant. The tags make it relatively easy to say which elements of your personal calendar others can access. In fact, the tags support a flexible grouping mechanism that should allow users to settle on the right groupings for their usage. No tyranny of one-size-fits-all granularity as in UNIX's owner/group/other model. We'll see how the combination holds up when it meets the real world.
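
A hypothetical model of the buddies-plus-tags scheme, invented to make the idea concrete rather than lifted from 30 Boxes:

    def can_view(event_tags, grants, buddy):
        # grants: buddy -> set of tags that buddy may see.
        # An event is visible if any of its tags has been granted.
        return bool(set(event_tags) & grants.get(buddy, set()))

    grants = {'mom': set(['family']),
              'coworker': set(['work', 'travel'])}
    print can_view(['family', 'birthday'], grants, 'mom')       # True
    print can_view(['family', 'birthday'], grants, 'coworker')  # False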

Also, as folks start to realize that tags make a great collecting and labeling tool, we'll see more applications besides the holy manna of bottom up categorization attributed to folksonomies.

You know I've always liked that word... "pseudonymous" ..., so rarely have an opportunity to use it in a sentence.


Graham: Free On Lisp

Paul Graham's On Lisp is a classic on the power of the Lisp programming language, and more importantly, how you can mold the language to a specific problem domain. The text fell out of print, but he recovered the copyright and is making On Lisp freely available in electronic form.

[Via Have Browser, Will Travel]


Carden: diu flow

When Bloglines has its act together, I like the links that come across from Tom Carden's del.icio.us feed. They mostly focus on interactive graphics, information visualization, computational aesthetics, and processing.


van Wingerde: virtual journal club

virtual journal clubs are a damn good idea:

Every Friday I pick a paper from the ACM Digital Library that is found by the search term +connected +2005 +"mobile device" +"user interface", and write a brief discussion of it. Why? Because it makes me actually read them.

Every first-year graduate student should be forced to do this on a specific topic, plus add a bookmark to a departmental social bookmarking service, applying some appropriate tags. Senior grad students should be assigned to review the discussions on a rotating basis.

[Via 106 Miles to Chicago (White Sox are still WS Champs!!)]
