2010-12-09

Sam Roweis Symposium

Today was the Sam Roweis Symposium at NIPS. I spoke, along with four of Roweis's other close collaborators. I learned a lot, especially about how LLE and related methods work. It was a great session. One thing it all reminded me of is that the NIPS crowd is far more statistically and inferentially sophisticated than even the most sophisticated astronomers. It really is a different world.

In the morning before the Roweis symposium, two talks of note were by Martin Banks (Berkeley) and Josh Tenenbaum (MIT). Banks talked about the perceptual basis for photographic rules and concepts. The most impressive part, from my point of view, was his explanation of the tilt-shift effect: if you limit the depth of field in an image, the objects being photographed appear tiny. The effect is quantitatively similar to binocular parallax, in the sense that the governing equation is identical. In binocular parallax you measure distances relative to the separation of your eyes; in depth-of-field you measure distances relative to the size of your entrance pupil!
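To make the analogy concrete (the notation here is mine, not Banks's): the angular binocular disparity of a point at distance d relative to the fixation distance d0, and the angular blur of that same point when the camera is focused at d0, are both governed by the same difference of inverse distances, just scaled by a different baseline,

\[ \delta_{\mathrm{stereo}} \approx b \left| \frac{1}{d} - \frac{1}{d_0} \right| , \qquad \theta_{\mathrm{blur}} \approx a \left| \frac{1}{d} - \frac{1}{d_0} \right| , \]

where b is the interocular separation and a is the entrance-pupil diameter. Swapping a large b for a small a makes the scene read as if it were viewed from very close, which is why the shallow depth of field makes everything look miniature.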

Tenenbaum talked about very general models, in which even the rules of the model are up for inference. He showed beautiful demos in which he can get computers to closely mimic human behavior (on very artificial tasks). But his main point was that the highly structured models of the mind, including language, may themselves be learned; that is, learning might not just be fitting the parameters of a fixed grammar. He gave good evidence that it is possible that everything is learned, and noted that if this program is to be pursued, it needs to become possible to assign probabilities (or likelihoods) to computer programs. Some work already exists in this area.
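One toy way to assign probabilities to programs is a probabilistic grammar: generate expressions from a simple stochastic grammar, and the probability of any expression is the product of the production probabilities used to build it. The sketch below is entirely my own illustrative construction (the grammar, probabilities, and function names are not from Tenenbaum's talk); it only shows that "probability of a program" can be made well defined.

import math
import random

# Toy grammar: an expression is a variable, a single-digit constant,
# or a binary operation applied to two sub-expressions.
P_VAR, P_CONST, P_OP = 0.4, 0.3, 0.3
OPS = ["+", "-", "*"]

def sample_expr():
    """Draw a random expression tree from the grammar (the prior).
    Branching is subcritical (2 * P_OP < 1), so recursion terminates
    with probability one."""
    r = random.random()
    if r < P_VAR:
        return "x"
    if r < P_VAR + P_CONST:
        return str(random.randint(0, 9))
    op = random.choice(OPS)
    return (op, sample_expr(), sample_expr())

def log_prior(expr):
    """Log-probability of an expression tree under the same grammar."""
    if expr == "x":
        return math.log(P_VAR)
    if isinstance(expr, str):  # a single-digit constant
        return math.log(P_CONST) + math.log(0.1)
    op, left, right = expr
    return (math.log(P_OP) + math.log(1.0 / len(OPS))
            + log_prior(left) + log_prior(right))

if __name__ == "__main__":
    e = sample_expr()
    print(e, log_prior(e))

Simpler (shorter) programs get higher prior probability, which is the usual Occam-style pressure one wants when inferring the rules themselves and not just their parameters.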
