2008-11-25

optimization, deconvolved distributions

I started working through Bovy's opus on our technique for estimating the error-deconvolved distribution function that generates a d-dimensional data set. There are lots of algorithms for describing the distribution function that generates your data, but in general each data point comes with its own noise contribution, drawn from a different noise distribution for each point. Bovy's framework and algorithm model the distribution that, when convolved with each data point's noise distribution, maximizes the likelihood of the data. This is the fundamental object of interest in science: the distribution you would have found if your data were perfect.
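To make that object concrete, here is a minimal sketch (mine, not Bovy's code, and not his fitting algorithm) of the likelihood being maximized, under the assumptions that the underlying distribution is modeled as a mixture of Gaussians and that each data point's noise is Gaussian with a known, per-point covariance. Convolving a Gaussian component with Gaussian noise just broadens its covariance, so the marginal likelihood of each noisy point stays analytic. All names here (`xd_log_likelihood` and friends) are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def xd_log_likelihood(X, S, alpha, means, covs):
    """Log-likelihood of noisy data X under a Gaussian-mixture model for the
    underlying (noise-free) distribution.  Each observed point X[i] carries
    its own Gaussian noise covariance S[i]; convolving a component
    N(m_k, V_k) with that noise gives N(m_k, V_k + S[i]), so the marginal
    likelihood of each point is still a simple mixture density."""
    ll = 0.0
    for x_i, S_i in zip(X, S):
        # component densities evaluated with noise-broadened covariances
        p_i = sum(a_k * multivariate_normal.pdf(x_i, mean=m_k, cov=V_k + S_i)
                  for a_k, m_k, V_k in zip(alpha, means, covs))
        ll += np.log(p_i)
    return ll

# toy check: two 2-d components, heteroscedastic noise per point
rng = np.random.default_rng(0)
alpha = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0], [3.0, 3.0]])
covs = np.array([np.eye(2), 0.5 * np.eye(2)])
n = 200
k = rng.choice(2, size=n, p=alpha)
true = np.array([rng.multivariate_normal(means[j], covs[j]) for j in k])
S = np.array([np.diag(rng.uniform(0.1, 1.0, size=2)) for _ in range(n)])
X = np.array([rng.multivariate_normal(t, S_i) for t, S_i in zip(true, S)])
print(xd_log_likelihood(X, S, alpha, means, covs))
```

The point of the sketch is only that the objective is the likelihood of the observed (noisy) data under the noise-convolved model; maximizing it over the mixture parameters recovers the deconvolved distribution.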

Conversations about image modeling continued with Bolton, Lang, Marshall, and Bovy. We have ideas about the parameterization of the model and ideas about the objective function, but we are totally at sea when it comes to optimization: everything we have in mind is expected to be incredibly slow. Incredibly. So I started looking into that.
