2014-05-26

why are catalogs cut at 5 sigma or harder?

Many years ago I wrote a paper with Turner that is totally wrong in its language but not totally wrong in its gist; it argues that maximum-likelihood flux estimates for faint stars tend to be over-estimates, for the same reason that maximum-likelihood parallax-based distances tend to be under-estimates (look up Lutz-Kelker if you want to know more). My old paper uses terrible language to ask and answer the question. I won't even link to it; it is that embarrassing!
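For concreteness, here is a toy Monte Carlo of that bias (nothing from the old paper; the power-law slope, flux range, and noise level are all invented): if the true flux distribution rises steeply toward the faint end, then stars selected at a given measured flux are, on average, intrinsically fainter than that measurement, because far more faint stars scatter up than bright stars scatter down.

    import numpy as np

    # Toy Monte Carlo (invented slope, flux range, and noise level): when the
    # true flux distribution rises steeply toward faint fluxes, the
    # maximum-likelihood flux (here just the noisy measurement itself) of a
    # faint star is, on average, an over-estimate of its true flux.
    rng = np.random.default_rng(42)

    alpha = 2.5            # assumed slope of the source counts, dN/dF ~ F^(-alpha)
    f_min, f_max = 1.0, 100.0
    sigma = 1.0            # Gaussian flux uncertainty, so flux 4.0 is a "4-sigma" star
    n = 2_000_000

    # draw true fluxes from the power law by inverse-transform sampling
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    f_true = (f_min ** a + u * (f_max ** a - f_min ** a)) ** (1.0 / a)

    # for a single Gaussian measurement, the ML flux estimate is the measurement
    f_ml = f_true + sigma * rng.normal(size=n)

    # stars whose measured flux says "about 4 sigma" are mostly intrinsically
    # fainter sources scattered upward by noise
    near_4sigma = np.abs(f_ml - 4.0 * sigma) < 0.25
    print("mean ML flux of selected stars:  ", f_ml[near_4sigma].mean())
    print("mean true flux of the same stars:", f_true[near_4sigma].mean())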

This weekend, Rix sent me a note on a substantially similar point. He asks (paraphrasing): When you detect a source at N sigma (in a likelihood sense), what are the posterior betting odds that you are qualitatively wrong about the existence, position, or flux of that source? The answer is a bit surprising: Even at four sigma, most sources have a large posterior probability of non-existence. The reason is partly that there are far more faint sources than bright ones, and partly that (in many cases) most of the sky is empty of (conceivably detectable) sources. So your priors are not uninformative on these points. We went back and forth today on language and explication. I am trying to argue that we should write an arXiv-only note about it all this summer.
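Here is a back-of-the-envelope version of the Bayes calculation (my toy numbers, not Rix's; the power-law slope, flux range, and patch prior are all made up): compare the hypothesis that there is a real source, with true flux drawn from a steep power-law prior, against the hypothesis of empty sky, and weight them by a small prior probability that this patch of sky contains a conceivably detectable source at all.

    import numpy as np

    # Two-hypothesis Bayes computation with invented priors: H1 = "a real
    # source is there, with true flux drawn from a steep power-law prior",
    # H0 = "empty sky", for a measurement at N sigma, folded in with a small
    # prior probability that the patch of sky holds a detectable source.
    sigma = 1.0                          # flux uncertainty; "N sigma" means measured flux N * sigma
    alpha = 2.5                          # assumed slope of the flux prior, p(F) ~ F^(-alpha)
    f = np.linspace(1.0, 20.0, 4000)     # conceivably detectable true fluxes, in units of sigma
    df = f[1] - f[0]
    prior_f = f ** -alpha
    prior_f /= prior_f.sum() * df        # normalize the flux prior on the grid

    p_patch = 1e-3                       # assumed prior probability that the patch holds a source

    for nsigma in (3.0, 4.0, 5.0, 6.0):
        d = nsigma * sigma
        # marginal likelihood under H1: Gaussian noise around each possible true flux
        like1 = np.sum(np.exp(-0.5 * ((d - f) / sigma) ** 2) * prior_f) * df
        # likelihood under H0: the measurement is a pure noise fluctuation
        # (the Gaussian normalization constant cancels in the ratio below)
        like0 = np.exp(-0.5 * (d / sigma) ** 2)
        post = p_patch * like1 / (p_patch * like1 + (1.0 - p_patch) * like0)
        print(f"{nsigma:.0f}-sigma detection: P(source is real | data) ~ {post:.2f}")

With these particular made-up numbers, the betting odds only start to favor a real source somewhere between four and five sigma, which is at least consistent with the question in the title; any similarly steep and sparse choice of priors gives the same qualitative answer.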

1 comment:

  1. What you really want is a summary of the posterior distribution over catalogs, because the samples themselves are usually going to be too big to handle. Something like the "expected true scene" map from our paper should suffice, and you could also make one that isn't weighted by flux, so you could get the posterior expected number of objects in any aperture that you care about (see the sketch below).

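To make the suggestion in the comment concrete, here is a sketch with entirely fake posterior samples (not code from any paper): averaging the catalogs over the samples gives a flux-weighted map, which is the "expected true scene" kind of object, and dropping the flux weights gives a map whose sum over any aperture is the posterior expected number of objects in that aperture.

    import numpy as np

    # Fake posterior samples: each catalog is an (n_k, 3) array of (x, y, flux),
    # with positions and fluxes drawn uniformly just for illustration.
    rng = np.random.default_rng(1)
    catalogs = [rng.uniform(0.0, 10.0, size=(rng.integers(5, 15), 3)) for _ in range(200)]

    nbins, extent = 50, (0.0, 10.0)
    count_map = np.zeros((nbins, nbins))
    flux_map = np.zeros((nbins, nbins))
    for cat in catalogs:
        x, y, flux = cat[:, 0], cat[:, 1], cat[:, 2]
        h_n, _, _ = np.histogram2d(x, y, bins=nbins, range=[extent, extent])
        h_f, _, _ = np.histogram2d(x, y, bins=nbins, range=[extent, extent], weights=flux)
        count_map += h_n
        flux_map += h_f
    count_map /= len(catalogs)   # posterior expected number of objects per pixel
    flux_map /= len(catalogs)    # flux-weighted "expected true scene" style map

    # posterior expectations inside a circular aperture of radius 1 at (5, 5)
    centers = (np.arange(nbins) + 0.5) * (extent[1] - extent[0]) / nbins + extent[0]
    xx, yy = np.meshgrid(centers, centers, indexing="ij")
    in_aperture = (xx - 5.0) ** 2 + (yy - 5.0) ** 2 < 1.0 ** 2
    print("E[number of objects in aperture] =", count_map[in_aperture].sum())
    print("E[total flux in aperture]        =", flux_map[in_aperture].sum())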