simulating telescope images

John Peterson (Purdue) is one of the principals in the LSST image simulation project. This project is insane, in that it simulates the images by laying down every single photon, from source through atmosphere, optics, and CCD. I spent a large part of today talking to him about all this, including the issue that it is hard (obviously) to see diffraction effects when you treat the light as being in the form of photons; there are some beautiful approximate methods for bridging the wave–particle duality. This is a nice problem to think about: How do you properly do simulations of a telescope, including all diffraction and refraction effects, without collapsing the wave function at the wrong time?


dark matter

I spoke about dark matter at Purdue University today. I also had some fun chatting with Maxim Lyutikov's group there over lunch.


beyond the virial radius, anthropics

Willman, Zolotov, and I had a productive email conversation today about RR Lyrae stars in deep, multi-epoch visible data. Their models (by Governato et al.) suggest that there should be a few RR Lyrae detectable in the 100 to 400 kpc range, even in SDSS Stripe 82, if only we can do robust detection below the individual-epoch detection limit. That kind of thing is my bread and butter. Do I feel a collaborative NSF proposal coming on?
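
For intuition on why sub-threshold detection works: coadding N epochs shrinks the flux uncertainty by a factor of sqrt(N), so a source sitting at half the single-epoch limit becomes significant after a few dozen visits. A minimal numpy sketch with invented numbers (not actual Stripe 82 specifics):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented numbers: a constant source at half the per-epoch noise level,
# observed in 60 epochs (Stripe-82-like cadence, purely illustrative).
n_epochs = 60
sigma = 1.0              # per-epoch flux uncertainty (arbitrary units)
true_flux = 0.5          # half the per-epoch noise: invisible in one epoch
fluxes = true_flux + rng.normal(0.0, sigma, size=n_epochs)

# Inverse-variance-weighted coadd (all weights equal here).
weights = np.full(n_epochs, 1.0 / sigma**2)
coadd_flux = np.sum(weights * fluxes) / np.sum(weights)
coadd_sigma = 1.0 / np.sqrt(np.sum(weights))   # sigma / sqrt(n_epochs)

snr_single = true_flux / sigma        # 0.5: hopeless in a single epoch
snr_coadd = coadd_flux / coadd_sigma  # roughly 0.5 * sqrt(60), about 4
```

The same arithmetic is why a robust below-threshold likelihood (rather than epoch-by-epoch catalog matching) is the thing to build.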

In the afternoon, the high-energy physics seminar was by Adam Brown (Princeton) on bubble nucleation in the eternal-inflation (string-theory-inspired) metaverse. It seems that in many natural situations, the most likely bubble to nucleate in our neighborhood (future light cone) could be to a disastrously different vacuum, perhaps even one in which there would be no volume at all (it seems you would just get mashed against the expanding bubble wall). This has implications for our fate, though the standard anthropics (about which we are pretty skeptical at Camp Hogg) protect this theory from making any falsifiable predictions about our past. I used to say that astronomy was only about the past light cone, so perhaps I should be ignoring these subjects?


candidate Hou

Fengji Hou passed his oral candidacy exam today with a talk about very fast ensemble sampling and exoplanet inference and discovery. Congratulations Fengji!


fitting curves

Hou, Goodman, and I discussed spectral fitting in the context of radial-velocity measurement, relevant to our exoplanet work. We feel like we might be able to go the next level down, as it were. In the afternoon, Dmitry Malyshev (NYU) and I discussed how Malyshev is fitting the number-flux relation for gamma-ray sources in the Fermi data below the point-source detection limit. He is finding some nice surprises in there.


the Milky Way disk

At group meeting, Zolotov talked about her simulations of Milky-Way-like galaxies, and why feedback is required (in the simulations) to limit star formation. In the afternoon, Bovy and I discussed different ways to formulate or code up the Oort problem, or the contemporary version of it, in which we try to understand disk vertical structure and (we hope) departures from axisymmetry. Also related, Foreman-Mackey and I got in touch with Larry Widrow (Queen's), who suggested we all get together to start some projects of mutual interest on inferring gravitational potentials using observations of disk stars. All talk, because that is all there is time for during job season!


human classification

Schiminovich came down and we classified by hand the eclipses Lang and I put together yesterday. Most of them are not eclipses, but there is a good sample that are. We have enough for a publication, for sure, but we have to make some decisions about scope and follow-up and testing and discussion. At the end of the day, Finkbeiner (Harvard) came and gave our colloquium, on the WMAP and Fermi haze results. He now thinks that these are not likely to be created by dark-matter effects, on morphological grounds. Just as I was leaving the building, Foreman-Mackey told me that he has implemented Hou's affine-invariant ensemble sampling method in pure Python, which has some chance of making us infinitely powerful. Indeed, he says it has sped up some of his code by enormous factors (as it does for Hou too). For me, one of the things I like most about all this is that the ensemble sampler has almost no tunable parameters; that is, very few choices. I am against choice.



Lang came into town and we spent part of the day getting the far-ultraviolet data to play nice with the near-ultraviolet data. This (above) is one of the transits we discovered. It is a re-discovery, actually, and the star that is being eclipsed is a dwarf nova, which is violently variable in the ultraviolet. NUV data in grey, FUV data in magenta, broken time axis etc etc.


star-galaxy separation

Today Willman, Ross Fadely (Haverford), and I discussed star–galaxy separation. This is usually done morphologically, but at faint levels there is far more information in the spectral energy distribution than in the deviation from a pure-PSF shape for making this determination. We batted around a few different ways to set up this problem and make a stab at it. Doing this right will be necessary (we think) if LSST is going to do reliable stellar science to its multi-epoch magnitude limit.
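
One hedged way to set up SED-based separation is a likelihood ratio between star and galaxy templates in color space. The templates and error bars below are invented for illustration; real work would marginalize over template libraries and redshift rather than comparing two fixed points:

```python
import numpy as np

# Hypothetical, hand-made "templates": mean colors (u-g, g-r, r-i, i-z)
# for a star class and a galaxy class. These numbers are made up.
star_template = np.array([1.2, 0.5, 0.2, 0.1])
galaxy_template = np.array([1.6, 0.9, 0.45, 0.3])

def log_likelihood(colors, sigmas, template):
    """Gaussian log-likelihood of observed colors given a template."""
    chi2 = np.sum(((colors - template) / sigmas) ** 2)
    return -0.5 * chi2

def classify(colors, sigmas):
    """Return 'star' or 'galaxy' by likelihood ratio in color space."""
    ls = log_likelihood(colors, sigmas, star_template)
    lg = log_likelihood(colors, sigmas, galaxy_template)
    return "star" if ls > lg else "galaxy"

# A noisy faint object whose colors sit near the star template:
obs = np.array([1.25, 0.55, 0.15, 0.12])
err = np.array([0.1, 0.1, 0.1, 0.1])
print(classify(obs, err))  # star
```

The point of the exercise: at magnitudes where the PSF deviation is lost in the noise, color likelihoods like these still discriminate.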



I started my short stint as a distinguished visitor at Haverford today, with an appearance in Beth Willman's class on astronomical ideas for non-science majors, with a seminar on the dark matter, and with an appearance in an observational astronomy laboratory course. In the latter, we took data the students had taken at KPNO last week and assembled it into color JPEGs for visualization. Of course we started at Astrometry.net.



I love the wonderful people who built and maintain the Python matplotlib plotting package. They are heroes of science. I spent the weekend fixing up our GALEX time-stream plots, and they are now far more useful and informative, with transparency and overlays and non-linear axes, all showing multiple kinds of information.
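
Not our actual GALEX plotting code, but a minimal matplotlib sketch of the ingredients mentioned (transparency, overlays, a non-linear axis), with fake data standing in for a photon time stream:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# Fake time-stream standing in for a GALEX photon light curve.
t = np.linspace(0.0, 100.0, 500)
flux = 100.0 + 5.0 * np.sin(t / 3.0) + np.random.normal(0.0, 2.0, t.size)

fig, ax = plt.subplots()
ax.plot(t, flux, color="grey", alpha=0.5, label="NUV")  # transparency
ax.axvspan(40.0, 45.0, color="magenta", alpha=0.2,
           label="eclipse?")                            # overlay
ax.set_yscale("log")                                    # non-linear axis
ax.set_xlabel("time (s)")
ax.set_ylabel("flux (counts per s)")
ax.legend()
fig.savefig("timestream.png")
```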


fast sampling and the end of time

At group meeting, Hou described our very fast MCMC algorithm; in the tests we have done so far on exoplanets (radial-velocity fitting) it beats Metropolis-Hastings MCMC by a factor of about 100 in speed. He is using an affine-invariant sampler that builds its proposal distribution from an ensemble of parallel chains. It is slick, and not complicated.
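
For the curious, the Goodman-Weare stretch move at the heart of this kind of sampler fits in a few lines of numpy. This toy version (on a hypothetical Gaussian target, not our exoplanet likelihood) updates each walker using a randomly chosen partner from the ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob(x):
    """Toy target density: a standard 2-d Gaussian."""
    return -0.5 * np.sum(x**2)

def stretch_sweep(walkers, logp, a=2.0):
    """One sweep of the Goodman-Weare stretch move over all walkers."""
    n, ndim = walkers.shape
    for k in range(n):
        j = rng.integers(n - 1)
        if j >= k:
            j += 1                  # complementary walker, j != k
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a  # g(z) ~ 1/sqrt(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        lp = log_prob(proposal)
        # Affine-invariant acceptance probability: z^(ndim-1) times ratio.
        if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp - logp[k]:
            walkers[k], logp[k] = proposal, lp
    return walkers, logp

# Run a small ensemble on the toy target.
walkers = rng.normal(size=(32, 2))
logp = np.array([log_prob(w) for w in walkers])
for _ in range(200):
    walkers, logp = stretch_sweep(walkers, logp)
```

Note the lack of knobs: the single stretch parameter `a` is essentially the only tunable thing.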

In the afternoon, Ben Freivogel (Berkeley) gave an extremely amusing talk about eternal inflation, string theory, and calculation of probabilities in the multiverse. He concludes that the only consistent way to make predictions in the theory is to put a limit on the time, and—assigning reality to his physical model—therefore finds that time has high probability of coming to an end in the next Hubble time. The argument is way out in left field, relying heavily on arguments of realism, which I reject. But I appreciate the candor: He takes the position that if you need to put a limit on time in order to consistently calculate, then time is predicted by the theory to end. I don't necessarily disagree with the latter part, but there are other reactions one can have to the former—the problem of calculating. Like: maybe eternal inflation just doesn't make predictions at all.


high purity

Schiminovich made a bunch of changes to our data extraction from the GALEX time-domain data, and now we get a very high purity sample of transits in the white dwarf population. This is very good; we are ready to clean up, run on everything, and write a paper! Clean-up includes improving our likelihood ratio calculations, and including the FUV data along with the NUV data.
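
Our actual pipeline is more involved, but the basic likelihood-ratio idea can be sketched as a box-eclipse model versus a constant-flux model, maximized over a grid of hypothetical eclipse depths (all numbers below invented):

```python
import numpy as np

def log_like(flux, ivar, model):
    """Gaussian log-likelihood of a light curve given a model."""
    return -0.5 * np.sum(ivar * (flux - model) ** 2)

def eclipse_log_like_ratio(t, flux, ivar, t0, t1, depth_grid):
    """Best-fit log-likelihood ratio of a box eclipse between t0 and t1
    versus a constant-flux model, maximizing over eclipse depth."""
    base = np.sum(ivar * flux) / np.sum(ivar)   # ML constant level
    const_ll = log_like(flux, ivar, base)
    in_eclipse = (t > t0) & (t < t1)
    best = -np.inf
    for depth in depth_grid:
        model = np.where(in_eclipse, base * (1.0 - depth), base)
        best = max(best, log_like(flux, ivar, model))
    return best - const_ll

# Invented light curve with a 30 percent dip between t = 40 and t = 45.
t = np.linspace(0.0, 100.0, 200)
truth = np.where((t > 40.0) & (t < 45.0), 0.7, 1.0)
flux = truth + np.random.default_rng(1).normal(0.0, 0.02, t.size)
ivar = np.full(t.size, 1.0 / 0.02**2)
dll = eclipse_log_like_ratio(t, flux, ivar, 40.0, 45.0,
                             np.linspace(0.0, 0.5, 51))
# dll is large for a real dip and near zero for pure noise.
```

Adding the FUV channel just means summing the two bands' log-likelihoods before taking the ratio.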

In the late afternoon, Bolton (Utah) gave a great talk about his gravitational lens sample (SLACS), and all the astrophysics he and his collaborators have done with it. He is doing more now with SDSS-III BOSS.


priors on quasar spectra

Hennawi, Tsalmantza, and I had a long conversation about why our likelihood optimization does better at measuring quasar redshifts than our posterior-PDF optimization. In the latter, we use a highly informative prior PDF: That any new quasar spectrum must look exactly like some quasar we have seen before. This is the hella-informative data-driven prior. It turns out it is too strict for our problem: We end up over-weighting quasar spectra that fit the continuum well, at the expense of the narrow features that best return the right redshift. This raises a great philosophical point, one I used to discuss with Roweis extensively: You don't necessarily want to model all of the features of your data well. You want to model well the parts of your data that matter most to your questions of interest. So if we want to use ultra-informative priors, we ought to also up-weight the informative features of the data, and remove the uninformative. This is traditionally done by filtering, which is terribly heuristic and hard to justify technically, but it has been done more quantitatively in some domains, notably by Panter and collaborators with MOPED.
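
A toy illustration of the point, with an invented spectrum and invented templates: under uniform weights, a template with the right continuum but no narrow line beats one that carries the redshift-bearing line; up-weighting the line pixels reverses the verdict:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented "spectrum": flat continuum plus one narrow emission line.
wave = np.linspace(4000.0, 5000.0, 500)
line = np.exp(-0.5 * ((wave - 4500.0) / 5.0) ** 2)
spectrum = 1.0 + 0.5 * line + rng.normal(0.0, 0.05, wave.size)

def chi2(model, weights):
    """Weighted sum of squared residuals against the spectrum."""
    return np.sum(weights * (spectrum - model) ** 2)

# Template A: right continuum, no line. Template B: wrong continuum,
# but it carries the narrow line (and hence the redshift information).
template_a = np.full(wave.size, 1.0)
template_b = 0.9 + 0.5 * line

uniform = np.ones(wave.size)
line_weighted = 1.0 + 100.0 * line  # up-weight pixels near the line

a_wins_uniform = chi2(template_a, uniform) < chi2(template_b, uniform)
b_wins_weighted = (chi2(template_b, line_weighted)
                   < chi2(template_a, line_weighted))
```

The weighting function here is hand-made; the MOPED-style move is to derive such weights from the derivatives of the model with respect to the parameter you care about.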


GALEX meta-data

Schiminovich and I spent a good part of the day trying to understand brightness variations in GALEX sources, taken not from the official catalogs (which are great) but rather from our own analyses of some intermediate data products ("movies" of the photon arrival data). We are looking for time variations that are real, but we are being fooled by some meta-data problems, in which exposure times are not what we think they are. It was a frustrating day, but we did find what appears to be the problem. It is not clear that we can fix it adequately, so we might have to take the Astrometry.net-endorsed route of reconstructing the meta-data from the data through consistency or internal calibration.


spectral variability

Inspired by work this summer with Kasper Schmidt (MPIA), I had Price-Whelan and Foreman-Mackey compare spectra from BOSS and SDSS to look for variability. They find enormous variations in the quasars but also in the F-type stars used for calibration! So there is something wrong with the BOSS calibration; we will investigate next week. In the morning, Eyal Kazin (NYU) gave the group meeting, on the baryon acoustic feature.



I spent the day at Yale with Hou, talking with Debra Fischer and Christian Schwab about various possible exoplanet projects we might do together. In the short term we came up with a few easy projects for Hou's fast sampling methods, including looking at velocity offsets between different instruments and comparing exoplanet models for stellar velocity residuals with stellar oscillation models. For the longer term, we discussed how the velocities are measured, which is extremely non-trivial, and also the Kepler data, many of which are already public. I have a very good feeling that we have started a productive collaboration.


just chatting

Not much research gets done during job season! What a hit to the world's research it is. In my research time today, all I did was talk, with Foreman-Mackey about variance tensors, with Zolotov about cusp catastrophes, and with Jiang about merger rates in the literature.



I had a great day at the Institute for Advanced Study, my old stomping grounds. I gave a black-board talk (no computer), which shocked a few people. After my talk, I learned about ultra-high magnification microlensing events from Subo Dong (IAS) who convinced me that you can discover planets in a wide range of configurations with these systems. I also discussed with an editor the possibility of doing a Sloan Digital Atlas of Galaxies, something I have been dreaming about for years.


inference talk

I thought about my IAS talk tomorrow; I am giving a talk I have never given before, about modeling the data, and the power that gives you. After our Brown Bag seminar (by Gabadadze, on the cosmological constant problem), I chatted with my various students about the possibility that we could re-reduce the WMAP data. With what I learned at Leiden, I am certain that it is possible to make a higher signal-to-noise map than the official map, and get better parameters. But that is a huge job I am not willing to take on. I also tried to talk Kazin (NYU) and Blanton into some homogeneity tests, with little effect. A scattered day, but there are lots of good ideas floating around. One issue I might mention tomorrow: There is no proper probabilistic approach (that I know of) for measuring two-point functions. Readers: Do you know anything?
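
For context, the usual approach is estimator-based rather than probabilistic: pair counts of data against a random catalog. A 1-d toy sketch with invented points (real work would use the Landy-Szalay estimator, which adds data-random cross counts):

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_counts(x, y, edges):
    """Histogram of pair separations between two 1-d point sets."""
    d = np.abs(x[:, None] - y[None, :]).ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

# Invented 1-d "survey": 200 data points and a 2000-point random catalog.
data = rng.random(200)
rand = rng.random(2000)
edges = np.linspace(0.0, 0.5, 11)

dd = pair_counts(data, data, edges)  # includes self-pairs; a sketch only
rr = pair_counts(rand, rand, edges)

# Natural estimator xi = (DD / RR) - 1, with pair counts normalized by
# the number of (ordered) pairs in each catalog.
xi = (dd / len(data) ** 2) / (rr / len(rand) ** 2) - 1.0
```

Nothing in this is a likelihood; that is exactly the gap in the question above.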


reading papers

On the plane home, I read and commented on Bovy's extreme-deconvolution quasar target selection paper, and also Guangtun Zhu's (NYU) paper on finding maser galaxies (for, among other things, Hubble Constant measurement) using the SDSS data. I also worked on various writing projects, including Zolotov's cusp paper.