exploding white dwarfs

Abi Polin (Berkeley) came through NYU this week. Today she delivered a great seminar on explosions of white dwarfs. She is looking at different ignition mechanisms, and trying to predict the resulting supernova spectra and light curves. This modeling requires a huge range of physics, including gastrophysics, nuclear reaction networks, and photospheres (both for absorption and emission lines). The current models have serious limitations (like being one-d, which she intends to fix during her PhD), but they strongly suggest that type Ia supernovae (the ones that are created by white-dwarf explosions) come from a narrow range in white-dwarf mass. If you go too high in mass, you over-produce nickel. If you go too low in mass, you under-produce nickel and get way under-luminous. In addition to the NYU CCPP crew, Saurabh Jha (Rutgers) and Armin Rest (STScI) were in the audience, so the talk was followed by a lively lunch! Jha suggested that the narrow mass range implied by the talk could also help with understanding the standard-candle-ness of these explosions.


epoch of reionization

I had the realization that I can reduce my concerns about radial-velocity fitting (given a spectrum) to the problem of centroiding a single spectral line, and then scale up using information theory. So there is a paper to write! I sketched an abstract.
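As a toy version of that reduction (my sketch, with all numbers invented), here is how the Cramér–Rao bound on the centroid of a single Gaussian absorption line can be computed and checked against the scatter of a brute-force least-squares centroid estimator:

```python
import numpy as np

rng = np.random.default_rng(17)

# hypothetical setup: one Gaussian absorption line on a unit continuum
x = np.linspace(-10., 10., 401)          # wavelength-like pixel grid
sigma_line, depth, noise = 1.5, 0.4, 0.01

def model(mu):
    return 1. - depth * np.exp(-0.5 * ((x - mu) / sigma_line) ** 2)

# Fisher information for the centroid mu, assuming iid Gaussian pixel noise:
# I(mu) = sum_i (d model_i / d mu)^2 / noise^2
mu0 = 0.
dmodel = (model(mu0 + 1e-5) - model(mu0 - 1e-5)) / 2e-5
crb = 1. / np.sum(dmodel ** 2 / noise ** 2)   # variance lower bound

# empirical scatter of a grid-search least-squares centroid estimator
mus = np.linspace(-0.5, 0.5, 2001)
templates = np.array([model(m) for m in mus])
estimates = []
for _ in range(300):
    data = model(mu0) + noise * rng.normal(size=x.size)
    chi2 = np.sum((data - templates) ** 2, axis=1)
    estimates.append(mus[np.argmin(chi2)])
print(np.sqrt(crb), np.std(estimates))  # the estimator should approach the bound
```

Scaling up to a full spectrum is then (roughly) a matter of summing the Fisher information over all the lines.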

In the morning, Andrei Mesinger (SNS) gave a talk about the epoch of reionization. He argued fairly convincingly that, between Planck, Lyman-alpha emission from very high-redshift quasars and galaxies, and the growth of dark-matter structure, the epoch of reionization is pretty well constrained now, at a redshift of around 7 to 8. The principal observation (from my perspective) is that the optical depth to the surface of last scattering is close to the minimum possible value (given what we know out to redshifts of 5 or 6). He also discussed what we will learn from 21-cm projects, and—like Colin Hill a few weeks ago—is looking for the right statistics. I really have to start a project that finds decisive (and symmetry-constrained) summary statistics, given simulations!


Nice; counter-rotating disks

At Stars group meeting, Keith Hawkins (Columbia) summarized the Nice meeting on Gaia. Some Gaia Sprint and Camp Hogg results were highlighted there in Anthony Brown's talk, apparently. There were results on Gaia accuracy of interest to us (and testable by us), and also things about the velocity distribution in the Galaxy halo.

Tjitske Starkenburg (Flatiron) talked about counter-rotating components in disk galaxies: She would like to identify observational signatures that appear in both the simulations and the data. But she also wants to understand their origins in the simulations. Interestingly, she finds many different detailed formation histories that can lead to counter-rotating components, which is consistent with their high frequency in the observed samples.


falsifying results by philosophical argument

I finally got some writing done today, in the Anderson paper on the empirical, deconvolved color-magnitude diagram. We are very explicitly structuring the paper around the assumptions, and each of the assumptions has a name. This is part of my grand plan to develop a good, repeatable, useful, and informative structure for a data-analysis paper.

I missed a talk last week by Andrew Pontzen (UCL), so I found him today and discussed matters of common interest. It was a wide-ranging conversation but two highlights were the following: We discussed causality or causal explanations in a deterministic-simulation setting. How could it be said that “mergers cause star bursts”? If everything is deterministic, isn't it equally true that star bursts cause mergers? One question is the importance of time or time ordering (or really light-cone ordering). For the statisticians who think about causality this doesn't enter explicitly. I think that some causal statements in galaxy evolution are wrong on philosophical grounds, but we decided that maybe there is a way to save causality provided that we always refer to the initial conditions (kinematic state) on a prior light cone. Oddly, in a deterministic universe, causal explanations are mixed up with free will and subjective knowledge questions.

Another thing we discussed is a very neat trick he figured out to reduce cosmic variance in simulations of the Universe: Whenever you simulate from some initial conditions, also simulate from the negative of those initial conditions (all phases rotated by 180 degrees, or all over-densities turned to under, or whatever). The average of these two simulations will cancel out some non-trivial terms in the cosmic variance!
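A toy illustration of the trick (my own sketch, not Pontzen's pipeline): treat a quadratic transformation of a Gaussian field as a stand-in for a nonlinear simulation, and compare the variance of a summary statistic with and without the sign-flipped partner:

```python
import numpy as np

rng = np.random.default_rng(42)

def observable(delta):
    # toy stand-in for a nonlinear simulation: linear growth plus a
    # quadratic coupling; real structure formation is far richer
    return np.mean(delta + 0.5 * delta ** 2)

n_modes, n_trials = 256, 2000
single, paired = [], []
for _ in range(n_trials):
    delta = rng.normal(size=n_modes)      # toy "initial conditions"
    single.append(observable(delta))
    # the trick: also "simulate" the sign-flipped initial conditions and average
    paired.append(0.5 * (observable(delta) + observable(-delta)))

print(np.var(single), np.var(paired))  # the paired variance is much smaller
```

The contribution that is odd in the over-density cancels exactly within each pair, which is where the variance reduction comes from.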

The day ended with a long call with Megan Bedell (Chicago), going over my full list of noise sources in extreme precision radial-velocity data (think: finding and characterizing exoplanets). She confirmed everything in my list, added a few new things, and gave me keywords and references. I think a clear picture is emerging of how we should attack (what NASA engineers call) the tall poles. However, it is not clear that the picture will get set down on paper in time for the Exoplanet Research Program funding call!



Today not much! I had a valuable conversation with Trisha Hinners (NG Next) about machine-learning projects with the Kepler data, and I did some pen-and-paper writing and planning for my proposal on exoplanet-related extreme precision radial-velocity measurements.


vertical action is a clock

Ruth Angus (Columbia) and I discussed the state of her hierarchical Bayesian model to self-calibrate a range of stellar age indicators. Bugs are fixed and it appears to be working. We discussed the structure of a Gibbs sampler for the problem. We reviewed work Angus and also Melissa Ness (MPIA) did at the 2016 NYC Gaia Sprint on vertical action dispersion as a function of stellar age. Beautiful results! We had an epiphany and decided that we have to publish these results, without waiting for the Bayesian inference to be complete. That is, we should publish a simple empirical paper based on TGAS, proposing the general point that vertical action provides a clock with very good properties: It is not precise, but it is potentially very accurate, because it is very agnostic about what kind of star it is timing.
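For concreteness, in a harmonic (thin-disk) approximation to the vertical potential the action is just J_z = E_z / ν. Here is a sketch, with a made-up vertical frequency and made-up stellar kinematics:

```python
import numpy as np

def vertical_action(z_kpc, vz_kms, nu=0.07):
    """Vertical action in the harmonic approximation,
    J_z = E_z / nu = (v_z^2 + nu^2 z^2) / (2 nu).
    nu is a vertical frequency in km/s/pc; the default here is a
    made-up round number, not a fit to the Milky Way disk."""
    z_pc = 1e3 * np.asarray(z_kpc)
    vz = np.asarray(vz_kms)
    return (vz ** 2 + (nu * z_pc) ** 2) / (2. * nu)   # units: km/s * pc

# a thin-disk-like star vs a thick-disk-like star (hypothetical numbers):
# the (presumably older) thick-disk star has the much larger action
print(vertical_action(0.05, 5.), vertical_action(0.8, 30.))
```

The clock idea is then that the dispersion of J_z in a population grows with age, independently of stellar type.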


cosmological anomalies

I had lunch with Jesse Muir (Michigan), and then she gave an informal seminar after lunch. She has been working on a number of things in cosmological measurement. One highlight is an investigation of the anomalies (strange statistical outliers or badly fit aspects) in the CMB: She has asked how they are related, and whether they are really independent. I discussed with her the possibility that we might be able to somehow lexicographically order all possible anomalies and then search for them in an ordered way, keeping track of all possible measurements and their outcomes, as a function of position in the ordering. The reason I am interested in this is because some of the anomalies are “odd enough” that I would expect them to come up pretty late in any ordering. That makes them not-that-anomalous! This somehow connects to p-values and p-hacking and so on. I also discussed with Muir the possibility of looking for anomalies in the large-scale structure. This should be an even richer playground.
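The multiple-testing intuition behind "not-that-anomalous" is easy to simulate. Here is a toy demo (my numbers, nothing to do with Muir's actual analysis) showing that the single most anomalous of K independent null tests looks far more significant than it really is:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy look-elsewhere demo: under the null hypothesis, scan K independent
# test statistics and record the most "anomalous" (smallest) p-value
K, n_trials = 100, 5000
min_p = np.array([rng.uniform(size=K).min() for _ in range(n_trials)])

# a nominal "p = 0.01 anomaly" turns up in most scans of 100 tests:
# the expected fraction is 1 - 0.99**100, about 0.63
print(np.mean(min_p < 0.01))
```

An ordered search through all possible anomalies would, in effect, keep track of this trials factor explicitly.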


BHs in GCs, and a new job

In Stars group meeting, Ruth Angus (Columbia) showed her catalog of rotation periods in the Kepler and K2 fields. She has a huge number! We discussed visualizations of these that would be convincing and also possibly create new scientific leads.

Also in Stars group meeting, Arash Bahramian (MSU) spoke about black holes in globular clusters. He discussed how they use simultaneous radio and X-ray observations to separate the BHs from neutron stars: Radio reveals jet energy and X-ray reveals accretion energy, which (empirically) are different for BHs and NSs. However, in terms of making a data-driven model, the only situation in which you can be confident that something is a NS is when you see X-ray bursts (because: surface effect), and the only situation in which you can be confident that something is a BH is when you can see a dynamical mass substantially greater than 1.4 Solar masses (because: equation of state). He highlighted some oddities around the cluster Terzan 5, which is the globular cluster with the largest number of X-ray sources, and also an extremely high density and inferred stellar collision rate. This was followed by much discussion of relationships between collision rate and other cluster properties, and also some discussion of individual X-ray sources.

[In non-research news: Today I became an employee of the Flatiron Institute, as a new group leader within the CCA! Prior to today I was only in a consulting role.]


looking at the Sun, through the freakin' walls

In the CCPP Brown-Bag talk, Duccio Pappadopulo (NYU) gave a very nice and intuitive introduction to the strong CP problem (although he really presented it as the strong T problem!). He discussed the motivation for the QCD axion and then experimental bounds on it. He mentioned at the end his own work that permits the QCD axion to have much stronger couplings to photons, and therefore be much more readily detected in the laboratory. He discussed an important kind of experiment that I had not heard about previously: The helioscope, which is an x-ray telescope in a strong magnetic field, looking at the Sun, but inside a shielded building (search "axion helioscope"). That is, the experiment asks the question: Can we see through the walls? This tests the coupling of the QCD sector and the photon to the axion, because (QCD) axions are created in the Sun, and some will convert (using the magnetic field to obtain a free photon) into x-ray photons at the helioscope. Crazy, but seriously these are real experiments! I love my job.


Dr Yuqian Liu

Today it was a pleasure to participate in the PhD defense of Yuqian Liu (NYU), who has exploited the world's largest dataset on stripped supernovae, part of the huge spectral collection of Maryam Modjaz's group at NYU. She pioneered various data-driven methods for the spectral analysis. One is to create a data-driven or empirical noise model using filtering in the Fourier domain. Another is to fit shifted and broadened lines using empirical spectra and Bayesian inference. She uses these methods to automatically make uniform measurements of spectral features from very heterogeneous data from multiple sources of different levels of reliability. Her results rule out various (one might say: all!) physical models for these supernovae. Her results are all available open-source, and she has pushed her results into SNID, which is the leading software supernova classifier. Congratulations Dr Liu!


asteroseismological estimators; and Dr Hahn!

Because Dan Huber (Hawaii) was in the city today, we moved Stars group meeting to Thursday! He didn't disappoint, telling us about asteroseismology projects in the Kepler and K2 data. He likes to emphasize that the >20,000 stars in the Kepler field that have measured nu-max and delta-nu have—every one of them—been looked at by (human) eye. That is, there is no fully safe automated method for measuring these. My loyal reader knows that this is a constant subject of conversation in group meeting, and has been for years now. We discussed developing better methods than what is done now.

In my mind, this is all about constructing estimators, which is something I know almost nothing about. I proposed to Stephen Feeney (Flatiron) that we simulate some data and play around with it. Sometimes good estimators can be inspired by fully Bayesian procedures. We could also go fully Bayes on this problem! We have the technology (now, with new Gaussian-Process stuff). But we anticipate serious slowness: We need methods that will work for TESS, which means they have to run on hundreds of thousands to millions of light curves.

In the afternoon, Chang Hoon Hahn (NYU) defended his PhD, which is on methods for making large-scale structure measurements. We have joked for many years that my cosmology group meeting is always and only about fiber collisions. (Fiber collisions: Hardware-induced configurational constraints on taking spectra or getting redshifts of galaxies that are close to one another on the sky.) This has usually been Hahn's fault, and he didn't let us down in his defense. Fiber collisions is a problem that seems like it should be easy and really, really is not. It is an easy problem to solve if you have an accurate cosmological model at small scales! But the whole point is that we don't. And in the future, when surveys use extremely complicated fiber positioners (instead of just drilling holes), the fiber-collision problem could become very severe. Very. As in: It might require knowing (accurately) very high-point functions of the galaxy distribution. More on this at some point: This problem has legs. But, in the meantime: Congratulations Dr Hahn!


Kronos–Krios; photometric redshifts without training

In the early morning, Ana Bonaca (Harvard) and I discussed our information-theory project on cold stellar streams. We talked about generalizing our likelihood model or form, and what that would mean for the lower bound (on the variance of any unbiased estimator; the Cramér–Rao bound). I have homework.

At the Flatiron, instead of group meeting (which we moved to tomorrow), we had a meeting on the strange pair of stars that Semyeong Oh (Princeton) and collaborators have found, with very odd chemical differences. We worked through the figures for the paper, and all the alternative explanations for their formation, sharpening up the arguments. In a clever move, David Spergel (Flatiron) named them Kronos and Krios. More on why that, soon.

In the afternoon, in cosmology group meeting, Boris Leistedt (NYU) talked about his grand photometric-redshift plan, in which the templates and the redshifts are all estimated together in a beautiful hierarchical model. He plans to get photometric redshifts with no training redshifts whatsoever, and also no use of pre-set or known spectral templates (though he will compose the data-driven templates out of sensible spectral components). There was much discussion of the structure of the graphical model (in particular about selection effects). There was also discussion about doing low-level integrals fast or analytically.


don't cross-correlate with the wrong template!

In principle, writing a funding proposal is supposed to give you an opportunity to reflect on your research program, think about different directions, and get new insights about projects not yet started. In practice it is a time of copious writing and anxiety, coupled with a lack of sleep! However, I have to admit that today my experience was the former: I figured out (in preparing my Exoplanet Research Program proposal for NASA) that I have been missing some very low-hanging fruit in my thinking about the error budget for extreme precision radial-velocity experiments:

RVs are obtained (usually) by cross-correlations, and cross-correlations only come close to saturating the Cramér–Rao bound when the template spectrum is extremely similar to the true spectrum. That just isn't even close to true for most pipelines. Could this be a big term in the error budget? Maybe not, but it has the great property that I can compute it. That's unlike most of the other terms in the error budget! I had a call with Megan Bedell (Chicago) at the end of the day to discuss the details of this. (This also relates to things I am doing with Jason Cao (NYU).)
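Here is a toy numpy experiment along those lines (all widths and noise levels invented): cross-correlating against a template that is deliberately too broad still centers on the line, but it inflates the scatter of the recovered shift relative to the matched template:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-20., 20., 801)          # wavelength-like pixel grid
shifts = np.linspace(-1., 1., 801)       # trial centroid shifts

def template_bank(width):
    # one shifted Gaussian absorption line per row
    return -np.exp(-0.5 * ((x[None, :] - shifts[:, None]) / width) ** 2)

good_bank = template_bank(1.0)   # matches the "true" line width
bad_bank = template_bank(3.0)    # deliberately too-broad template

noise = 0.02
good, bad = [], []
for _ in range(400):
    # true spectrum: the unshifted narrow line, plus iid Gaussian noise
    data = good_bank[shifts.size // 2] + noise * rng.normal(size=x.size)
    good.append(shifts[np.argmax(good_bank @ data)])   # matched CCF peak
    bad.append(shifts[np.argmax(bad_bank @ data)])     # mismatched CCF peak
print(np.std(good), np.std(bad))  # the mismatched template inflates the scatter
```

The matched case comes close to the Cramér–Rao bound; the mismatch term is exactly the kind of computable contribution to the error budget I have in mind.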

In other news, I spent time reading about linear algebra, (oddly) to brush up on some notational things I have been kicking around. I read about tensors in Kusse and Westwig and, in the end, I was a bit disappointed: They never use the transpose operator on vectors, which I think is a mistake. However, I did finally (duh) understand the difference between contravariant and covariant tensor components, and why I have been able to do non-orthonormal geometry (my loyal reader knows that I think of statistics as a sub-field of geometry) for years without ever worrying about this issue.


Dr Sanford

I gave the CCPP Brown-Bag talk today, about how the Gaia mission works, according to my own potted story. I focused on the beautiful hardware design and the self-calibration.

Before that, Cato Sanford (NYU) defended his PhD, about modeling non-equilibrium systems in which there are swimmers (think: cells) in a homogeneous fluid. He used a very simple Gaussian Process as the motive force for each swimmer, and then asked things like: Is there a pressure force on a container wall? Are there currents when the force landscape is non-trivial? And so on. His talk was a bit bio-stat-mech for my astrophysical brain, but I was stoked with the results, and I feel like the things we have done with Gaussian Processes might lead to intuitions in these crazy situations. The nice thing is that if you go from Brownian motion to a GP-regulated walk, you automatically go out of equilibrium!