2017-08-15

a mistake in an E-M algorithm

[I am on quasi-vacation this week, so only posting irregularly.]

I (finally—or really for the N-th time, because I keep forgetting) understood the basis of E-M algorithms for optimizing (what I call) marginalized likelihoods in latent-variable models. I then worked out the equations for the E-M step for factor analysis, and a generalization of factor analysis that I hope to use in my project with Christina Eilers (MPIA).

Imagine my concern when I got a different update step than I find in the writings of my friend and mentor Sam Roweis (deceased), who is the source of all knowledge, as far as I am concerned! I spent a lot of time looking up stuff on the web, and most things agree with Roweis. But finally I found this note by Andrew Ng (Stanford / Coursera), which agrees with me (and disagrees with Roweis).

If you care about the weeds, the conflict is between equation (8) in those Ng notes and page 3 of these Roweis notes. It is a subtle difference, and it takes some work to translate notation. I wonder whether the many documents that match Roweis derive from (possibly unconscious) propagation from Roweis, or whether the flow is in the other direction, or whether it is just that the mistake is an easy one to make? Oddly, Ng decorates his equation (8) with a warning about an error you can easily make, but it isn't the error that Roweis made.
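
Since I will inevitably forget this again, here is the factor-analysis E-M iteration as I now believe it, as a minimal numpy sketch. This is my own transcription (zero-mean data, my names and shapes); check it against the Ng notes rather than trusting me:

```python
import numpy as np

def em_factor_analysis_step(X, Lambda, Psi):
    """One E-M iteration for factor analysis on zero-mean data X (N, D),
    with factor loadings Lambda (D, K) and per-dimension noise variances
    Psi (D,). A sketch, not production code."""
    N, K = X.shape[0], Lambda.shape[1]
    # E-step: Gaussian posterior over the latent z_n for each data point.
    G = np.linalg.inv(np.eye(K) + Lambda.T @ (Lambda / Psi[:, None]))  # posterior covariance
    mu = (X / Psi[None, :]) @ Lambda @ G  # (N, K) posterior means
    Ezz = N * G + mu.T @ mu  # sum over n of E[z_n z_n^T]
    # M-step; the N * G term is the one that is easy to drop by accident
    # (that is, using E[z] E[z]^T where E[z z^T] is required).
    Lambda_new = (X.T @ mu) @ np.linalg.inv(Ezz)
    Psi_new = (np.sum(X**2, axis=0) - np.sum(X * (mu @ Lambda_new.T), axis=0)) / N
    return Lambda_new, Psi_new
```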

So much of importance in computer science and machine learning is buried in lecture notes and poorly indexed documents in user home pages. This is not a good state of affairs!

2017-08-11

serious bugs; dimensionality reduction

Megan Bedell (Chicago) and I had a scare today: Although we can show that in very realistically simulated fake data (with unmodeled tellurics, wrong continuum, and so on) a synthetic spectrum (data-driven) beats a binary mask for measuring radial velocities, we found that, in real data from the HARPS instrument, the mask was doing better. Why? We went through a period of doubting everything we know. I was on the point of resigning. And then we realized it was a bug in the code! Whew.

Adrian Price-Whelan (Princeton) also found a serious bug in our binary-star fitting. The thing we were calculating as the pericenter distance was actually the minimum distance of the primary star's center of mass from the system barycenter. That's not the minimum separation of the two stars! Duh. That had us rolling on the floor laughing, as the kids say, especially since we might have gotten all the way to submission without noticing that absolutely critical bug.

At the end of the day, I gave the Königstuhl Colloquium, on the blackboard, about dimensionality reduction. I started with a long discussion about what is good and bad about machine learning, and then went (too fast!) through PCA, ICA, kernel-PCA, PPCA, factor analyzers, HMF, E-M algorithms, latent-variable models, and the GPLVM, drawing connections between them. The idea was to give the audience context and jumping-off points for their projects.

2017-08-10

micro-tellurics

Today, in an attempt to make our simulated extreme-precision radial-velocity fake data as conservative as possible, Megan Bedell (Chicago) and I built a ridiculously pessimistic model for un-modeled (and unknown) telluric lines that could be hiding in the spectra, at amplitudes too low to be clearly seen in any individual spectrum, but with the full wavelength range bristling with lines. Sure enough, these “micro-tellurics” (as you might call them) do indeed mess up radial-velocity measurements. The nice thing (from our perspective) is that they mess up the measurements in a way that is co-variant with barycentric velocity, and they mess up synthetic-spectrum-based RV measurements less than binary-mask-based RV measurements.
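
To give a flavor of what I mean by ridiculously pessimistic, here is a toy generator for a micro-telluric transmission spectrum. All the numbers (line density, depths, widths) are invented for illustration and are not the values Bedell used:

```python
import numpy as np

rng = np.random.default_rng(42)

def micro_telluric_transmission(wave, n_lines=1000, max_depth=0.01, width=0.05):
    """Toy pessimistic telluric spectrum: the full wavelength range (wave,
    in Angstroms) bristling with weak Gaussian absorption lines, each too
    shallow to be clearly seen in any individual spectrum."""
    centers = rng.uniform(wave.min(), wave.max(), n_lines)
    depths = rng.uniform(0.0, max_depth, n_lines)
    trans = np.ones_like(wave)
    for c, d in zip(centers, depths):
        trans *= 1.0 - d * np.exp(-0.5 * ((wave - c) / width) ** 2)
    return trans

# Tellurics live in the observatory frame; in the stellar rest frame they
# slide around with the barycentric correction, which is why their damage
# to the measured RVs co-varies with barycentric velocity.
```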

At MPIA Galaxy Coffee, Irina Smirnova-Pinchukova (MPIA) gave a great talk about her trip on a SOFIA flight.

2017-08-09

machine learning, twins, excitation temperature

After our usual start at the Coffee Nerd, it was MW Group Meeting, where we discussed (separately) Cepheids and Solar-neighborhood nucleosynthesis. On the latter, Oliver Philcox (St Andrews) has taken the one-zone models of Jan Rybizki (MPIA) and made them 500 times faster using a neural-network emulator. This emulator is tuned to interpolate a set of (slowly computed) models very quickly and accurately. That's a good use of machine learning! Also, because of backpropagation, it is possible (I think) to take derivatives of the emulator outputs with respect to its inputs, which is just what you want for optimization and sampling.
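
To make the derivative point concrete, here is a toy: for a one-hidden-layer network (a stand-in only; I don't know the architecture Philcox actually used), the input Jacobian is a single chain-rule line:

```python
import numpy as np

def emulator(theta, W1, b1, W2, b2):
    """Toy one-hidden-layer network standing in for the trained emulator."""
    h = np.tanh(W1 @ theta + b1)
    return W2 @ h + b2

def emulator_jacobian(theta, W1, b1, W2, b2):
    """d(outputs)/d(inputs): the same machinery that backpropagates
    gradients during training delivers these derivatives essentially free."""
    h = np.tanh(W1 @ theta + b1)
    return W2 @ np.diag(1.0 - h**2) @ W1
```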

The afternoon's PSF Coffee meeting had presentations by Meg Bedell (Chicago) about Solar Twin abundances, and by Richard Teague (Michigan) about the protoplanetary disk TW Hya. On the former, Bedell showed that she can make extremely precise measurements, because a lot of theoretical uncertainties cancel out. She finds rock-abundance anomalies (that is, abundance anomalies that are stronger in high-condensation-temperature lines) all over the place, which is context for results from Semyeong Oh (Princeton). On TW Hya, Teague showed that it is possible to get pretty consistent temperature information out of line ratios. I would like to see two-dimensional maps of those: Are there embedded temperature anomalies in the disk?

2017-08-08

latent-variable model; bound-saturating EPRV

Today, Christina Eilers (MPIA) and I switched her project over to a latent variable model. In this model, stellar spectra (every pixel of every spectrum) and stellar labels (Teff, logg, and so on for every star) are treated on an equal footing as “data”. Then we fit an underlying low-dimensional model to all these data (spectra and labels together). By the end of the day, cross-validation tests were pushing us to higher and higher dimensionality for our latent space, and the quality of our predictions was improving. This seems to work, and is a fully probabilistic generalization of The Cannon. Extremely optimistic about this!
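
Here is a cartoon of why putting spectra and labels on an equal footing buys predictions in both directions. Assume a fitted linear latent-variable model (loadings Lambda, per-entry noise variances Psi, mean vector); everything here, names included, is illustrative and much simpler than the real model:

```python
import numpy as np

def predict_missing(y, good, Lambda, Psi, mean):
    """y is one star's stacked data vector (spectrum pixels then labels);
    `good` is a boolean mask of observed entries. Infer the latent z from
    the observed entries, then read off the model's prediction for every
    entry, including missing labels (or missing pixels)."""
    L, P = Lambda[good], Psi[good]
    K = Lambda.shape[1]
    G = np.linalg.inv(np.eye(K) + L.T @ (L / P[:, None]))  # posterior covariance
    z = G @ (L / P[:, None]).T @ (y[good] - mean[good])    # posterior mean latent
    return mean + Lambda @ z
```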

Also today, Megan Bedell (Chicago) built a realistic-data simulation for our EPRV project, and also got pipelines working that measure radial velocities precisely. We have realistic, achievable methods that saturate the Cramér–Rao bound! This is what we had planned to do this week, not today! However, we have a serious puzzle: We can show that a data-driven synthetic spectral template saturates the bound for radial-velocity measurement, and that a binary mask template does not. But we find that the binary mask is so bad, we can't understand how the HARPS pipeline is doing such a great job. My hypothesis: We are wrong that HARPS is using a binary mask.

2017-08-07

linear models for stars

My loyal reader knows that my projects with Christina Eilers (MPIA) failed during the #GaiaSprint, and we promised to re-group. Today we decided to take one last attempt, using either heteroskedastic matrix factorization (or other factor-analysis-like method) or else probabilistic principal components analysis (or a generalization that would be heteroskedastic). The problem with these models is that they are linear in the data space. The benefit is that they are simple, fast, and interpretable. We start tomorrow.

I made a plausible paper plan with Megan Bedell (Chicago) for our extreme-precision radial-velocity project, in which we assess the information loss from various methods for treating the data. We want to make very realistic experiments and give very pragmatic advice.

I also watched as Adrian Price-Whelan (Princeton) used The Joker to find some very strange binary-star systems with red-clump-star primaries: Since a RC star has gone up the giant branch and come back down, it really can't have a companion with a small periastron distance! And yet...

2017-08-06

enfastenating

Various hacking sessions happened in undisclosed locations in Heidelberg this weekend. The most productive moment was that in which—in debugging a think-o about how we combine independent samplings in The Joker—Adrian Price-Whelan (Princeton) and I found a very efficient way to make our samplings adapt to the information in the data (likelihood). That is, we used a predictive adaptation to iteratively expand the number of prior samples we use to an appropriate size for our desired posterior output. (Reminder: The Joker is a rejection sampler.) This ended up speeding up our big parallel set of samplings by a factor of 8-ish!
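
Schematically, the adaptation looks like the following, where rejection_sample(n) stands for one pass of The Joker over n prior samples; the interface and numbers are invented for illustration, not The Joker's actual code:

```python
import numpy as np

def adaptive_sampling(rejection_sample, n_target, n_start=2**10, n_max=2**28):
    """Grow the prior-sample budget until the rejection step returns about
    n_target posterior samples, predicting the budget from the observed
    acceptance rate."""
    n = n_start
    kept = rejection_sample(n)
    while len(kept) < n_target and n < n_max:
        accept = max(len(kept), 1) / n                 # observed acceptance rate
        n = min(int(1.2 * n_target / accept), n_max)   # predictive expansion
        kept = rejection_sample(n)
    return kept
```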

2017-08-04

M-dwarf spectral types; reionization

Jessica Birky (UCSD) and I met with Derek Homeier (Heidelberg) and Matthias Samland (MPIA) to update them on the status of the various things Birky has been doing, and discuss next steps. One consequence of this meeting is that we were able to figure out a few well-defined goals for Birky's project by the end of the summer:

Because of a combination of too-small training set and optimization issues in The Cannon, we don't have a great model for M-dwarf stars (yet) as a function of temperature, gravity, and metallicity. That's too bad! But on the other hand, we do seem to have a good (one-dimensional) model of M-dwarf stellar spectra as a function of spectral type. So my proposal is the following: We use the type model to paint types onto all M-dwarf stars in the APOGEE data set; these types will probably correlate very well with temperature over a range of metallicities. We can then use those results to create recommendations about what spectral modeling would lead to a good model in the more physical parameters.

Late in the day, José Oñorbe (MPIA) gave a great talk about the empirical study of reionization. He began with a long and much-needed review of all the ways you can measure reionization, using radio imaging, the Lyman-alpha forest, damping wings, cosmic microwave background polarization, and so on. This brought together a lot of threads I have been hearing about over the last few years. He then showed his own work on the Lyman-alpha forest, where they exploit the memory that the low-density gas retains of its thermal history. They get good results even with fairly toy models, which is very promising. All indicators, by the way, suggest a very late reionization (redshifts 7 to 9 for the mid-point of the process). That's good for observability.

2017-08-03

planning; marginalization

I had phone calls with Megan Bedell (Chicago) and Lauren Anderson (Flatiron) to discuss near-term research plans. Anderson and I discussed whether the precise MW mapping we were doing could be used to measure the length, strength, and amplitude of the Milky Way bar. It looks promising, although (by bad luck), the 2MASS sensitivity to red-clump stars falls off right around the Galactic Center (even above the plane and out of the dust). There are much better surveys for looking at the Galactic center region.

Bedell and I contrasted our plans to build a data-driven extreme-precision radial-velocity (EPRV) pipeline with our plans to write something more information-theoretic and pragmatic about how to maximize RV precision. Because our data-driven pipeline requires some strong applied math, we might postpone that to the Fall, when we are co-spatial with math expertise in New York City.

I was pleased by a visit from Joe Hennawi (UCSB) and Fred Davies (MPIA / UCSB) in which they informed me that some comments I made about sampling approximations to marginalizations changed their strategy in analyzing very high redshift quasars (think z>7) for IGM damping wing (and hence reionization). We discussed details of how you can use a set of prior-drawn simulations to do a nuisance-parameter marginalization (in this case, over the phases of the simulation).
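
The underlying trick is simple to state: If the nuisance (here, the simulation phases) is drawn from its prior, the marginalized likelihood is just a Monte-Carlo average over simulations. In sketch form (function names invented, not their pipeline):

```python
import numpy as np

def marginal_log_likelihood(data, theta, prior_sims, log_like):
    """Approximate log p(data | theta) = log (1/M) sum_m p(data | theta, sim_m),
    where the M simulations sim_m are drawn from the prior over the
    nuisance parameters (the phases), and log_like is a per-simulation
    log-likelihood supplied by the user."""
    logls = np.array([log_like(data, theta, sim) for sim in prior_sims])
    return np.logaddexp.reduce(logls) - np.log(len(prior_sims))
```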

2017-08-02

graphical models; bugs

At MPIA MW group meeting, Semyeong Oh (Princeton) described her projects to find—and follow up—co-moving stellar pairs and groups in the Gaia TGAS data. She presented the hypothesis test (or model comparison) by showing the two graphical models, which was an extremely informative and compact way to describe the problem. This led to a post-meeting discussion of graphical models and how to learn about them. There is no really good resource for astronomers. We should write one!

I spent the afternoon with Matthias Samland (MPIA) and Jessica Birky (UCSD), debugging code! Samland is adding a new kind of systematics model to his VLT-SPHERE data analysis. Birky is hitting the limitations of some of our code that implements The Cannon. I got a bit discouraged about the latter: The Cannon is a set of ideas, not a software package! That's good, but it means that I don't have a perfectly reliable and extensible software package.

2017-08-01

Simpson's paradox

I spent part of the day working through Moe & Di Stefano (2017), which is an immense and comprehensive paper on binary-star populations. The reason for my attention: Adrian Price-Whelan (Princeton) and I need a parameterization for the binary-star population work we are doing in APOGEE. We are not going to make the same choices as those made by Moe, but there is tons of relevant content in that paper. What a tour de force!

I spent part of the afternoon crashing the RAVE Collaboration meeting at the University of Heidelberg. I learned many things, though my main point was to catch up with Matijevic, Minchev, Steinmetz, and Freeman! Ivan Minchev (AIP), in his talk, discussed relationships between age, metallicity, and Galactocentric radius for stars in the disk. He has a beautiful example of Simpson's paradox, in which, for small population slices (age slices), the lower metallicity stars have higher tangential velocities, but overall the opposite is true, causing measured gradients to depend very strongly on the measurement uncertainties (because: Can you slice the populations finely enough in age?). We discussed paths to resolving this with a proper generative model of the data.
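
The paradox is easy to reproduce with fake data. In this invented toy, metallicity and tangential velocity are anti-correlated within every age slice, but the slice means drift with age such that the pooled trend has the opposite sign:

```python
import numpy as np

rng = np.random.default_rng(17)

ages = np.repeat([1.0, 5.0, 9.0], 300)                    # three age slices
feh = -0.1 * ages + 0.1 * rng.standard_normal(ages.size)  # [Fe/H] drifts with age
v = -7.0 * ages - 20.0 * feh + 2.0 * rng.standard_normal(ages.size)

for a in np.unique(ages):
    m = ages == a
    print(a, np.corrcoef(feh[m], v[m])[0, 1])  # negative within each slice
print("pooled:", np.corrcoef(feh, v)[0, 1])    # positive for the whole sample
```

And of course the finer you can slice in age (given your age uncertainties), the closer your measured gradient gets to the within-slice answer, which is the dependence on measurement uncertainty described above.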

2017-07-31

nuisance model for imaging

The CPM of Wang et al and the transit search methods of Foreman-Mackey et al were developed by us to account for and remove or obviate systematic issues with the Kepler imaging. Last summer, Matthias Samland (MPIA) pointed out that these could be used in direct imaging of exoplanets, which is another place where highly informative things happen in the nuisance space. Today we worked through the math and code that would make a systematics-marginalized search for direct detections of planets in the VLT-SPHERE imaging data. It involves finding a basis of time variations of pixels in the device (pixels, not patches, which is odd and at odds with the standard practice), choosing a prior on these that makes sense, fitting every pixel in the relevant part of the device as a sum of variations plus exoplanet, but marginalizing out the former.
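
In sketch form, the per-pixel marginalized likelihood is all Gaussian linear algebra: put a Gaussian prior on the basis amplitudes, fold it into the data covariance (that is the marginalization), and profile over the planet amplitude. Names and structure here are invented, not the actual SPHERE code:

```python
import numpy as np

def planet_search_loglike(y, B, s, sigma2, lam):
    """y: one pixel's time series (T,); B: systematics basis (T, K);
    s: candidate planet signal (T,); sigma2: iid noise variance; lam:
    prior variance on the basis amplitudes. Returns the best-fit planet
    amplitude and the log-likelihood with the basis marginalized out."""
    C = sigma2 * np.eye(len(y)) + lam * B @ B.T  # basis folded into covariance
    Cinv_s = np.linalg.solve(C, s)
    alpha = (s @ np.linalg.solve(C, y)) / (s @ Cinv_s)  # profiled amplitude
    r = y - alpha * s
    _, logdet = np.linalg.slogdet(C)
    return alpha, -0.5 * (r @ np.linalg.solve(C, r) + logdet)
```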

2017-07-30

regularize all the things

On the weekend, Bernhard Schölkopf (Tübingen) showed up in Heidelberg to hang out and talk shop. What an honor and pleasure! We spent time in the garden discussing various things, but he was particularly insightful about the projects we have been doing with Christina Eilers (MPIA) on extending The Cannon to situations where stellar labels (even in the training set) are either noisy or missing. As we described the training and test steps, we drew graphical models and then looked at the inconsistencies of those graphical models—or not really inconsistencies, but limitations. We realized that we couldn't keep the model interpretable (which is a core idea underlying The Cannon) without putting stronger priors on both the label space (the properties of stars) and the coefficient space (the control parameters of the spectral expectation). If we put on these priors, the model ought to get regularized into a sensible place. I think I know how to do this!

He also pointed out that a probabilistic version of The Cannon would look a lot like the GPLVM (Gaussian Process latent-variable model). That means that there might be out-of-the-box code that could conceivably help us. I am slightly suspicious, because my idea of the priors or regularization in the label domain is so specific, astrophysical, and informative. But it is worth thinking about this.

2017-07-28

destroyer of worlds

One of my main research accomplishments today was to work up a project proposal for Yuan-Sen Ting (ANU) and others about finding stars whose spectra suggest that they have (recently) swallowed a lot of rocky material. This was inspired by a few things: The first is that Andy Casey (Monash) can find Li-rich stars in LAMOST just by looking at the residuals away from a fit by The Cannon at the location of Li lines. The second is that Semyeong Oh (Princeton) and various collaborators have found Sun-like stars that look like they have swallowed many Earth masses of rock in their recent pasts, by doing (or having John Brewer of Yale do) detailed chemical abundance work on the spectra. The third is that Yuan-Sen Ting has derivatives of spectral expectations with respect to all elements for LAMOST-like spectra.

At the end of the day, Hans-Walter Rix (MPIA) gave a colloquium on the After-Sloan-IV project, which my loyal reader knows a lot about. I learned things in his talk, however: One is that SDSS-III BOSS has found several broad(ish)-lined quasars that shut off between SDSS-I and SDSS-III. One relevant paper is here. Another is that he (with Jonathan Bird of Vandy) has made some beautiful visualizations that make the case for dense sampling of the giant stars in the Milky Way disk.

2017-07-27

cosmological foregrounds; Cannon extensions

At MPIA Galaxy Coffee, Daniel Lenz (JPL) spoke about foregrounds and component separation in CMB and LSS experiments. He emphasized (and I agree completely) that the dominant problem for next-generation ("Stage-4" in the unpleasant terminology of cosmologists) cosmology experiments—be they CMB, LSS, or line intensity mapping—is component separation or foreground inferences. He showed some nice results using generalized linear models of optical data for Milky-Way dust inferences. Afterwards I pitched him my ideas about latent variable models (all vaporware right now).

Late in the day, Christina Eilers (MPIA) and I met to discuss why our project to fit for both labels and spectral model in a new version of The Cannon didn't work. I have various theories, most of which relate to some unholy mix of the curse of dimensionality (such that optimization of a model is a bad idea) and model wrongness (such that the model is trying to use the freedom it has inappropriately). But I am seriously confused. We worked through all the possible directions and realized that we need to re-group with our full team to decide what to do next. I assigned myself two things: The first is to look at marginalization of The Cannon internals (that is, what marginalizations might be analytic?). The second is to look at the machine-learning literature on the difference between optimizing a model for prediction accuracy as opposed to optimizing it for model accuracy (or likelihood).

2017-07-26

fitting a line, now with fewer errors

[I was on vacation for a few days.]

I spent a tiny bit of time on my vacation working on fixing the wrong parts of section 7 of my paper with Bovy and Lang on fitting a line to data. I am close to a new wording I am happy with, and with corrected equations. I then realized that there are a mess of old issues to look at; I might do that too before I re-post it to arXiv.

2017-07-21

#GaiaSprint, day 5

Today was the last day of the 2017 Heidelberg Gaia Sprint. Every participant prepared a single slide in a shared slide deck (available here), and had 120 seconds to present their results. Look at the slides for the full story, but it was really impressive! A few highlights for me were:

Rix and Fouesneau used common proper motions to match Gaia DR1 TGAS stars to much-fainter PanSTARRS companions, and found hundreds of white dwarf binaries, with a clear, complete white-dwarf sequence. Hawkins was able to separate red clump stars from other RGB stars with a data-driven spectral classifier, and to interpret it. Ting found something similar, working just with the spectral labels fit to spectra with physical models. El-Badry showed that the stars he identifies spectroscopically as binaries (he can find them even when the velocity differences vanish) sit above the main sequence in the color-magnitude diagram.

Beaton showed that an old statistical-parallax calibration of RR Lyrae stars by Kollmeier turns out to be strongly confirmed in the TGAS data. Birky built a beautiful one-dimensional model of M-dwarf spectra in APOGEE using only a single label: the literature spectral classification. Burggraaff has a possible vertical-direction moving group coming through our local position in the Milky Way disk. Coronado found that she can calibrate main-sequence star luminosities using spectral labels to almost the quality of other standard candles. Rybizki made progress towards an empirical set of supernova yields, starting with APOGEE abundances and (poor) stellar ages.

And, as I have mentioned before, Casey showed that we might be able to do asteroseismology with Gaia, and Anderson made incredible maps of the Milky-Way disk (and animations of slices thereof!).

2017-07-20

#GaiaSprint, day 4

Today Lauren Anderson (Flatiron) and Adrian Price-Whelan (Princeton) made beautiful visualizations of Anderson's 20-million star catalog with distances, built by training a model on the TGAS Catalog and applying it to plausibly-red-clump stars in the billion-star catalog from Gaia. I give an example below, which shows two thin slices of the Milky Way, one through the Sun, and one through the Galactic Center (but blotted out by local dust).

Andy Casey (Monash) got our asteroseismology project working with real data! He sub-sampled some Kepler light curves down to something like Gaia end-of-mission cadence, and then applied the Stephen Feeney (Flatiron) likelihood function. Again, it has peaks at reasonable asteroseismic parameters, near the KASC published values. We are slowly developing some intuitions about what parameters are well constrained and where.

After four days of hacking on a version of The Cannon with probabilistic (noisy and missing) labels, Christina Eilers (MPIA) and I gave up: We worked out the bugs, got the optimizer working, and realized that our issues are fundamentally conceptual: When you have a bad model for your data (that is, a model that is ruled out strongly by the data), there can be conflicts between model accuracy and prediction accuracy. We have hit one of those conflicts. We need to re-group on this one.


2017-07-19

#GaiaSprint, day 3

Today we had amazing success with an incredibly simple (read: dumb-ass) project for making precise maps of the Milky Way: Lauren Anderson (Flatiron) and I built a data-driven model of dust extinction, using the red-clump stars in the TGAS sample that we deconvolved last month. We then applied this dust inference to every single star in the full billion-star catalog (requiring 2MASS photometry), and selected stars whose dust-corrected color is consistent with being an RC star. That is, we assumed that every star with the correct de-reddened color is an RC star. RC stars are standard candles, so then we could map the entire MW disk. The maps are precise, but contaminated. So much structure visible. Adrian Price-Whelan (Princeton) says we are seeing a flaring disk!
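
The whole pipeline is a few lines; here is a toy version with placeholder numbers (the absolute magnitude, intrinsic color, and cut width are rough stand-ins, not Anderson's calibration):

```python
import numpy as np

def rc_select_and_distances(mK, JK_obs, E_JK, AK, M_RC=-1.61, JK_0=0.62, tol=0.05):
    """De-redden observed J-K colors, keep stars consistent with the
    red-clump intrinsic color (assuming every such star is an RC star!),
    and convert K magnitudes to distances with the RC as a standard candle."""
    sel = np.abs((JK_obs - E_JK) - JK_0) < tol  # dust-corrected color cut
    mu = mK[sel] - AK[sel] - M_RC               # distance modulus
    return sel, 10 ** (mu / 5.0 - 2.0)          # distances in kpc
```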

2017-07-18

#GaiaSprint, day 2

Gaia Sprint continued today with Christina Eilers (MPIA) and I puzzling over the behavior of her code that is an extension of The Cannon to the case in which there are label uncertainties on the training-set stars. The behavior of the code is odd: As we give the code less freedom, the model of the stellar spectra gets better but the prediction gets worse. Makes no sense! The optimization is huge, and it relies on hand-typed analytic derivatives (I know, I know!), so we don't know whether we have conceptual issues or bugs.

Meanwhile, Andy Casey (Monash) and Ana Bonaca (Harvard) got excited about doing asteroseismology with the sparse photometric light curves that will be produced by Gaia. In particular, Casey got Stephen Feeney's (Flatiron) fake-data generator and likelihood function code (made for TESS-like data) working for Gaia-like data. He finds peaks in the likelihood function! Which means that maybe we can do asteroseismology without taking a Fourier Transform. His results, however, challenged both of our intuitions about the information about nu-max and delta-nu that ought to reside in any data stream. Inspired by all this, Bonaca and Donatas Narbutis (Lithuania) looked up large HST programs on stellar clusters and showed that it is plausible that we could do asteroseismology in HST too!

In other news, Mariangela Lisanti (Princeton) worked through recent results on dynamical friction in an ultralight-scalar dark-matter model (where the dark matter has a de Broglie wavelength that is kpc in scale!) and has plausible evidence that the timing argument (for the masses of local-group objects) might rule out or constrain ULS dark matter. And Anthony Brown (Leiden) and Olivier Burggraaff (Leiden) showed me an update of the (2009) extreme-deconvolution model of the local MW disk velocity field that Jo Bovy and I built, and they find some structure in the vertical direction, which is cool and intriguing.

2017-07-17

#GaiaSprint, day 1

Today was the first day of the 2017 Heidelberg Gaia Sprint. Though it was only day one of the meeting, it was nonetheless an impressive day of accomplishments. The day started with a pitch session in which each of the 47 participants was given one slide and 120 seconds to say who they are and what they want to do or learn at the Sprint. These pitch slides are here.

After the pitch, my projects launched well: Jessica Birky (UCSD) was able to get the new version of The Cannon created by Christina Eilers (MPIA) working and started to get some seemingly valuable spectral models out of the M-dwarf spectra in APOGEE. Lauren Anderson (Flatiron) set up and trained a data-driven (empirical) model for the extinction of red stars, based on the Gaia and 2MASS photometry.

Perhaps the most impressive accomplishment of the day was that Morgan Fouesneau (MPIA) and Hans-Walter Rix (MPIA) matched stars between Gaia TGAS and the new GPS1 catalog that puts proper motions onto all PanSTARRS stars. They find co-moving stars where the brighter is in TGAS and the fainter is in GPS1. These pairs are extremely numerous. Many are main-sequence pairs but many pair a main-sequence star in TGAS with a white dwarf in GPS1. These pairs identify white dwarfs but also potentially put cooling ages onto both stars in the pair. The white-dwarf sequence they find is beautiful. Exciting!

2017-07-13

M-dwarf expertise

Jessica Birky (UCSD) and I met with Wolfgang Brandner (MPIA) and Derek Homeier (MPIA) to discuss M-dwarf spectra. Homeier has just finished a study of a few dozen M-dwarfs in APOGEE with the PHOENIX models. We are going to find out whether this set of stars will constitute an adequate training set for The Cannon. It is very weighted to a small temperature range, so it might not have enough coverage for us. We covered a lot of ground in our meeting: whether rotation might affect us (or be detectable), whether binaries might be common in our sample, and whether we might be able to use photometry (or photometry plus astrometry) to get effective temperatures. The conversation was very wide-ranging and I learned a huge amount.

2017-07-12

Bayes Cannon, asteroseismology, binaries

Today, at MPIA Milky Way Group Meeting, I presented my thinking about Stephen Feeney (Flatiron), Ana Bonaca (Harvard), and my project on doing asteroseismology without the Fourier Transform. I am so excited about the (remote, perhaps) possibility that Gaia might be able to measure delta-nu and nu-max for many stars! Possible #GaiaSprint project?

Before me, Kareem El-Badry (Berkeley) talked about how wrong your inferences about stars can be when you model the spectrum without considering binarity. This maps on to a lot of things I discuss with Tim Morton (Princeton) in the area of exoplanet science. Also Yuan-Sen Ting (ANU) spoke about using t-SNE to look for clustering of stars in chemical space.

I spent the early morning writing up a safe-for-methodologists (think: statisticians, mathematicians, and computer scientists) description of The Cannon's likelihood function, when the stellar labels themselves are poorly known (really the project of Christina Eilers here at MPIA). I did this because Jonathan Weare (Chicago) has proposed that he can probably sample the full posterior. I hope that is true! It would be a probabilistic tour de force.

2017-07-11

not ready for #GaiaSprint

Lauren Anderson (Flatiron) showed up at MPIA today to discuss #GaiaSprint projects and our next projects more generally. We discussed a possible project in which we try to use the TGAS data to infer the relationships between extinction and intrinsic color for red-giant stars, and then use those relationships in the billion-star catalog to predict parallaxes for DR2 (and also learn the dust map and the spatial distribution of stars in the Milky Way).

2017-07-10

asteroseismology; toy model potentials; dwarfs vs giants

Stephen Feeney (Flatiron) sent me plots today that suggest that we can measure asteroseismic nu-max and delta-nu for a red-giant star without ever taking the Fourier Transform of the data. Right now, there are still many issues: This is still fake data, which is always cheating. The sampler (despite being nested and all) gets stuck in poor modes (and this problem is exceedingly multimodal). But when we inspect the sampling after the fact, the good answer beats the bad answers in likelihood by a huge ratio, which suggests that we might be able to do asteroseismology at pretty low signal-to-noise too. We need to move to real data (from Kepler).

Because of concern that (in our stellar-stream project) we aren't marginalizing out all our unknowns yet—and maybe that is making things look more informative than they are—Ana Bonaca (Harvard) started today on including the progenitor position in our Fisher-matrix (Cramér-Rao) analysis of all stellar streams. We also have concerns about the rigidity of the gravitational potential model (which is a toy model, in keeping with the traditions of the field!). We discussed also marginalizing out some kind of perturbation expansion around that toy model. This would permit us both to be more conservative and to criticize the precisions obtained with these toy models.

Jessica Birky (UCSD) looked at chi-square differences (in spectral space) between APOGEE spectra of low-temperature stars without good labels and two known M-type stars, one giant and one dwarf. This separated all the cool stars in APOGEE easily into two classes. Nice! We are sanity-checking the answers. We are still far, however, from having a good training set to fire into The Cannon.
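
The separation step really is this simple; in sketch form (array names invented):

```python
import numpy as np

def dwarf_or_giant(flux, ivar, template_dwarf, template_giant):
    """Compare each spectrum (flux and inverse variance, last axis pixels)
    to one known M-dwarf and one known M-giant APOGEE spectrum, and
    classify by the smaller chi-squared."""
    chi2_d = np.sum(ivar * (flux - template_dwarf) ** 2, axis=-1)
    chi2_g = np.sum(ivar * (flux - template_giant) ** 2, axis=-1)
    return np.where(chi2_d < chi2_g, "dwarf", "giant")
```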

2017-07-07

M dwarfs, The Cannon, binaries, streams, corrections, and coronography

So many projects! I love my summers in Heidelberg. I spent time today with Jessica Birky (UCSD) working through the figures that would support a paper on M-dwarf stars with The Cannon. She has run The Cannon on a tiny training set of M-dwarf stars in the APOGEE data, and it seems to work (despite the size and quality of our training set). We are diagnosing whether it all makes sense now.

With Christina Eilers (MPIA), Hans-Walter Rix (MPIA) and I discussed the amazing fact that she can optimize (a more sophisticated version of) The Cannon on all internal parameters and all stellar labels in a single shot; this is a hundred-thousand-parameter non-linear least-squares fit! It seems to be working but there are oddities to follow up. She is dealing with the point that many stars have bad, missing, or noisy labels.

With Kareem El-Badry (Berkeley), Rix and I worked through the math of going from an SB2 catalog (that is, a catalog of stars known to be binary because their spectra are better fit by superpositions of pairs of stars than by single stars) through to a population inference about the binary population. This project meshes well with the plans that Adrian Price-Whelan (Columbia) and I have for the summer.

With Ana Bonaca (Harvard), I discussed further marginalizations in her project to determine the information content in stellar streams. She finds that the potential form and the progenitor phase-space information are very informative; that is, if we relax those to give more freedom, we expect to find that the streams are less constraining of the Galactic potential. We discussed ways to test this in the next few days.

With Stephen Feeney (Flatiron) and Daniel Mortlock (Imperial) I discussed the possibility of writing a paper about the Lutz-Kelker correction (don't do it!) and posterior probabilistic catalogs (don't make them!) and what scope it might have. We tentatively decided to try to put something together.

With Matthias Samland (MPIA) and Jeroen Bouwman (MPIA) I discussed their ideas to move the CPM (which we used to de-trend Kepler and K2 light curves) to the domain of direct detection of exoplanets with coronographs. This is a great idea! We discussed the way to choose predictor pixels, and the form that the likelihood takes when you marginalize out the superposition of predictor pixels. This is a very promising software direction for future coronograph missions. But we noticed that many projects and observing runs might be data-limited: People take hundreds of photon-limited exposures instead of thousands of read-noise-limited exposures. I think that's a mistake: No current results are, in the end, photon-noise limited! We put Samland onto looking at the subspace in which the pixel variations live.

I love my job!

2017-07-06

nothing

I had a whole day on the train, back from Potsdam. That didn't translate into a whole day of research.

2017-07-05

Quillen

I spent the day at Potsdam, to participate (and give a talk) in the Wempe Award ceremony; the prize went to Alice Quillen (Rochester), who has done dynamical theory on a huge range of scales and in a huge range of contexts. I spoke about how data-driven models of stars might make it possible to precisely test Quillen's predictions. After my talk I had a long session with Ivan Minchev (AIP), Christina Chiappini (AIP), and Friedrich Anders (AIP) about work on stellar chemical abundances in the disk. They are trying to understand whether the alpha-rich disk itself splits into multiple populations or is just one. We discussed the possibility that any explanation of the alpha-to-Fe vs Fe-to-H plot ought to make predictions for other galaxies. Right now theoretical expectations are soft, both because star formation is not right in the cosmological models, and because nucleosynthetic yields are not right in the chemical evolution models. We also discussed Anders's use of t-SNE for dimensionality reduction and how we might test its properties (the properties of t-SNE, that is).

2017-07-04

computing stable derivatives

In my science time today, I worked with Ana Bonaca (Harvard) on her computation of derivatives—of stellar stream properties with respect to potential parameters. This is all part of our information-theoretic project on stellar streams. We are taking the derivatives numerically, which is challenging to get right, and we have had many conversations about step sizes and how to choose them. We made (what I hope are) final choices today: They involve computing the derivative at different step sizes, comparing each of those derivatives to those computed at nearby step sizes, and finding the smallest step size at which converged or consistent derivatives are being computed. Adaptive and automatic! But a pain to get working right.

Numerical context: If you take derivatives with step sizes that are too small, you get killed by numerical noise. If you take derivatives with step sizes that are too large, the changes aren't purely linear in the stepped parameter. The Goldilocks step size is not trivial to find.
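
In sketch form, the adaptive choice looks like this (my paraphrase, not Bonaca's implementation):

```python
import numpy as np

def stable_derivative(f, x, i, hs=np.geomspace(1e-8, 1e-1, 15), rtol=1e-3):
    """Central differences of scalar f along parameter i over a ladder of
    step sizes hs (sorted small to large); return the smallest step whose
    derivative agrees, to fractional tolerance rtol, with the next step up."""
    e = np.zeros_like(x)
    e[i] = 1.0
    d = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for h in hs])
    for k in range(len(hs) - 1):
        if np.abs(d[k] - d[k + 1]) <= rtol * np.abs(d[k + 1]):
            return d[k], hs[k]  # smallest step consistent with its neighbor
    raise RuntimeError("no converged step size found")
```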

2017-07-03

models of stellar spectroscopy

Today was my first day at MPIA. I worked with Hans-Walter Rix (MPIA) and Christina Eilers (MPIA) on her new version of The Cannon, which simultaneously optimizes the model and the labels, with label uncertainties. It is a risky business for a number of reasons, one of which is that maximum likelihood has all the problems we know, and another of which is that optimization is hard. She has taken all the relevant derivatives (analytically), but is stuck on initialization. We came up with some tricks for improving her initialization; this problem has enormous numbers of local optima!

We also spoke with Kareem El-Badry (Berkeley) about a project he is doing with Rix to find binary stars among the LAMOST spectra. Here the problem is that the binaries will not be resolved spectrally or spatially, so the task comes down to showing that the one-d spectrum is better explained by two stars (at the same distance and metallicity) than by one. He is finding (not surprisingly) that, because the spectral models are not quite accurate enough, a mixture of two stars is almost always better than a single-star fit. So he decided today to try implementing (his own, bespoke, version of) The Cannon. Then the model will (at least) be accurate in the spectral domain, which is what he needs.

I got started on a new project with Jessica Birky (UCSD) who is here at MPIA to work with me on M-dwarf spectra in the APOGEE project. Our first job is to find a training set of M dwarfs that have APOGEE spectra but also known temperatures and metallicities. That isn't trivial.

2017-06-29

summer plans

My last research day before heading to MPIA for the summer was taken up with many non-research things! However, I did have brief discussions with Lauren Anderson (Flatiron) about what is next for our collaboration, now that paper 1 is out!

2017-06-28

spin tagging of stars?

At the Stars group meeting, John Brewer (Yale) and Matteo Cantiello (Flatiron) told us about the Kepler / K2 Science meeting, which happened last week. Brewer was particularly interested in the predictions that Ruth Murray-Clay made for chemical abundance differences between big and small planet hosts; it is too early to tell how well these map onto the results Brewer is finding for chemical differences between stars hosting different kinds of exoplanet architectures.

Other highlights included really cool supernova light curves with amazing details; granulation or flicker estimates of delta-nu and nu-max; and a clear bimodality in planetary radii between super-Earths and mini-Neptunes. There was much discussion in group meeting of this latter result, both what it might mean, and what predictions it might generate.

Highlights for Cantiello included results on the inflation of short-period planets by heating by their host stars. And, intriguingly, a possible asteroseismic measurement of stellar inclinations. That is, you might be able to measure the projection of a star's spin angular-momentum vector onto the line of sight. If you could (and if some results about aligned spin vectors in star-forming regions hold up) this could lead to a new kind of tagging for stars that are co-eval!

2017-06-27

global ozone

In the morning, researchers from across the Flatiron Institute gathered for a discussion of statistical inference, which is a theme that cuts across the different departments. Justin Alsing (Flatiron) led the discussion, asking for advice on his project to model global ozone over the last few decades. He has data that spans latitude, altitude, and time, and the ozone levels can be affected by many things other than long-term degradation by pollutants. So he wants to build a non-linear, data-driven model of confounders but still come to strong conclusions about the long-term trends. There was discussion of many relevant methods, including large linear models (regularized strongly), independent components analysis, latent variable models, neural networks, and so on. It was a wide-ranging and valuable discussion. The CCB at Flatiron has some valuable mathematics expertise, which could be important to all the Flatiron departments.

2017-06-26

statistics is hard

OMG much of my research time today was spent trying to figure out everything that is wrong with Section 7 (uncertainties in both x and y) of the Hogg, Bovy, and Lang paper on fitting a line. Warning to users: Don't use Section 7 until we update! The problems appeared early (see the GitHub issues on this Chapter), but came to a head when Dan Foreman-Mackey (UW) wrote this blog post. Oddly I disagree with Foreman-Mackey's solution, and I don't have consensus with Jo Bovy (Toronto) yet. It has something to do with how we take the limit to very large variance in our prior. But I must update the paper asap!

2017-06-22

the variance on the covariance of the variance

I had a long set of conversations with Boris Leistedt (NYU) about various matters cosmological. The most exciting idea we discussed comes from thinking about good ideas that Andrew Pontzen (UCL) and I discussed a few weeks ago: If you can cancel some kinds of variance in estimators by performing matched simulations with opposite initial conditions, might there be other families of matched simulations that can be performed to minimize other kinds of estimator variances?

For example, Leistedt wants to make a set of simulations that are good for estimating the covariance of a power-spectrum estimator in a real experiment. How do we make a set of simulations that get this covariance (which is the variance of a power spectrum, which is itself a variance) with minimum variance on that covariance (of that variance)? Right now people just make tons of simulations, with random initial conditions. You simply must be able to do better than pure random here. If we can do this well, we might be able to zero out terms in the variance (of the variance of the variance) and dramatically reduce simulation compute time. Time to hit the books!
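
A toy version of the variance-cancellation idea, for intuition: pair each simulation with its sign-flipped initial conditions, and any odd-order dependence of the statistic on the ICs cancels exactly in the pair average. Everything below is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def statistic(ic):
    return ic + 0.1 * ic**2  # stand-in for a measured summary statistic

ics = rng.standard_normal(10000)
indep = statistic(ics)                             # independent random ICs
paired = 0.5 * (statistic(ics) + statistic(-ics))  # antithetic pairs
print(indep.var(), paired.var())  # the paired estimator variance is tiny
```

The open question is what the analogous matched families are when the target is the covariance of a power-spectrum estimator rather than a mean.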

2017-06-21

fast bar

Stars group meeting ended up being all about the Milky Way Bar. Jo Bovy (Toronto), many years ago, made a prediction about the velocity distribution as a function of position if the velocity substructure seen locally (in the Solar Neighborhood) is produced (in part) by a bar at the Galactic Center. The very first plate of spectra from APOGEE-South happens to have been taken in a region that critically tests this model. And he finds evidence for the predicted velocity structure! He finds that the best-fit bar is a fast bar (whatever that means—something about the rotation period). This is a cool result, and also a great use of the brand-new APOGEE-S data.

Bovy was followed by Sarah Pearson (Columbia) who showed the effects of a bar on the Pal-5 stream and showed that some aspects of its morphology could be explained by a fast bar. We weren't able to fully check whether both Bovy and Pearson want the exact same bar, but there might be a consistent story emerging.

2017-06-20

MCMC

The research highlight of the day was Marla Geha (Yale) dropping in to Flatiron to chat about MCMC sampling. She is working through the tutorial that Foreman-Mackey (UW) and I are putting together and she is doing the exercises. I'm impressed! She gave lots of valuable feedback for our first draft.

2017-06-19

learning

I spent time working through the last bits of a paper by Dun Wang (NYU) about image modeling for time-domain astrophysics. I asked him to send it to our co-authors.

The rest of the day was spent in discussions of Bayesian inference with the Flatiron Astronomical Data Group reading group. We are doing elementary exercises in data analysis and yet we are not finding it easy to discuss and understand, especially some of the details and conceptual arguments. In other words: No matter how much experience you have with data analysis, there are always things to learn!

2017-06-16

cosmic rays, alien technology

I helped Justin Alsing (Flatiron) and Maggie Lieu (ESA) search for HST data relevant to their project for training a model to find cosmic rays and asteroids. They began to conclude that the cosmic-ray identification methods they are already using for HST might be good enough to rely upon, which drops their requirements down to just asteroids. That's good! But it's hard to make a good training set.

Jia Liu (Columbia) swung by to discuss the possibility of finding things at exo-L1 or exo-L2 (or the other Lagrange points). Some of the Lagrange points are unstable, so anything we find there would be a clear sign of alien technology. We looked at the relevant literature; we may be fully scooped, but I think there are probably things to do still. One thing we discussed is the observability; it is somehow going to depend on the relative density of the planet and star!

2017-06-15

Bayesian basics; red clump

A research highlight today was the first meeting of our Bayesian Data Analysis, 3ed reading group. It lasted a lot longer than an hour! We ended up going off on a tangent about the Fully Marginalized Likelihood vs cross-validation and Bayesian equivalents. We came up with some possible research projects there! The rest of the meeting was Bayesian basics. We decided on some problems we would do in Chapter 2. I hate to admit that the idea of having a problem set to do makes me nervous!

In the afternoon, Lauren Anderson (Flatiron) and I discussed our project to separate red-clump stars from red-giant-branch stars in the spectral domain. We have two approaches: The first is unsupervised: Can we see two spectral populations where the RC and RGB overlap? The second is supervised: Can we predict relevant asteroseismic parameters in a training set using the spectra?

2017-06-14

cryo-electron-microscopy biases

At the Stars group meeting, I proposed a new approach to asteroseismology that could work for TESS. My approach depends on the modes being (effectively) coherent, which is only true for short survey durations, where “short” can still mean years. Also, Mike Blanton (NYU) gave us an update on the APOGEE-S spectrograph, being commissioned now at LCO in Chile. Everything is nominal, which bodes very well for SDSS-IV and is great for AS-4. David Weinberg (OSU) showed up and told us about chemical-abundance constraints on a combination of yields and gas-recycling fractions.

In the afternoon I missed Cosmology group meeting, because of an intense discussion about marginalization (in the context of cryo-EM) with Leslie Greengard (Flatiron) and Marina Spivak (Flatiron). In the conversation, Charlie Epstein (Penn) came up with a very simple argument that is highly relevant. Imagine you have many observations of the function f(x), but for each one your x value has had noise applied. If you take as your estimate of the true f(x) the empirical mean of your observations, the bias you get will be (for small scatter in x) proportional to the variance in x times the second derivative of f. That's a useful and intuitive argument for why you have to marginalize.
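
Epstein's argument is easy to check numerically; here is a toy with an invented function and noise level:

```python
import numpy as np

rng = np.random.default_rng(1)

f = np.cos            # toy f, with second derivative f''(0) = -1
x, sig = 0.0, 0.3     # true abscissa, and the noise applied to it
samples = f(x + sig * rng.standard_normal(100000))
print(samples.mean() - f(x))        # empirical bias of the naive average
print(0.5 * sig**2 * -np.cos(x))    # predicted: 0.5 * Var(noise) * f''(x)
```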

2017-06-13

Renaissance

I spent the day at Renaissance Technologies, where I gave an academic seminar. Renaissance is a hedge fund that created the wealth of the Simons Foundation among many other Foundations. I have many old friends there; there are many PhD astrophysicists there, including two (Kundić and Metzger) I overlapped with back when I was a graduate student at Caltech. I learned a huge amount while I was there, about how they handle data, how they decide what data to keep and why, how they manage and update strategies, and what kinds of markets they work in. Just like in astrophysics, the most interesting signals are at low signal-to-noise in the data! Appropriately, I spoke about finding exoplanets in the Kepler data. There are many connections between data-driven astrophysics and contemporary finance.

2017-06-12

reading the basics

Today we decided that the newly-christened Astronomical Data Group at Flatiron will start a reading group in methods. Partially because of the words of David Blei (Columbia) a few weeks ago, we decided to start with BDA3, part 1. We will do two chapters a week, and also meet twice a week to discuss them. I haven't done this in a long time, but we realized that it will help our research to do more basic reading.

This week, Maggie Lieu (ESA) is visiting Justin Alsing (Flatiron) to work (in part) on Euclid imaging analysis. We spent some time discussing how we might build a training set for cosmic rays, asteroids, and other time-variable phenomena in imaging, in order to train some kind of model. We discussed the complications of making a ground-truth data set out of existing imaging. Next up: Look at what's in the HST Archive.

2017-06-11

summer plans

I worked for Hans-Walter Rix (MPIA) this weekend: I worked through parts of the After Sloan 4 proposal to the Sloan Foundation, especially the parts about surveying the Milky Way densely with infrared spectra of stars. I also had long conversations with Rix about our research plans for the summer. We have projects to do, and a Gaia Sprint to run!

2017-06-08

music and stars

First thing, I met with Schiminovich (Columbia), Mohammed (Columbia), and Dun Wang (NYU) to discuss our GALEX imaging projects. We decided that it is time for us to produce titles, abstracts, outlines, and lists of figures for our next two papers. We also realized that we need to produce pretty-picture maps of the plane survey data, and compare it to Planck and GLIMPSE and other related projects.

I had a great lunch meeting with Brian McFee (NYU) to catch up on his research (on music!) and ask his advice on various time-domain projects I have in mind. He has new systems to recognize chords in music, and he claims higher performance than previous work. We discussed time-series methods, including auto-encoders and HMMs. As my loyal reader knows, I much prefer methods that deal with the data probabilistically; that is, I want to avoid methods that require complete data with no missing information, and so on. McFee had various thoughts on how we might adapt methods that expect complete data for tasks that are given incomplete data, like tasks that involve Kepler light curves.

2017-06-07

post-main-sequence stellar evolution

At Stars group meeting, Matteo Cantiello (Flatiron) had us install MESA and then gave us a tutorial on aspects of post-main-sequence evolution of stars. There were many amazing and useful things, and he cleared up some misconceptions I had about energy production and luminosity during the main-sequence and red-giant phases of stellar evolution. He showed some hope (because of convective-region structure, which in turn depends on opacity, which in turn depends on chemical abundances) that we might be able to measure some aspects of chemical abundances with asteroseismology in certain stellar types.

In the Cosmology group meeting, we discussed many topics, but once again I got fired up about automated methods or exhaustive methods of searching for (and analyzing) estimators, both for making measurements in cosmology, and for looking for anomalies in a controlled way (controlled in the multiple-hypothesis sense). One target is the neutrino mass, whose signature is in the large-scale structure, but subtly.

In the space between meetings, Daniela Huppenkothen (NYU) and I worked with Chris Ick (NYU) to get him started building a mean model of Solar flares, and looking at the power spectrum of the flares and their mean models. The idea is to head towards quantitative testing of quasi-periodic oscillation models.

2017-06-06

don't apply the Lutz-Kelker correction!

One great research moment today was Stephen Feeney (Flatiron) walking into my office to ask me about the Lutz–Kelker correction. This is a correction applied to parallax measurements to account for the point that there are far, far more stars at lower parallaxes (larger distances) than there are at higher parallaxes (smaller distances). Because of (what I think of as being) Jacobian factors, the effect is stronger in parallax than it is in distance. The LK correction corrects for what—in luminosity space—is sometimes called Eddington bias (and often wrongly called Malmquist bias). Feeney's question was: Should he be applying this LK correction in his huge graphical model for the distance ladder? And, implicitly, should the supernova cosmology teams have applied it in their papers?

The short answer is: No. It is almost never appropriate to apply the LK correction to a parallax. The correction converts a likelihood description (the likelihood mode, the data) into a posterior description (the posterior mode) under an improper prior. Leaving aside all the issues with the wrongness of the prior, this correction is bad to make because in any inference using parallaxes, you want the likelihood information from the parallax-measuring experiment. If you use the LK-corrected parallax in your inference, you are multiplying in the LK prior and whatever prior you are using in your own inference, which is inconsistent, and wrong!

I suspect that if we follow this line of argument down, we will discover mistakes in the distance-ladder Hubble-constant projects! For this reason, I insisted that we start writing a short note about this.

Historical note: I have a paper with Ed Turner (Princeton) from the late 90s that I now consider totally wrong, about the flux-measurement equivalent of the Lutz-Kelker correction. It is wrong in part because it uses wrong terminology about likelihood and prior. It is wrong in part because there is literally a typo that makes one of the equations wrong. And it is wrong in part because it (effectively) suggests making a correction that one should (almost) never make!

2017-06-05

buying and selling correct information

Well of course Adrian Price-Whelan (Princeton) had lots of comments on the paper, so Lauren Anderson (Flatiron) and I spent the day working on them. So close now!

I had lunch with Bruce Knuteson (Kn-X). We talked about many things, including the knowledge exchange that Kn-X runs: The idea is to make it possible to buy and sell correct information, even from untrusted or anonymous sources. The purchase only goes through if the information turns out to be true (or true-equivalent, like useful). It has lots of implications for news, but also for science, in principle. He asked me how we get knowledge from others in astronomy? My answer: Twitter (tm)!

Late in the day, Dan Foreman-Mackey (UW) and I had a long discussion about many topics, but especially possible events or workshops we might run next academic year at the Flatiron Institute. One is about likelihood-free or ABC or implicit inference. Many people in CCA and CCB are interested in these subjects, and Foreman-Mackey is thinking about expanding in this direction. Another is about extreme-precision radial velocity measurements, where models of confusing stellar motions and better methods in the pipelines might both have big impacts. Another is about photometry methods for the TESS satellite, which launches next year. We also discussed the issue that it is important, when we organize any workshop, to make it possible to discover all the talent out there that we don't already know about: That talent we don't know about will increase workshop diversity, and increase the amount we ourselves learn.

2017-06-02

oscillation-timing exoplanet discovery

First thing, Ruth Angus (Columbia) and I discussed an old, abandoned project of mine to find exoplanets by looking at timing residuals (as it were) on high-quality (like nearly coherent) oscillating stars. It is an old idea, best executed so far (to my knowledge; am I missing anything?) by Simon Murphy (Sydney). I have ideas for improvements; they involve modeling the phase shift as a continuous function, not binning and averaging phase shifts (which is the standard operating procedure). It uses results from the Bayesian time-series world to build a likelihood function (or maybe a pseudo-likelihood function). One of the things I like about my approach is that it could be used on pulsar timing too.
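
To sketch what I mean by modeling the phase shift continuously: write the time delay as a smooth function of time from the companion's orbit, put it in the phase of the oscillation, and evaluate a likelihood on the un-binned time series. This parameterization is invented for illustration (one mode, circular orbit, Gaussian noise); the real thing needs more care:

```python
import numpy as np

def log_like(params, t, y, sigma):
    """Coherent-oscillator likelihood with an orbital light-travel delay.
    params: amplitude A, mode frequency nu, phase phi0, projected orbit
    size over c (a sin i / c, in time units), orbital period, orbital phase."""
    A, nu, phi0, asini_c, P_orb, phi_orb = params
    tau = asini_c * np.sin(2 * np.pi * t / P_orb + phi_orb)  # time delay
    model = A * np.sin(2 * np.pi * nu * (t - tau) + phi0)
    return -0.5 * np.sum(((y - model) / sigma) ** 2)
```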

For the rest of the day, Lauren Anderson (Flatiron) and I did a full-day paper sprint on her Gaia TGAS color-magnitude diagram and parallax de-noising paper. We finished! We decided to give Price-Whelan a weekend to give it a careful once-over and submit on Monday.

2017-06-01

Simons

It was a low-research day. But I did learn a lot about the Simons Foundation, in a set of meetings that introduce new employees to the activities and vision of the Foundation.

2017-05-31

variational inference

Today was a great day of group meetings! At the stars group meeting, Stephen Feeney (Flatiron) introduced the Student t distribution, and showed how it can be used in a likelihood function (with one additional parameter) to capture un-modeled outliers. Semyeong Oh (Princeton) updated us on the pair of stars she has found with identical space velocities but very different chemical abundances. And Joel Zinn (OSU) told us about new approaches to determining stellar parameters from light curves. This is something we discuss a lot at Camp Hogg, so it is nice to see some progress!
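
For the record, here is a minimal sketch of the Feeney point (my toy data, not his): the one extra parameter, the degrees of freedom, gives the model heavy tails that absorb un-modeled outliers:

```python
import numpy as np
from scipy import optimize, stats

# Sketch: fit a constant with a Student-t likelihood; the degrees-of-
# freedom parameter gives heavy tails that absorb un-modeled outliers.
rng = np.random.default_rng(17)
y = rng.normal(5.0, 1.0, 200)
y[:10] += 25.0                      # ten un-modeled outliers

def negloglike(params):
    mu, lnsig, lnnu = params
    return -np.sum(stats.t.logpdf(y, df=np.exp(lnnu), loc=mu, scale=np.exp(lnsig)))

res = optimize.minimize(negloglike, [np.mean(y), 0.0, 1.0], method="Nelder-Mead")
print("sample mean (dragged by outliers):", np.mean(y))  # ~6.25
print("Student-t location (robust):", res.x[0])          # ~5.0
```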

We had the great idea to invite David Blei (Columbia) and Rajesh Ranganath (Princeton) to the Cosmology group meeting today. It was great! After long introductions around the (full) room, we gave the floor to Blei, who chose to tell us about the current landscape of variational methods for inference in large models with large data. His group has been doing a lot of work there. The discussion he led ranged widely, including fundamental Bayesian basics, problem structure, and methods for deciding which range of inference methodologies might apply to your specific problem. The discussion was lively, and the whole event was another reminder that getting methodologists and astronomers into the same room is often game-changing. We have identified several projects to discuss in more depth for a possible collaboration.

[With this post, this blog just passed 2^11.5 posts. I realize that a fractional power of two is not that impressive, but it is going to be a long time to 2^12 and I'll be lucky to ever publish post number 2^13!]

2017-05-30

interdisciplinary inference meetings

Justin Alsing (Flatiron) organized an interdisciplinary meeting at Flatiron across astrophysics, biology, and computing, to discuss topics of mutual interest in inference or inverse problems. Most of the meeting was spent with us going around the room describing what kinds of problems we work on so as to find commonalities. Some interesting ideas: The neuroscientists said that not only do they have data analysis problems, they also want to understand how brains analyze data! Are there relationships there? Many people in the room from both biology and astronomy are in the “likelihood-free” regime: Lots of simulations, lots of data, no way to compare! That will become a theme, I predict. Many came to learn new techniques, and many came to learn what others are doing, so that suggests a format, going forward, in which we do a mix of tutorials, problem statements, and demonstrations of results. We kicked it off with Lauren Anderson (Flatiron) describing parallaxes and photometric parallaxes. [If you are in the NYC area and want to join us for future meetings, drop me a line.]

2017-05-27

measuring the velocity of a star

Yesterday and today I wrote code. This is a much rarer activity than I would like! I wrote code to test different methods for measuring the centroid of an absorption line in a stellar spectrum, with applications to extreme precision radial-velocity experiments. After some crazy starts and stops, I was able to confirm my strong expectation: Cross-correlation with a realistic template is far better for measuring radial velocities than cross-correlation with a bad template (especially a binary mask). I am working out the full complement of experiments I want to do. I am convinced that there is a (very boring) paper to be written.
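
A stripped-down version of the kind of experiment I mean (invented line shape and noise levels; not my actual code) looks like this:

```python
import numpy as np

# Toy experiment: recover the shift of a noisy absorption line by
# cross-correlation against (a) the true profile and (b) a binary mask.
rng = np.random.default_rng(3)
x = np.linspace(-10.0, 10.0, 1001)               # "wavelength" pixels
vgrid = np.linspace(-0.3, 0.3, 201)              # trial shifts
depth = lambda dx: 0.5 * np.exp(-0.5 * dx ** 2)  # true line-depth profile
maskf = lambda dx: (np.abs(dx) < 1.0).astype(float)  # crude binary mask

def ccf_shift(y, template):
    # the trial shift that maximizes the cross-correlation
    return vgrid[np.argmax([np.sum(template(x - v) * y) for v in vgrid])]

shifts = {"true template": [], "binary mask": []}
for _ in range(100):
    y = depth(x) + rng.normal(0.0, 0.02, x.size)  # continuum-subtracted line
    shifts["true template"].append(ccf_shift(y, depth))
    shifts["binary mask"].append(ccf_shift(y, maskf))

for name, vals in shifts.items():
    print(name, "RV scatter:", np.std(vals))  # the true template should win
```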

2017-05-25

what is math? interpolation of imaging

The research highlight of the day was a long call with Dustin Lang (Toronto) to discuss interpolation, centroiding, and (crazily) lexicographic ordering. The latter is part of a project I want to do to understand how to search in a controlled way for useful statistics or informative anomalies in cosmological data. He found it amusing that my request of mathematicians for a lexicographic ordering of statistical operations was met with the reaction “that's not math, that's philosophy”.

On centroiding and interpolation: It looks like Lang is finding (perhaps not surprisingly) that standard interpolators (the much-used approximations to sinc-interpolation) in astronomy very slightly distort the point-spread function in imaging, and that distortion is a function of sub-pixel shift. He is working on making better interpolators, but both he and I are concerned about reinventing wheels. Some of the things he is worried about will affect spectroscopy as well as imaging, and, since EPRV projects are trying to do things at the 1/1000 pixel level, it might really, really matter.
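
Here is a toy illustrating the worry (my toy, not Lang's code): shift a well-sampled Gaussian PSF with a Lanczos-3 kernel and watch the recovered width wobble as a function of the sub-pixel shift:

```python
import numpy as np

# Toy: resample a Gaussian PSF at sub-pixel offsets with Lanczos-3
# interpolation, and measure the second moment ("width") each time.
def lanczos3(u):
    out = np.sinc(u) * np.sinc(u / 3.0)
    return np.where(np.abs(u) < 3.0, out, 0.0)

x = np.arange(-20, 21, dtype=float)
psf = lambda x: np.exp(-0.5 * (x / 1.2) ** 2)   # PSF with sigma = 1.2 pixels

for dv in [0.0, 0.1, 0.25, 0.5]:
    # interpolate the pixelized PSF onto a grid shifted by dv pixels
    shifted = np.array([np.sum(psf(x) * lanczos3(x - (xi + dv))) for xi in x])
    m = np.sum(shifted * (x + dv)) / np.sum(shifted)
    var = np.sum(shifted * ((x + dv) - m) ** 2) / np.sum(shifted)
    print(f"shift {dv:4.2f}: recovered sigma = {np.sqrt(var):.6f}")
```

The recovered width comes out slightly different at each sub-pixel shift, which is exactly the kind of shift-dependent PSF distortion at issue.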

2017-05-24

chemical correlates of planet system architecture

At Stars group meeting, Jo Bovy (Toronto) demonstrated to us that the red-giant branch in Gaia DR1 TGAS is populated about how you would expect from a simple star-formation history and stellar evolution tracks. This was surprising to me: The red clump is extremely prominent. This project involved building an approximate selection function for TGAS, which he has done, and released open-source!

John Brewer (Yale) showed relationships he has found between planet-system architectures and stellar chemical abundances. He cleverly designed complete samples of different kinds of planetary systems to make comparisons on a reasonable basis. He doesn't have a causal explanation or causal inference of what he is finding. But there are some very strong covariances of chemical-abundance ratios with system architectures. This makes me more excited than ever to come up with some kind of general description or parameterization of a bound few-body system that is good for inference.

I spent the afternoon at CIS 303, a middle school in the Bronx, as part of their Career and College Day. This is an opportunity for middle schoolers to discuss with people with a huge range of careers and backgrounds what they do and how they got there. So much fun. I also ran into Michael Blanton (NYU) at the event!

2017-05-23

mentoring and noise

Today I was only able to spend a small amount of time at a really valuable (and nicely structured) mentoring workshop run by Saavik Ford and CUNY faculty. The rest of the day I sprinted on my Exoplanet Research Program proposal, in which I am writing about terms in the extreme precision radial-velocity noise budget!

2017-05-22

quasi-periodic solar flares; TESS

In the morning, Daniela Huppenkothen (NYU) and I discussed Solar flares and other time-variable stellar phenomena with Chris Ick (NYU). He is going to help us take a more principled probabilistic approach to the question of whether flares contain quasi-periodic oscillations. He is headed off to learn about Gaussian Processes.
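
The kind of comparison I have in mind (a toy, not Ick's actual plan, and plain numpy rather than any particular GP package) is to score the flare residuals under a smooth kernel and a quasi-periodic one:

```python
import numpy as np

# Toy: compare GP marginal likelihoods for flare residuals under a
# smooth kernel and a quasi-periodic (SE times cosine) kernel.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
flare = 5.0 * np.exp(-t / 3.0)
y = flare * (1.0 + 0.2 * np.sin(2 * np.pi * t / 0.8)) + 0.1 * rng.standard_normal(t.size)
resid = y - flare                   # pretend the flare envelope is known

def gp_loglike(y, K, sigma=0.1):
    K = K + sigma ** 2 * np.eye(y.size)
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.sum(np.log(np.diag(L))) - 0.5 * y.size * np.log(2 * np.pi)

dt = t[:, None] - t[None, :]
K_smooth = 0.5 ** 2 * np.exp(-0.5 * (dt / 1.0) ** 2)
K_qpo = K_smooth * np.cos(2 * np.pi * dt / 0.8)

print("smooth kernel:", gp_loglike(resid, K_smooth))
print("QPO kernel   :", gp_loglike(resid, K_qpo))  # wins for this fake signal
```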

Armin Rest (STScI) was around today; he discussed image differencing with Dun Wang (NYU). After their discussions, we decided to make Wang's code easily installable, and get Rest to install and run it. Rest wants to have various image-differencing or transient-discovery pipelines running on the TESS data in real time (or as real as possible), and this could form the core of that. Excited!

2017-05-19

exploding white dwarfs

Abi Polin (Berkeley) came through NYU this week. Today she delivered a great seminar on explosions of white dwarfs. She is looking at different ignition mechanisms, and trying to predict the resulting supernova spectra and light curves. This modeling requires a huge range of physics, including gastrophysics, nuclear reaction networks, and photospheres (both for absorption and emission lines). The current models have serious limitations (like one-d, which she intends to fix during her PhD), but they strongly suggest that type Ia supernovae (the ones that are created by white-dwarf explosions) come from a narrow range in white-dwarf mass. If you go too high in mass, you over-produce nickel. If you go too low in mass, you under-produce nickel and get way under-luminous. In addition to the NYU CCPP crew, Saurabh Jha (Rutgers) and Armin Rest (STScI) were in the audience, so this talk was followed by a lively lunch! Jha suggested that the narrow mass range implied by the talk could also help with understanding the standard-candle-ness of these explosions.

2017-05-18

epoch of reionization

I had the realization that I can reduce my concerns about radial-velocity fitting (given a spectrum) to the problem of centroiding a single spectral line, and then scale up using information theory. So there is a paper to write! I sketched an abstract.
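
For concreteness, the bound I have in mind is the standard Cramér–Rao statement for a Doppler shift v, given pixel fluxes f_i with independent Gaussian noise sigma_i:

```latex
\sigma_v^2 \geq \left[ \sum_i \frac{1}{\sigma_i^2}
    \left( \frac{\partial f_i}{\partial v} \right)^2 \right]^{-1} ,
\qquad
\frac{\partial f_i}{\partial v}
    = \frac{\lambda_i}{c}\,\left.\frac{\mathrm{d}f}{\mathrm{d}\lambda}\right|_{\lambda_i} .
```

The whole business of templates, masks, and pipelines then becomes a question of how close a given estimator comes to saturating this bound.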

In the morning, Andrei Mesinger (SNS) gave a talk about the epoch of reionization. He argued fairly convincingly that, between Planck, Lyman-alpha emission from very high-redshift quasars and galaxies, and the growth of dark-matter structure, the epoch of reionization is pretty well constrained now, around redshift 7 to 8. The principal observation (from my perspective) is that the optical depth to the surface of last scattering is close to the minimum possible value (given what we know out to redshifts of 5 or 6). He also discussed what we will learn from 21-cm projects, and—like Colin Hill a few weeks ago—is looking for the right statistics. I really have to start a project that finds decisive (and symmetry-constrained) summary statistics, given simulations!

2017-05-17

Nice; counter-rotating disks

At Stars group meeting, Keith Hawkins (Columbia) summarized the Nice meeting on Gaia. Some Gaia Sprint and Camp Hogg results were highlighted there in Anthony Brown's talk, apparently. There were results on Gaia accuracy of interest to us (and testable by us), and also things about the velocity distribution in the Galaxy halo.

Tjitske Starkenburg (Flatiron) talked about counter-rotating components in disk galaxies: She would like to find observational signatures that can be identified in both simulations and data. But she also wants to understand their origins in the simulations. Interestingly, she finds many different detailed formation histories that can lead to counter-rotating components. That is consistent with their high frequency in the observed samples.

2017-05-16

falsifying results by philosophical argument

I finally got some writing done today, in the Anderson paper on the empirical, deconvolved color-magnitude diagram. We are very explicitly structuring the paper around the assumptions, and each of the assumptions has a name. This is part of my grand plan to develop a good, repeatable, useful, and informative structure for a data-analysis paper.

I missed a talk last week by Andrew Pontzen (UCL), so I found him today and discussed matters of common interest. It was a wide-ranging conversation, but two highlights were the following: We discussed causality or causal explanations in a deterministic-simulation setting. How could it be said that “mergers cause star bursts”? If everything is deterministic, isn't it equally true that star bursts cause mergers? One question is the importance of time or time ordering (or really light-cone ordering). For the statisticians who think about causality this doesn't enter explicitly. I think that some causal statements in galaxy evolution are wrong on philosophical grounds, but we decided that maybe there is a way to save causality provided that we always refer to the initial conditions (kinematic state) on a prior light cone. Oddly, in a deterministic universe, causal explanations are mixed up with free will and subjective knowledge questions.

Another thing we discussed is a very neat trick he figured out to reduce cosmic variance in simulations of the Universe: Whenever you simulate from some initial conditions, also simulate from the negative of those initial conditions (all phases rotated by 180 degrees, or all over-densities turned to under, or whatever). The average of these two simulations will cancel out some non-trivial terms in the cosmic variance!
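
The trick is a flavor of antithetic sampling. Here is a toy demonstration, with a made-up cubic nonlinearity standing in for gravity:

```python
import numpy as np

# Toy demonstration of the paired-simulation trick: run the "simulation"
# on initial conditions delta and -delta, and average each statistic.
rng = np.random.default_rng(8)
def simulate(delta):
    return delta + 0.3 * delta ** 2 + 0.1 * delta ** 3  # stand-in for gravity

stats_single, stats_paired = [], []
for _ in range(2000):
    delta = rng.standard_normal(256)        # toy "initial conditions"
    s1 = np.mean(simulate(delta))
    s2 = np.mean(simulate(-delta))
    stats_single.append(s1)
    stats_paired.append(0.5 * (s1 + s2))    # average the paired runs

print("variance, single runs:", np.var(stats_single))
print("variance, paired runs:", np.var(stats_paired))  # much smaller
```

In the toy, the odd-in-delta terms cancel exactly in the paired average; in a real simulation the cancellation is only partial, but it still removes non-trivial terms in the cosmic variance.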

The day ended with a long call with Megan Bedell (Chicago), going over my full list of noise sources in extreme precision radial-velocity data (think: finding and characterizing exoplanets). She confirmed everything in my list, added a few new things, and gave me keywords and references. I think a clear picture is emerging of how we should attack (what NASA engineers call) the tall poles. However, it is not clear that the picture will get set down on paper in time for the Exoplanet Research Program funding call!

2017-05-15

exoplanets

Today not much! I had a valuable conversation with Trisha Hinners (NG Next) about machine-learning projects with the Kepler data, and I did some pen-and-paper writing and planning for my proposal on exoplanet-related extreme precision radial-velocity measurements.

2017-05-12

vertical action is a clock

Ruth Angus (Columbia) and I discussed the state of her hierarchical Bayesian model to self-calibrate a range of stellar age indicators. Bugs are fixed and it appears to be working. We discussed the structure of a Gibbs sampler for the problem. We reviewed work Angus and also Melissa Ness (MPIA) did at the 2016 NYC Gaia Sprint on vertical action dispersion as a function of stellar age. Beautiful results! We had an epiphany and decided that we have to publish these results, without waiting for the Bayesian inference to be complete. That is, we should publish a simple empirical paper based on TGAS, proposing the general point that vertical action provides a clock with very good properties: It is not precise, but it is potentially very accurate, because it is very agnostic about what kind of star it is timing.

2017-05-11

cosmological anomalies

I had lunch with Jesse Muir (Michigan), and then she gave an informal seminar after lunch. She has been working on a number of things in cosmological measurement. One highlight is an investigation of the anomalies (strange statistical outliers or badly fit aspects) in the CMB: She has asked how they are related, and whether they are really independent. I discussed with her the possibility that we might be able to somehow lexicographically order all possible anomalies and then search for them in an ordered way, keeping track of all possible measurements and their outcomes, as a function of position in the ordering. The reason I am interested in this is because some of the anomalies are “odd enough” that I would expect them to come up pretty late in any ordering. That makes them not-that-anomalous! This somehow connects to p-values and p-hacking and so on. I also discussed with Muir the possibility of looking for anomalies in the large-scale structure. This should be an even richer playground.

2017-05-10

BHs in GCs, and a new job

In Stars group meeting, Ruth Angus (Columbia) showed her catalog of rotation periods in the Kepler and K2 fields. She has a huge number! We discussed visualizations of these that would be convincing and also possibly create new scientific leads.

Also in Stars group meeting, Arash Bahramian (MSU) spoke about black holes in globular clusters. He discussed how they use simultaneous radio and X-ray observations to separate the BHs from neutron stars: Radio reveals jet energy and X-ray reveals accretion energy, which (empirically) are different for BHs and NSs. However, in terms of making a data-driven model, the only situation in which you are confident that something is a NS is when you see X-ray bursts (because: surface effect), and the only situation in which you are confident that something is a BH is when you can see a dynamical mass substantially greater than 1.4 Solar masses (because: equation of state). He highlighted some oddities around the cluster Terzan 5, which is the globular cluster with the largest number of X-ray sources, and also an extremely high density and inferred stellar collision rate. This was followed by much discussion of relationships between collision rate and other cluster properties, and also some discussion of individual X-ray sources.

[In non-research news: Today I became an employee of the Flatiron Institute, as a new group leader within the CCA! Prior to today I was only in a consulting role.]

2017-05-08

looking at the Sun, through the freakin' walls

In the CCPP Brown-Bag talk, Duccio Pappadopulo (NYU) gave a very nice and intuitive introduction to the strong CP problem (although he really presented it as the strong T problem!). He discussed the motivation for the QCD axion and then experimental bounds on it. He mentioned at the end his own work that permits the QCD axion to have much stronger couplings to photons, and therefore be much more readily detected in the laboratory. He discussed an important kind of experiment that I had not heard about previously: The helioscope, which is an x-ray telescope in a strong magnetic field, looking at the Sun, but inside a shielded building (search "axion helioscope"). That is, the experiment asks the question: Can we see through the walls? This tests the coupling of the QCD sector and the photon to the axion, because (QCD) axions are created in the Sun, and some will convert (using the magnetic field to obtain a free photon) into x-ray photons at the helioscope. Crazy, but seriously these are real experiments! I love my job.

2017-05-05

Dr Yuqian Liu

Today it was a pleasure to participate in the PhD defense of Yuqian Liu (NYU), who has exploited the world's largest dataset on stripped supernovae, part of the huge spectral collection of Maryam Modjaz's group at NYU. She pioneered various data-driven methods for the spectral analysis. One is to create a data-driven or empirical noise model using filtering in the Fourier domain. Another is to fit shifted and broadened lines using empirical spectra and Bayesian inference. She uses these methods to automatically make uniform measurements of spectral features from very heterogeneous data from multiple sources of different levels of reliability. Her results rule out various (one might say: All!) physical models for these supernovae. Her results are all available open-source, and she has pushed her results into SNID, which is the leading software supernova classifier. Congratulations Dr Liu!

2017-05-04

asteroseismological estimators; and Dr Hahn!

Because of the availability of Dan Huber (Hawaii) in the city today, we moved Stars group meeting to Thursday! He didn't disappoint, telling us about asteroseismology projects in the Kepler and K2 data. He likes to emphasize that the >20,000 stars in the Kepler field that have measured nu-max and delta-nu have—every one of them—been looked at by (human) eye. That is, there is no fully safe automated method for measuring these. My loyal reader knows that this is a constant subject of conversation in group meeting, and has been for years now. We discussed developing better methods than what is done now.

In my mind, this is all about constructing estimators, which is something I know almost nothing about. I proposed to Stephen Feeney (Flatiron) that we simulate some data and play around with it. Sometimes good estimators can be inspired by fully Bayesian procedures. We could also go fully Bayes on this problem! We have the technology (now, with new Gaussian-Process stuff). But we anticipate serious slowness: We need methods that will work for TESS, which means they have to run on hundreds of thousands to millions of light curves.
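
Here is the flavor of toy I proposed (all numbers invented): fake a power spectrum with a granulation background plus an oscillation bump, and try the dumbest possible nu-max estimator:

```python
import numpy as np

# Toy: simulate a power spectrum (granulation background plus a Gaussian
# oscillation bump at nu_max = 120) and try a crude centroid estimator.
rng = np.random.default_rng(9)
nu = np.linspace(1.0, 300.0, 3000)              # frequency grid
background = 50.0 / (1.0 + (nu / 30.0) ** 2)    # granulation-like background
bump = 20.0 * np.exp(-0.5 * ((nu - 120.0) / 20.0) ** 2)
power = (background + bump) * rng.exponential(1.0, nu.size)  # chi-squared noise

# crude estimator: smooth, divide out the (here: known) background, and
# take the power-weighted centroid of the positive excess
smoothed = np.convolve(power, np.ones(201) / 201.0, mode="same")
excess = np.clip(smoothed / background - 1.0, 0.0, None)
print("nu_max estimate:", np.sum(nu * excess) / np.sum(excess))  # ~120
```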

In the afternoon, Chang Hoon Hahn (NYU) defended his PhD, which is on methods for making large-scale structure measurements. We have joked for many years that my cosmology group meeting is always and only about fiber collisions. (Fiber collisions: Hardware-induced configurational constraints on taking spectra or getting redshifts of galaxies that are close to one another on the sky.) This has usually been Hahn's fault, and he didn't let us down in his defense. Fiber collisions is a problem that seems like it should be easy and really, really is not. It is an easy problem to solve if you have an accurate cosmological model at small scales! But the whole point is that we don't. And in the future, when surveys use extremely complicated fiber positioners (instead of just drilling holes), the fiber-collision problem could become very severe. Very. As in: It might require knowing (accurately) very high-point functions of the galaxy distribution. More on this at some point: This problem has legs. But, in the meantime: Congratulations Dr Hahn!

2017-05-03

Kronos–Krios; photometric redshifts without training

In the early morning, Ana Bonaca (Harvard) and I discussed our information-theory project on cold stellar streams. We talked about generalizing our likelihood model or form, and what that would mean for the lower bound (on the variance of any unbiased estimator; the Cramér–Rao bound). I have homework.

At the Flatiron, instead of group meeting (which we moved to tomorrow), we had a meeting on the strange pair of stars that Semyeong Oh (Princeton) and collaborators have found, with very odd chemical differences. We worked through the figures for the paper, and all the alternative explanations for their formation, sharpening up the arguments. In a clever move, David Spergel (Flatiron) named them Kronos and Krios. More on why that, soon.

In the afternoon, in cosmology group meeting, Boris Leistedt (NYU) talked about his grand photometric-redshift plan, in which the templates and the redshifts are all estimated together in a beautiful hierarchical model. He plans to get photometric redshifts with no training redshifts whatsoever, and also no use of pre-set or known spectral templates (though he will compose the data-driven templates out of sensible spectral components). There was much discussion of the structure of the graphical model (in particular about selection effects). There was also discussion about doing low-level integrals fast or analytically.

2017-05-02

don't cross-correlate with the wrong template!

In principle, writing a funding proposal is supposed to give you an opportunity to reflect on your research program, think about different directions, and get new insights about projects not yet started. In practice it is a time of copious writing and anxiety, coupled with a lack of sleep! However, I have to admit that today my experience was the former: I figured out (in preparing my Exoplanet Research Program proposal for NASA) that I have been missing some very low-hanging fruit in my thinking about the error budget for extreme precision radial-velocity experiments:

RVs are obtained (usually) by cross-correlations, and cross-correlations only come close to saturating the Cramér–Rao bound when the template spectrum is extremely similar to the true spectrum. That just isn't even close to true for most pipelines. Could this be a big term in the error budget? Maybe not, but it has the great property that I can compute it. That's unlike most of the other terms in the error budget! I had a call with Megan Bedell (Chicago) at the end of the day to discuss the details of this. (This also relates to things I am doing with Jason Cao (NYU).)

In other news, I spent time reading about linear algebra, (oddly) to brush up on some notational things I have been kicking around. I read about tensors in Kusse and Westwig and, in the end, I was a bit disappointed: They never use the transpose operator on vectors, which I think is a mistake. However, I did finally (duh) understand the difference between contravariant and covariant tensor components, and why I have been able to do non-orthonormal geometry (my loyal reader knows that I think of statistics as a sub-field of geometry) for years without ever worrying about this issue.

2017-05-01

Dr Sanford

I gave the CCPP Brown-Bag talk today, about how the Gaia mission works, according to my own potted story. I focused on the beautiful hardware design and the self-calibration.

Before that, Cato Sanford (NYU) defended his PhD, about model non-equilibrium systems in which there are swimmers (think: cells) in a homogeneous fluid. He used a very simple Gaussian Process as the motive force for each swimmer, and then asked things like: Is there a pressure force on a container wall? Are there currents when the force landscape is non-trivial? And so on. His talk was a bit bio-stat-mech for my astrophysical brain, but I was stoked with the results, and I feel like the things we have done with Gaussian Processes might lead to intuitions in these crazy situations. The nice thing is that if you go from Brownian Motion to a GP-regulated walk, you automatically go out of equilibrium!

2017-04-29

after-Sloan-4 proposal writing, day 2

I violated house rules today and spent a Saturday continuing work from yesterday on the planning and organization of the AS4 proposal. We slowly walked through the whole proposal outline, assigning responsibilities for each section. We then walked through again, designing figures that need to be made, and assigning responsibilities for those too. It took all day! But we have a great plan for a great proposal. I'm very lucky to have this impressive set of colleagues.

2017-04-28

after-Sloan-4 proposal writing, day 1

Today was the first day of the AS-4 (After-Sloan-4) proposal-writing workshop, in which we started a sprint towards a large proposal for the Sloan Foundation. Very intelligently, Juna Kollmeier (OCIW) and Hans-Walter Rix (MPIA) started the meeting by having every participant give a long introduction, in which they not only said who they are and what they are interested in, but they also said what they thought the biggest challenges are in making this project happen. This took several hours, and got a lot of the big issues onto the table.

For me, the highlights of the day were presentations by Rick Pogge (OSU) and Niv Drory (Texas) about the hardware work that needs to happen. Pogge talked about the fiber positioning system, which will include robots, and a corrector, and a [censored] of a lot of sophisticated software (yes, I love this). It will reconfigure fast, to permit millions of exposures (something like 25 million in five years) with short exposure times. Pogge really convinced me of the feasibility of what we are planning on doing, and delivered a realistic (but aggressive) timeline and budget.

Drory talked about the Local Volume Mapper, which mates a fiber-based IFU to a range of telescopes with different focal lengths (but same f-ratio) to make 3-d data cubes at different scales for different objects and different scientific objectives. It is truly a genius idea (in part because it is so simple). He showed us that they are really, really good at making close-packed fiber bundles, something they learned how to do with MaNGA.

It was a great day of serious argument, brutally honest discussion of trade-offs, and task lists for a hard proposal-writing job ahead.

2017-04-26

void–galaxy cross-correlations, stellar system encounters

Both Flatiron group meetings were great today. In the first, Nathan Leigh (AMNH) spoke about collisions of star systems (meaning 2+1 interactions, 2+2, 2+3, and 3+3), using collisionless dynamics and the sticky star approximation (to assess collisions). He finds a simple scaling of collision probabilities in terms of combinatorics; that is, the randomness or chaos is efficient, or more efficient than you might think. The crowd had many questions about scattering in stellar systems and equipartition.

This led to a wider discussion of dynamical scattering. We asked the question: Can we learn about dynamical heating in stellar systems by looking at residual exoplanet populations (for example, if the heating is by close encounters by stars, systems should be truncated)? We concluded that wide separation binaries are probably better tracers from the perspective that they are easier to see. Then we asked: Can the Sun's own Oort cloud be used as a measure of star-star interactions? And: Are there interstellar comets? David Spergel (Flatiron) pointed out the (surprising, to me) fact that there are no comets on obviously hyperbolic orbits.

Raja Guhathakurta (UCSC) is in town; he showed an amazing video zooming in to a tiny patch of Andromeda’s disk. He discussed Julianne Dalcanton’s dust results in M31 (on which I am a co-author). He then showed us detailed velocity measurements he has made for 13,000 (!) stars in the M31 disk. He finds that the velocity dispersion of the disk grows with age, and grows faster and to larger values than in the Milky-Way disk. That led to more lunch-time speculation.

In the cosmology meeting, Shirley Ho (CMU) spoke about large-scale structure and machine learning. She asked the question: Can we use machine learning to compare simulations to data? In order to address this, she is doing a toy project: Compare simulations to simulations. She finds that a good conv-net does as well as the traditional power-spectrum analysis. This led to some productive discussion of where machine learning is most valuable in cosmology. Ben Wandelt (Paris) hypothesized that a machine-learning emulator can’t beat an n-body simulation. I disagreed (though on weak grounds)! We proposed that we set up a challenge of some kind, very well specified.

Ben Wandelt then spoke about linear inverse problems, on which he is doing very creative and promising work. He classified foreground approaches (for LSS and CMB) into Avoid or Adapt or Attack. On Avoid: He is using a low-rank covariance constraint to find foregrounds (this capitalizes on smooth wavelength (frequency) dependences, while reducing detailed assumptions). He showed that this separates signal from foreground: the signal is high-rank and CDM-like (isotropic, homogeneous, and so on), while the foreground is low-rank (smooth in wavelength space). He then switched gears and showed us an amazingly high signal-to-noise void–galaxy cross-correlation function. We discussed how the selection affects the result. The cross-correlation is strongly negative at small separations and shows an obvious Alcock–Paczynski effect. David Spergel asked: Since this is an observation of “empty space”, does it somehow falsify modified GR or radical particle things?
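
Here is a cartoon of the Avoid idea (my cartoon, not Wandelt's actual method): build a data matrix of frequency channels by sky pixels in which the foreground is smooth in frequency (hence low-rank) and the signal is not, and project out the dominant eigenmodes:

```python
import numpy as np

# Cartoon: low-rank (frequency-smooth) foreground plus high-rank signal;
# truncated SVD removes the foreground and leaves the signal behind.
rng = np.random.default_rng(11)
nfreq, npix = 64, 4096
freq = np.linspace(1.0, 2.0, nfreq)[:, None]
foreground = 100.0 * freq ** -2.7 * rng.standard_normal(npix)[None, :]  # rank 1
signal = rng.standard_normal((nfreq, npix))      # high-rank, "CDM-like"
data = foreground + signal

# project out the few dominant frequency eigenmodes
U, S, VT = np.linalg.svd(data, full_matrices=False)
cleaned = data - U[:, :2] @ np.diag(S[:2]) @ VT[:2, :]

print("rms before:", np.std(data), " after:", np.std(cleaned))
print("rms of true signal:", np.std(signal))     # cleaned ~ signal
```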

2017-04-25

Dr Geoff Ryan

Today Geoff Ryan (NYU) defended his PhD. I wrote a few things about his work here last week and he did not disappoint in the defense. The key idea I take from his work is: In an axisymmetric system (axisymmetric matter distribution and axisymmetric force law), material will not accrete without viscosity; it will settle into an incredibly long-lived disk (like Saturn's rings!). This problem has been solved by adding viscosity (artificially, but we do expect effective sub-grid viscosity from turbulence and magnetic fields), but less has been done about non-axisymmetry. Ryan shows that in the case of a binary system (this generates the non-axisymmetry), accretion can be driven without any viscosity. That's important and deep. He also talked about numerics, and also about GRB afterglows. It was a great event and we will be sad to see him go.

2017-04-24

hypothesis testing and marginalization

I had a valuable chat in the morning with Adrian Price-Whelan (Princeton) about some hypothesis testing, for stellar pairs. The hypotheses are: unbound and unrelated field stars, co-moving but unbound, and comoving because bound. We discussed this problem as a hypothesis test, and also as a parameter estimation (estimating binding energy and velocity difference). My position (that my loyal reader knows well) is that you should never do a hypothesis test when you can do a parameter estimation.

A Bayesian hypothesis test involves computing fully marginalized likelihoods (FMLs). A parameter estimation involves computing partially marginalized posteriors. When I present this difference to Dustin Lang (Toronto), he tends to say “how can marginalizing out all but one of your parameters be so much easier than marginalizing out all your parameters?”. Good question! I think the answer has to do with the difference between estimating densities (probability densities that integrate to unity) and estimating absolute probabilities (numbers that sum to unity). But I can't quite get the argument right.
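
To be explicit about the two quantities, for data D and parameters (theta_1, ..., theta_N):

```latex
p(D \mid H) = \int p(D \mid \theta, H)\, p(\theta \mid H)\, \mathrm{d}^N\theta
\quad \text{(FML: an absolute number)}

p(\theta_1 \mid D, H) \propto \int p(D \mid \theta, H)\, p(\theta \mid H)\,
    \mathrm{d}\theta_2 \cdots \mathrm{d}\theta_N
\quad \text{(marginal posterior: a density, needed only up to its normalization)}
```

The second only ever requires likelihood ratios along theta_1; the first requires an absolutely calibrated integral over the whole prior volume. Maybe that is the start of the argument I can't quite finish.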

In my mind, this is connected to an observation I have seen over at Andrew Gelman's blog more than once: When predicting the outcome of a sporting event, it is much better to predict a pdf over final scores than to predict the win/loss probability. This is absolutely my experience (context: horse racing).

2017-04-21

the last year of a giant star's life

Eliot Quataert (Berkeley) gave the astrophysics seminar today. He spoke about the last years-to-days in the lifetime of a massive star. He is interested in explaining the empirical evidence that suggests that many of these stars cough out significant mass ejection events in the last years of their lives. He has mechanisms that involve convection in the core driving gravity (not gravitational) waves in the outer parts that break at the edge of the star. His talk touched on many fundamental ideas in astrophysics, including the conditions under which an object can violate the Eddington luminosity. For mass-loss driven (effectively) by excess luminosity, you have to both exceed (some form of) the Eddington limit and deposit energy high enough up in the star's radius that there is enough total energy (luminosity times time) to unbind the outskirts. His talk also (inadvertently) touched on some points of impedance matching that I am interested in. Quataert's research style is something I admire immensely: Very simple, very fundamental arguments, backed up by very good analytic and computational work. The talk was a pleasure!

After the talk, I went to lunch with Daniela Huppenkothen (NYU), Jack Ireland (GSFC), and Andrew Inglis (GSFC). We spoke more about possible extensions of things they are working on in more Bayesian or more machine-learning directions. We also talked about the astrophysics Decadal process, and the impacts this has on astrophysics missions at NASA and projects at NSF, and comparisons to similar structures in the Solar world. Interestingly rich subject there.

2017-04-20

Solar data

In the morning, Jack Ireland (GSFC) and Andrew Inglis (GSFC) gave talks about data-intensive projects in Solar Physics. Ireland spoke about his Helioviewer project, which is a rich, multi-modal, interactive interface to the multi-channel, heterogeneous, imaging, time-stream, and event data on the Sun, coming from many different missions and facilities. It is like Google Earth for the Sun, but also with very deep links into the raw data. This project has made it very easy for scientists (and citizen scientists) from all backgrounds to interact with and obtain Solar data.

Inglis spoke about his AFINO project to characterize all Solar flares in terms of various time-series (Fourier) properties. He is interested in very similar questions for Solar flares that Huppenkothen (NYU) is interested in for neutron-star and black-hole transients. Some of the interaction during the talk was about different probabilistic approaches to power-spectrum questions in the time domain.

Over lunch I met with Ruth Angus (Columbia) to consult on her stellar chronometer projects. We discussed bringing in vertical action (yes, Galactic dynamics) as a stellar clock or age indicator. It is an odd indicator, because the vertical action (presumably) random-walks with time. This makes it a very low-precision clock! But it has many nice properties: it works for all classes of stars (possibly with subtleties); in our self-calibration context it connects age indicators of different types across different stars; and it is good at constraining old ages. We wrote some math and discussed further our MCMC sampling issues.
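
Here is a toy of the random-walk-clock point (invented heating rate, and an exponential action distribution as a crude stand-in for real vertical heating): the clock is roughly unbiased in the mean, but with order-unity fractional scatter:

```python
import numpy as np

# Toy: vertical action random-walks, so <Jz> grows linearly with age;
# the implied per-star age estimate is accurate but very imprecise.
rng = np.random.default_rng(5)
D = 1.0                                   # toy heating (diffusion) rate
ages = rng.uniform(1.0, 10.0, 100000)     # true ages (Gyr)
Jz = rng.exponential(D * ages)            # action ~ exponential, mean D * age
age_hat = Jz / D                          # the "clock": unbiased in the mean

for lo in (2.0, 5.0, 8.0):
    sel = (ages > lo) & (ages < lo + 1.0)
    print(f"true age ~{lo + 0.5:.1f}: mean estimate {np.mean(age_hat[sel]):.2f},"
          f" scatter {np.std(age_hat[sel]):.2f}")
```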

2017-04-19

after SDSS-IV; red-clump stars

At Stars group meeting, Juna Kollmeier (OCIW) spoke about the plans for the successor project to SDSS-IV. It will be an all-sky spectroscopic survey, with 15 million spectroscopic visits, on 5-ish million targets. The cadence and plan are made possible by advances in robot fiber positioning, and The Cannon, which permits inferences about stars that degrade gracefully with decreasing signal-to-noise ratio. The survey will use the 2.5-m SDSS telescope in the North, and the 2.5-m du Pont in the South. Science goals include galactic archaeology, stellar systems (binaries, triples, and so on), evolved stars, origins of the elements, TESS scientific support and follow-up, and time-domain events. The audience had many questions about operations and goals, including the maturity of the science plan. The short story is that partners who buy in to the survey now will have a lot of influence over the targeting and scientific program.

Keith Hawkins (Columbia) showed his red-clump-star models built on TGAS and 2MASS and WISE and GALEX data. He finds an intrinsic scatter of about 0.17 magnitude (RMS) in many bands, and, when the scatter is larger, there are color trends that could be calibrated out. He also, incidentally, infers a dust reddening for every star. One nice result is that he finds a huge dependence of the GALEX photometry on metallicity, which has lots of possible scientific applications. The crowd discussed the extent to which theoretical ideas support the standard-ness of RC stars.

2017-04-18

Dr Vakili

The research highlight of the day was a beautiful PhD defense by my student MJ Vakili (NYU). Vakili presented two big projects from his thesis: In one, he has developed fast mock-catalog software for understanding cosmic variance in large-scale structure surveys. In the other, he has built and run an inference method to learn the pixel-convolved point-spread function in a space-based imaging device. In both cases, he has good evidence that his methods are the best in the world. (We intend to write up the latter in the Summer.) Vakili's thesis is amazingly broad, going from pixel-level image processing work that will serve weak-lensing and other precise imaging tasks, all the way up to new methods for using computational simulations to perform principled inferences with cosmological data sets. He was granted a PhD at the end of an excellent defense and a lively set of arguments in the seminar room and in committee. Thank you, MJ, for a great body of work, and a great contribution to my scientific life.

2017-04-17

accretion onto binary black holes

I talked to Ana Bonaca (Harvard) and Lauren Anderson (Flatiron) about their projects in the morning. With Bonaca I discussed the computation of numerically stable derivatives with respect to parameters. This is not a trivial problem when the model (of which you are taking derivatives) is itself a simulation or computation. With Anderson we edited and prioritized the to-do list to finish writing the first draft of her paper.

At lunch time, Geoff Ryan (NYU) gave the CCPP brown-bag talk, about accretion modes for binary black holes. Because the black holes orbit in a cavity in the circum-binary accretion disk, and then are fed by a stream (from the inner edge of the cavity), there is an unavoidable creation of shocks, either in transient activity or in steady state. He analyzed the steady-state solution, and finds that the shocks drive accretion. It is a beautiful model for accretion that does not depend in any way on any kind of artificial or sub-grid viscosity.

2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up a great schedule for me with several of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned about the need for coronagraphs to have two (not just one) deformable mirrors to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn, in a data-driven way, to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
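
That 550 AU number comes straight from the Einstein bending angle at the solar limb; here is the back-of-the-envelope check:

```python
# Back-of-the-envelope: light grazing the Sun at impact parameter b is
# bent by alpha = 4GM / (b c^2), so the focal point sits at b / alpha.
G, c = 6.674e-11, 2.998e8          # SI units
M, R = 1.989e30, 6.957e8           # solar mass (kg) and radius (m)
AU = 1.496e11                      # meters

alpha = 4 * G * M / (R * c ** 2)   # bending angle at the limb (radians)
focus = R / alpha                  # distance to the focal point
print(focus / AU, "AU")            # ~550
```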

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they might have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium) and indeed they can see that they are deficient or inaccurate. We discussed how to take steps towards self-calibration.

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't clearly agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!