Josh Bloom, Chair of Astronomy at UC Berkeley — The link between astronomy and ML
Josh explains how astronomy and machine learning have informed each other, their current limitations, and where their intersection goes from here.
Josh is a Professor of Astronomy and Chair of the Astronomy Department at UC Berkeley. His research interests include the intersection of machine learning and physics, time-domain transient events, artificial intelligence, and optical/infrared instrumentation.
Much of Josh's group's current work can be found at ML4Science
Connect with Josh
0:00 Intro, sneak peek
1:15 How astronomy has informed ML
4:20 The big questions in astronomy today
10:15 On dark matter and dark energy
16:37 Finding life on other planets
19:55 Driving advancements in astronomy
27:05 Putting telescopes in space
31:05 Why Josh started using ML in his research
33:54 Crowdsourcing in astronomy
36:20 How ML has (and hasn't) informed astronomy
47:22 The next generation of cross-functional grad students
50:50 How Josh started coding
56:11 Incentives and maintaining research codebases
1:00:01 ML4Science's tech stack
1:02:11 Uncertainty quantification in a sensor-based world
1:04:28 Why it's not good to always get an answer
Big Rip, the hypothesis that the universe will eventually rip itself apart
Galaxy Zoo, a crowdsourcing project to classify galaxies
Green pea galaxies, a type of galaxy first discovered in 2007 as part of Galaxy Zoo
BAIR (Berkeley AI Research), a research group that Josh is a faculty member of
Note: Transcriptions are provided by a third-party service, and may contain some inaccuracies. Please submit any corrections to firstname.lastname@example.org. Thank you!
Intro, sneak peek
Astronomy and physics work in a world that's fundamentally sensor-based in terms of our observations. Because it's sensor-based, there's noise. So, unlike in the AlphaGo/Atari world where every pixel is a perfect measurement, if you take an image of the sky or you measure some time series, there's noise associated with it.
Because there's noise and because there's a finite amount of training data, if you build models off of that, you get uncertainties in the models because of their lack of expressiveness, or their overgeneralization, or overfitting. Then you also have a source of uncertainty in the thing you're trying to understand, just because fundamentally you don't have a perfect measurement: your signal-to-noise is finite.
You're listening to Gradient Dissent, a show about machine learning in the real world and I'm your host, Lukas Biewald.
Today, I'm talking to Josh Bloom, Chair of the UC Berkeley Astronomy Department. Astronomy has been the source of many innovations in data and machine learning, and it's also changed a lot because of machine learning. I'm really excited to talk to him about astronomy in general, but also about how machine learning has affected the field.
How astronomy has informed ML
Josh, thanks so much for doing this. I have so many questions about astronomy in general, as someone interested in it but not very knowledgeable about it. I'm gonna try to control myself from just going down that path.
One thing I was thinking about is, it seems like astronomy has informed...or ex-astronomers have done so much interesting work in machine learning. I was kind of wondering, if you have any thoughts on why that is, why there's such a path from astronomy into machine learning?
I feel like it must have something to do with the large data sets that you all deal with, but is there something there? I mean even you went to a startup at some point and came back into the field, right?
Yeah. The way I'd put it is that astronomers are quite good at using and co-opting tools that are built elsewhere to get our work done. Maybe the most famous example is this guy named Galileo, who heard about this thing called the telescope and, instead of pointing it at the horizon looking for enemy ships, pointed it up at the sky, and the rest is history.
We have been co-opting tools for centuries for our own benefit. Partly that's because I think astronomers are naturally curious people, but also because we're looking for an edge, fundamentally. We are often working right at the limit between where there's an obvious answer, where you have a lot of data and it's high signal-to-noise, and where it's just complete noise. The real discoveries are happening essentially at the 5-sigma level. So we are incentivized in many ways to pull in all these different tools and toolkits from all over the place.
Astronomers, obviously, aren't just using these tools, we're using a whole bunch of inference techniques and problem-solving skills in a way that I think becomes very valuable outside of the specific questions that we ask.
So, for sure, when I started a company in the machine learning space — and we can talk about the origin story of that if you're interested, and how we came to ML — we started hiring. And while we certainly weren't looking to hire people with a similar background to ours, oftentimes when we got into coding exercises and got into solving problems, a lot of the people who made it through, the ones we were excited about, had a physics or astronomy background.
They were people that could work with something that they had potentially never seen before, analyze it in a way an engineer might to get it down to its constituent parts, and then innovate on top of that.
But I think you're right, the other big component, at least in these days, is the availability of just so much data, and our need to do something with that data in real time with limited resources is a natural entrée into where machine learning comes in.
The big questions in astronomy today
From your perspective, what do you feel like the big interesting questions are right now in astronomy? What do you feel like you might learn in the next, I don't know, a couple decades that would really change this field?
Well, it's all over the place.
First of all, one way to think about astronomy is as a great laboratory for physics. So if we start there (and I think it's maybe somewhat apocryphal, but Einstein really didn't like astronomers), it turns out most tests of general relativity happen in an astronomy context. There are some terrestrial ways in which we can test GR, but most of the really interesting tests of GR these days, as has been true for 100 years, come from looking at the skies.
Specific events and the specific large-scale structure of the skies give us clues into some of the very basics of how the universe works, not just at a global scale but at a microscopic scale. So we're also testing our understanding of how atoms work, and even of what's going on inside the nucleus of atoms, by looking at what happens on extremely large scales, which is just mind-blowing to think about.
So, if we think about astronomy as that laboratory for physics, another way to ask that question would be "What are the really important physics questions that we have?"
One is, "What is the nature of matter at extremely high densities and temperatures beyond nuclear density?"
We have objects like neutron stars, which are extremely compact stars that have roughly the same mass as our sun but are the size of San Francisco. We can't reproduce that density in the lab. We need to look at how those stars behave when matter impinges upon them, or even just what their static distributions are in radius and mass, to learn something about what's happening with nuclear matter at those really high densities.
We don't know whether general relativity is right. It looks like it's really, really good on a lot of different scales, a lot of different mass scales and a lot of different length scales. But we're constantly testing this hypothesis that is general relativity, of whether it is a perfect description of how matter moves in the universe and how the universe is shaped by matter.
We know it can't be perfect because it breaks down at the quantum mechanical scale, and there are things that happen in astronomy that allow us to test some fundamental precepts and hypotheses that come out of general relativity.
In the gravitational wave world, which is essentially the ripples of space-time due to the changing locations of matter around other pieces of matter, we've had massive breakthroughs in just the last couple of years observationally, where we've seen the inspirals of black holes, and potentially the inspiral of neutron stars, smashing into each other.
In the last few seconds, there is a huge burst of gravitational wave energy which we can detect on Earth, but we can also now start to see glimmers of the idea that we can start testing some basic ideas of general relativity in those last even milliseconds. So as instrumentation gets better there, I suspect our understanding of where GR is working and where it potentially breaks down will become really interesting.
We're also interested, at cosmological scales, in understanding the expansion history of the universe and the origin of the universe. Why did the universe appear to inflate and exponentiate so rapidly, in less than a millisecond, in 10⁻⁴³ seconds? Why did it grow so quickly? We know it had to, based on observations at later times.
What's absolutely remarkable right now is that when we look at the constituent parts that drive the dynamics of how we think the universe changes — as in how fast it grows and how fast it appears to be accelerating in its growth — ordinary matter, as I'm sure you and your listeners know very well, makes up only a few percent of that recipe.
Dark matter is a quarter of it and then dark energy is the other part of it. We really don't know anything about dark energy. We don't know whether it's a particle. We don't know whether it's something even deeper than that. We don't know whether dark matter is a particle on a tiny scale that isn't predicted by the standard theory or whether it's large clumps of black holes that were left over after the primordial expansion of the universe.
So the biggest breakthroughs may come in a deep and fundamental understanding of what are those constituent parts. It may also come with a recognition that the framework that we have for understanding how the universe unfolds is right now fundamentally wrong and we'll look back on this in a couple decades and say, "Boy, we were only looking at just part of the elephant, and now when we have a bigger picture of it, things become more clear."
There's more obviously. The last thing I'll just say because I'd be remiss not to, is understanding the origins of life and the prevalence of planets that can sustain life outside of our solar system.
There is a huge push, both at Berkeley where I am and then across the world, in building new instrumentation and new theory that helps us understand how planets evolve, where habitable planets could be around sun-like stars, and how we're actually going to find them, characterize them, and potentially even understand what potentially primitive forms of life there are in those atmospheres.
On dark matter and dark energy
So I have a feeling this is probably an annoying question, but it comes up a lot when I talk to ML people in casual conversation who don't really know much about astronomy, so I'll just ask it because I hear it a lot and I'm kind of curious.
When I hear about dark energy and dark matter, I wonder do you really...is that just sort of like a fudge factor that shows that we don't really understand what the physical laws of the universe are? Is there a reason to call it matter and energy? Is there some sense that you're sure that it is matter?
In some sense, there are kind of two fudge factors. Fudge factor A which we'll call dark matter and fudge factor B which we'll call dark energy.
Dark matter is much better understood, in terms of how it behaves, than dark energy. There's a lot of evidence that this stuff actually exists. I won't go into all the details here, but on many different scales we have observational evidence for it. And while there are some people in the theory world who feel they can explain away some pieces of that evidence, there is no successful alternative theory that explains away this fudge factor with just a different way of thinking about the universe.
It looks like it's actually stuff. We know it interacts gravitationally and we hope that it interacts weakly in other ways. There are lots of endeavors actually looking for dark matter within a lab or within a cave and there's some other ideas of how astronomers could actually find the details of how dark matter interacts with itself, maybe with ordinary matter.
So yes, it's a fudge factor in some sense to explain the overall evolution of the universe, but it was originally discovered to explain the anomalously fast motions of galaxies and clusters of galaxies. You just sort of add up the total mass associated with the light of galaxies, because we know how to roughly map the light of a star to its rough mass and the distribution thereof. There just wasn't enough mass, so there was this missing mass that's associated with galaxies.
It turns out there's also missing mass associated with our own galaxy. We've been able to systematically rule out ordinary matter like electrons and protons, but I think the best bet is that it is some other series of particles that we haven't yet envisioned, but one day we may be able to find. On the dark energy side-
Do you know the distribution on a scale of a solar system? Can you tell where it would be from gravitational effects? It sounds like it follows a similar distribution to matter we can observe.
Actually, we think the dark matter within our own galaxy, which is all around us, exists either as, essentially, a fluid (there are particles of dark matter running through you all the time) or in extreme clumps in the form of primordial black holes. In that other extreme, where each clump has the mass of a comet or a mountain, there'd be dozens of those flying through our solar system.
There are potential ways in which we could actually discover these dark clumps and we have a whole series of observations looking for the particle version of that.
It behaves a lot like ordinary matter. But in our own galaxy, while gas and stars — at least in the local solar neighborhood — are moving around the galaxy at of order 200 kilometers a second, we think that this fluid, or these large clumps of it, is moving in slightly different ways than the ordinary matter.
So the ordinary matter and the dark matter, by definition of their gravitational interactions, actually do talk to each other and they do influence each other. But because the dark matter is sort of non-compressive (unlike gas, which heats up when you smash it together), this stuff, this fluid, sloshes around back and forth.
I don't know the way in which we'd be able to detect the amount of dark matter that we think must be, let's say, in the sun because there's almost certainly some amount of dark matter that's been captured in the sun. It's such a small fraction compared to ordinary matter around us that...There are plenty of ways in which it could be hiding in plain sight.
On the dark energy side, that is very much more of a fudge factor to explain the dynamics of how the universe expands. In fact, again, going back to Einstein, when he was working out some of the dynamics of the universe, he had this thing that he called his biggest blunder, which was coming up with this fudge factor constant to make the left-hand side and the right hand side of the equation work.
Then, when it was found in the '30s, '40s, and '50s that there wasn't any of this accelerating expansion, he thought it was a big blunder. But ironically, we actually needed that fudge factor.
What's interesting is that we have that as a constant: it's got a constant amount of energy per unit volume, that's the simplest way to think about it. But we also don't know whether it's constant in time. Its value could actually be changing, so there could be a temporal dynamic.
Wouldn't you see that in different rates of expansion?
Yeah. So there would be different rates. There's already different rates of expansion just because in the early history of the universe, ordinary matter and dark matter dominated the expansion, as in it was sort of slowing up.
As the universe became more tenuous and this material basically lost its dominance in these equations, there was a time several billion years ago when dark energy sort of took over and is now the thing driving the dynamics.
If dark energy is a constant and we've measured it well enough, then the universe will just continue to exponentiate and grow. It won't be exactly an evaporation, but it's what's called the Big Rip¹.
It will basically all just rip apart from each other, and cosmology in the next hundred billion years will stop being about the observations of 40 billion galaxies and turn into just observations of stuff in our solar neighborhood because all the other galaxies will run far away from us.
But we don't know that that's the case. It could actually...that constant could turn off for some reason. It could have other terms that haven't yet expressed themselves.
Finding life on other planets
Well, I have to also ask you about the other topic that you brought up on finding so many habitable or seemingly possibly habitable planets in the universe, at least that we can see.
Do you have any kind of thoughts on that? Are there theories why we don't see life on... If there's so many planets out there, why we don't see other life?
Well, I think we know now that life, at least intelligent life, is not teeming, right? Enrico Fermi had the Fermi Paradox: "If life is so ubiquitous, why aren't they all around?"
It's pretty clear that it's not as ubiquitous as "Every solar system has intelligent life", that's not a big surprise. What we haven't yet nailed down in the overall demographics is "What is the exact set of conditions that could give rise to any sort of life?"
We have a reasonable understanding now that of order...one solar system around a sun-like star will have of order one habitable planet. Maybe it's two or maybe it's a half, but it's not zero, and it's not ten.
Then getting into the actual chemistry, and the biology, of what leads to life that's sustainable, that's really where the cutting-edge questions are on the theory side. Obviously we have some great laboratories in our own solar system to ask those questions, in the form of the atmospheres of other planets. And we're just now entering an era where we have sensitive enough equipment to be able to measure detailed chemical properties of the atmospheres of planets in other solar systems.
What I think will become clear over the next, let's say, two decades is exactly what the rate is of planets that are in habitable zones around their sun-like stars that appear to be in some sort of disequilibrium when you look at the overall chemistry and the temperature profile of those atmospheres.
How is it that we have something that is a volatile element that is still around? It means that there's something else on the surface that's producing it. It won't guarantee that it's life.
The question about finding other intelligent life that we could potentially interact with in some sense is beyond the horizon of modern astronomy, but there are groups, as you know, that are using modern astronomy tools to do those sorts of searches.
When you say disequilibrium, is that something that we would notice about Earth if we were far away from it and looking at it?
It's a little bit outside of my field, but if you took a spectrum of the Earth's atmosphere...and people have done this by looking at the Earth shine.
So, right around the time of the crescent moon as it's setting right after the sunset, you often can see the un-illuminated part of the moon and that's because what you're seeing is the sun's light reflecting off of the Earth's atmosphere, bouncing off the moon, and coming back to your eyes.
You can take a spectrum of that earthshine and there are signatures in that that if we saw that in other planets, we would say, "Aha, there is something that..." — and I don't know the details of which element does what — "...that is not in an equilibrium given the temperature of the Earth."
Oh cool, interesting.
Driving advancements in astronomy
It seems like astronomy has had so many advances in my lifetime, which is so cool. Do you think that that's mostly due to better equipment to see more or do you think it's like better use or figuring things out from what we're seeing?
I guess it must be both, but some of the astronomy experiments that I read about just seem totally brilliant in what they synthesize. It seems like we only get one snapshot of the world around us, and it's incredible to me how much physics, how many things, we discover from just looking up into space.
I think a big part of that is indeed the 20th century was the opening of our eyes beyond visible wavelengths.
X-ray astronomy really only started in the '50s, gamma ray astronomy around that time as well. Once we get above the Earth's atmosphere, which absorbs a lot of the light thankfully, at other wavebands we just see a whole universe that we either didn't imagine or only had a vague idea that could be out there.
So a big part of the 20th century was just opening up our eyes to new wavebands and understanding the connection between different objects and events like supernovae, how they are connected to each other across different wavebands, and what their role is in driving the dynamics of a specific galaxy and what the role is in the creation of elements.
We didn't really even know how to ask those questions I think, properly, until the last several decades. So part of it's that, and that opening of the eyes is driven a lot by technology. But then it's, "Okay, well, I have my eyes open, but they're blurry. So how do I sharpen them?"
There are plenty of things again back to the original conversation at the beginning around co-opting tools.
Astronomers learned about adaptive optics being used for military purposes and were able to get much clearer images of the sky. We're now pointing lasers up into the atmosphere and exciting a sodium layer high up, which acts as a temporary star, and we have corrective lenses that, many times a second, correct the wavefront errors that arise as the star's light comes through the atmosphere and gets blurred.
So we have all that kind of technology. Of course, digital technology, starting in the early 1980s, meant we were taking very high-dynamic-range images of the same parts of the sky we were looking at before, so we were able to see farther away and see fainter objects. At the same time, there were a number of innovations in the telescope world, even on the ground, that allowed us to build bigger and bigger telescopes.
In the end, we haven't come that far from Galileo's telescope to the world's largest telescopes, the 10-meter-class telescopes of today. It's just bigger and bigger apertures collecting light.
But the innovations that it's taken for us to get there have been real, and have been driven by the need for seeing fainter objects, seeing with greater clarity, seeing across more wavebands.
Some of the biggest discoveries in some sense happened outside of the electromagnetic band, over the last many years. The observations of very high-energy cosmic rays — so these very high-energy charged particles moving very close to the speed of light, understanding the origins of those is still an active topic — and the discovery of gravitational waves directly using interferometers on the ground is a massive innovation that took arguably 40 years for us to get there technologically and several billion dollars of taxpayer money that went into that.
It took a large number of people being very, very convinced that the physics was right and that they'd be able to get there. The fact that they were is one of the great crowning achievements of our field: a recognition that, driven by theory, we were able to invest billions of dollars to get to a set of discoveries that we could only have dreamed of 10 years ago.
Do you think that gravitational wave sensor was more of an engineering feat? Because it just seems so incredible to be able to sense something so small. Or was it more of a theoretical... what was the hardest part of that?
Well, in the early days — and this predated me — theorists were in active discussion about whether you could even use these laser interferometers to look for this deformation.
Once people became convinced that the theory was right, you're exactly right, it became an engineering feat. Which, maybe of more interest to your listeners, is about project management and people management, about bringing the right people to the table with the right skill sets. And recognizing, of course, that the entire endeavor doesn't need to be one big innovation, right?
There are places where you absolutely need to innovate and create new things that don't exist for you to get to your goal. But to do this essentially on time and on budget on a 20-, 30-year time scale is just mind-boggling.
Is the achievement of that just sort of verifying that gravitational waves exist or do we have kind of a new type of sensor that might somehow find interesting stuff in our world?
Well, without trying to prejudice where things are going, I will say that the history of astronomy — in that context of opening up your eyes to new things — invariably leads to discoveries that were unexpected.
So far, I'd say the only large unexpected thing that's come out of the gravitational wave set of observations is the sheer mass, the enormity of the individual black holes that are colliding.
There wasn't a lot of great motivation to say that we'd be seeing 100- and 200-solar-mass black holes that were colliding into each other. In some sense, it comes back to the astrophysicist to ask the question, "How do you even make 100-solar-mass black holes and then put them in the vicinity of another 100-solar-mass black hole?"
We were thinking in the end it would be 10-solar-mass and 20-solar-mass black holes, that was the best bet if you asked most astronomers. So there's a little bit of surprise on that.
None of us were surprised of the existence of gravitational waves. There had actually been Nobel prizes given out for the indirect discovery of the existence of gravitational radiation by looking at the orbit decay of neutron stars in a binary system.
We had known it was very likely that this existed, but the direct detection was a very beautiful vindication. Now that we're there and we're having to grapple with understanding the demographics of the black hole population, a really interesting question is, as I was saying earlier, how can we use our observations going forward to test general ideas about general relativity?
Putting telescopes in space
When I was a kid, I remember learning about the Hubble Telescope and the excitement around... I mean, in general putting telescopes into space was this big exciting project that seemed really cool.
Have we gotten so good at signal processing or undoing the effects of the atmosphere that that's no longer such an important thing to do to get our telescopes up in space?
It also seems like when I was a kid I had this sense that telescopes were getting bigger and bigger and we were seeing more and more things, but has that maybe stopped? Do we still aspire to make even more gigantic telescopes to see deeper in space? Where do you think that's going?
It's a great question. It depends on who you ask. There isn't a general consensus of the right answer for that, which is good because the right answer is you do what the science demands.
There is a very successful satellite that was launched into space called the TESS satellite², whose sole purpose was to look for Earth-mass planets around sun-like stars using the so-called transit technique, where a planet moves in front of its parent star and that star slightly dims.
To do that, you need to see the dimming of a star in one part in 10⁵ or one part in 10⁶, which you just can't do from the ground. There's just too much atmospheric flickering that you just can't correct.
We can get down to one part in 10³ or maybe one part in 10⁴ from the ground, but that's pretty much as far as we're able to go. So for finding exoplanets of Earth-mass size, you pretty much have to go into space. Rather than build a huge telescope, what they did is mount the equivalent of a bunch of glorified cannons and a bunch of glorified iPhones to look at a very large swath of the sky, so they could study many, many stars simultaneously.
There, they weren't all that interested in looking at stars that were faint because once you discover one of these planets, you want to have lots and lots of photons with other follow-up facilities to actually do all the work. So they actually needed a very wide field to get very bright stars.
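The precision numbers here follow from simple geometry: the fractional dimming during a transit is just the ratio of disk areas, (R_planet/R_star)². A quick back-of-the-envelope check (a standalone sketch, not from the episode; radii are approximate published values):

```python
# Transit depth = (R_planet / R_star)^2: the fraction of the stellar
# disk the planet covers, and hence the fractional dimming observed.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371
R_JUPITER_KM = 69_911

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional dimming when the planet crosses its star's disk."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth around Sun:   {transit_depth(R_EARTH_KM):.1e}")
print(f"Jupiter around Sun: {transit_depth(R_JUPITER_KM):.1e}")
```

An Earth analog dims a sun-like star by roughly 8 × 10⁻⁵, while a Jupiter analog dims it by about 1%. This is why ground-based precision of one part in 10³ to 10⁴ catches hot Jupiters but not Earth analogs, and why TESS had to go to space.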
But there are other people who are launching large satellites with large mirrors because they want to look at very faint explosions, supernovae in very, very distant galaxies. Yes, you can do that from the ground. It just turns out from a price perspective, there are some types of science that are actually easier and cheaper to do from space.
My own interest depends upon, "Can I do this from the ground? If not, what's the simplest and cheapest thing that we can do from space?"
One of the things I'm really excited about, which you may not be aware of, is there is quite a big and interesting push now towards smaller format satellites, i.e. CubeSats, in part because if you have a very dedicated science goal, and you need to look at, let's say, one object for just a month but you need to do that at one second cadence, that's really hard to do from the ground.
But you could potentially do it from space very, very cheaply now because the actual parts are largely commoditized, and the launch — which is a very dominant cost for heavy space vehicles — is more or less zero because there's so many launch vehicles going up in space for all these different reasons. You can piggyback a whole bunch of these small satellites more or less for free.
So, what I think you'll see in the next 10 years or so is a Renaissance not so much at the large-telescope level, but at the small-telescope level in space.
And the last thing I'll say is that we sometimes have to go to space because the Earth's atmosphere blocks certain wavelengths. If we're interested, for instance, in the ultraviolet sky, in phenomena in the ultraviolet, we couldn't do anything from the ground, because our ozone layer blocks off most of the UV light.
Why Josh started using ML in his research
Cool. Well, I guess I want to make sure I ask you some questions about machine learning also.
I wanted to ask you about...so you have this group ML4Science³, right? I'm curious, what inspired you to put that together and what kinds of stuff you work on there?
Well, it might be worth talking a little about how we stumbled upon machine learning in my research and where that's led to.
About 12 or 13 years ago, we were dealing with lots of images coming off of telescopes from the ground. The normal behavior when you get lots of data had been, and in many circles still is, to just hire more grad students to look at the data.
I was looking for ways to scale our way out of what was a very repetitive inference task, which was the discovery of new events in the sky. What we typically deal with is a new image that's taken of the sky, and you have a template image of that same part of the sky taken in the months prior where you've stacked up a whole bunch of really good images, and you subtract the two off.
That subtraction process is imperfect because of the atmosphere, because of instrumentation effects. What people would do is look at postage stamps around all the five-sigma positive signals, but most of those are actually spurious.
The first place where we landed in the utility of machine learning for my own research was creating what we call a real bogus detector where we trained off of good subtractions, i.e. of real objects, and bad ones because of all these different detector effects and instrument effects.
We were able to build something with good enough false positive and false negative rates that we were able to put that into production and reduce the amount of time it would take a person to look at a whole night's worth of candidates from hours down to minutes, and still keep a person in the loop.
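As an illustration of the real-bogus idea, here's a minimal sketch with scikit-learn: a random forest trained to separate real detections from subtraction artifacts, with a confidence threshold so a person still reviews only the most promising candidates. The features and their distributions here are invented for illustration; real pipelines extract many measured features from each postage stamp.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features from difference-image postage stamps
# (e.g. width, ellipticity, flux ratio) -- invented for this sketch.
n = 2000
real = rng.normal(loc=[2.5, 0.1, 1.0], scale=0.3, size=(n, 3))   # real transients
bogus = rng.normal(loc=[1.0, 0.6, 0.2], scale=0.5, size=(n, 3))  # subtraction artifacts

X = np.vstack([real, bogus])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real, 0 = bogus

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Keep a person in the loop: only high-confidence candidates
# get flagged for visual inspection.
scores = clf.predict_proba(X_te)[:, 1]
candidates = scores > 0.9
```

The threshold trades false positives against false negatives, which is exactly the knob that turned a night's worth of candidate vetting from hours into minutes.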
At the time, I had the conceit that if we can do this, it means then we don't need people to look at the follow-up data. We can actually just get to the point of almost writing a paper without any people in the loop.
But as you know well from your current work in your previous company, people in the real-time loop is still important and can be very important even when it's machine learning-assisted.
So, that was very successful in that...and that was back in the old days of random forests, before deep learning kind of had its Renaissance. Now, this idea of real bogus discovery is...it happens pretty much in every project going way beyond where we were a while ago, now using modern deep learning techniques.
Crowdsourcing in astronomy
Before you go further...in my previous work, I always admired the site Galaxy Zoo⁴ where they kind of got lots of people to crowdsource some of the labeling of these images. Did you look at that at all? That always seemed like such a cool project.
I did look at that. Yeah, I did look at that.
I think crowdsourcing in astronomy has been really wonderful as an outreach tool, and there certainly have been some scientific papers that have come out of it. In particular, there was the discovery of a weird class of gas around certain types of galaxies that was made by somebody looking at images of galaxies⁵.
But a lot of the labeling, if I'm being really honest, by people in the Galaxy Zoo world could have been done and ought to have been done by a machine learning classifier.
Is this a spiral galaxy? Is this a red galaxy? The questions that generally are asked in that world...I've done this in classes that I've taught: we have a student, for a final project, try to reproduce the ROC curves of people in classification, and they can do well. We actually showed, for the supernova classification challenge, that we were able to build a machine learning classifier off of the original training data from Galaxy Zoo and outperform Galaxy Zoo in a false positive, false negative sense.
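For context, computing an ROC curve for such a classifier is a few lines with scikit-learn; each human labeler corresponds to a single false-positive/true-positive operating point, and the classifier "outperforms" them wherever its curve passes above that point. The scores below are synthetic, not Galaxy Zoo data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)

# Synthetic ground truth and classifier scores for 1000 images
y_true = rng.integers(0, 2, size=1000)
scores = np.where(y_true == 1,
                  rng.normal(0.7, 0.2, 1000),   # scores on true positives
                  rng.normal(0.3, 0.2, 1000))   # scores on true negatives

# fpr/tpr trace out the full trade-off curve; a human labeler is one point on it
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
```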
One of the challenges that I think all of us have in employing people to do repetitive inference tasks is to ask ourselves the hard question of "Can I have a machine do it?"
If the goal is to involve people so that they're involved in research and they're helping, that's fantastic. If the goal is to get people looking at data because maybe they'll also see something and answer a question that we didn't even ask, that's fantastic as well.
But for the specific task that a lot of crowdsourcing questions have asked, I think especially with where computer vision has gotten to, we're able to do that better. Moreover, we're able to do it faster, and we're able to do it in a repeatable way. One of the other challenges, of course, is that if you ask somebody to label a bunch of data, and then you ask them to come back tomorrow after they've had a beer and label the same data, you'll get a different answer.
From understanding the demographics of everything we see, I think it becomes a lot harder when you have people that are deeply part of that process.
How ML has (and hasn't) informed astronomy
So, I cut you off though. I mean, you were doing this quite a while ago and especially vision techniques, I think, have massively improved. I don't know even "especially vision techniques", but there's this moment where vision got quite a lot better.
Did that affect the way you used machine learning in your work?
Yeah. We always are careful in a sense to try not to look around in astronomy and say, "That's a computer vision task. That's clearly solvable now by CNNs, so let's go work on that problem."
There is a little bit of "everything looks like a nail because we have this really cool hammer". That was a computer vision task, this real bogus detector that we had to solve if we were to break this grad student bottleneck.
There are plenty of tasks that people are doing asking questions of images that were around before, but perhaps weren't as interesting because we had no way of solving those problems and now we can do those at scale.
I actually focus less on images now and focus more on irregular time series data. But I think one of the important things to recognize about where astronomy is, maybe relative to many of the other fields of the people you've talked about on this podcast, is that we haven't had that moment that maybe that existed in NLP where Jelinek said "Every time I fire a linguist, my language detector gets better."
The idea that if you start removing domain experts out of the loop and you actually start building language models, just learning off of data, and it gets better and better — we don't have that moment in astronomy.
Computer vision is the same thing too. You fire a bunch of old-school computer vision experts that learned about Hough transforms and stuff, and now you just throw it into a big CNN with lots of training data, and you get better answers than what you were ever able to do in the past.
That hasn't happened in astronomy. We've used ML in lots of different places as accelerants and as surrogates to harder computations so we can get faster answers. We can do inference at scale in ways we were unable to do before. But it's the same thing in biology, right? ML didn't invent CRISPR and Katalin Karikó — who was this person who toiled away for decades trying to understand how mRNA could lead to a vaccine — she was actively denied tenure and actively denied grants. She had nothing to do with ML, but if it wasn't for her we wouldn't have vaccines for COVID.
Biology, I think, also hasn't had its ML moment where you can start firing domain experts and start doing things. Right now, physics, astronomy, chemistry, for the most part we're working in a world where machine learning is this really big important tool in our toolbox but it's not become the fundamental driver of how new insights happen.
I guess one key difference here though is the work product of astronomy is kind of explaining the world that we live in, right?
Whereas...first of all, I'm not sure if I agree with the comment about linguists. I don't want to go on record —
It's not my quote.
No, I know, I know what you mean.
I think definitely linguists still do the best job of explaining language in the... I mean, I think linguists would probably say we use ML techniques to understand language better, not like we've replaced ourselves...although, it seems like modern translation techniques are less informed by linguistics than I might have expected when I was younger.
I wonder if it's like a function of a domain being more "trying to engineer a certain solution to a specific problem" versus "do some kind of explanation". I mean, we've actually talked to a whole bunch of biologists and it does sort of seem like some of the processes around drug discovery are starting to be more and more informed by ML, and moving in that direction.
I think that's where we exactly are right now, is there is a huge amount of ML that is informing astronomy, but I don't think we're there anywhere near where the NLP world is.
In part it's because we haven't...to your point, we haven't really been able to articulate a set of outcomes that are comparable or have as much weight or as much import as an NLP task like translation. You can directly correlate, I assume, the quality of a translation from language A to language B to some dollar outcome. And in astronomy, we don't have the ability to do that.
So our loss function is a little bit more complicated. As we're learning these various different tasks as part of our workflow, we don't have the ability in the same way many other fields do to articulate that loss function in terms that have this monetary value.
When you ask this question about "What is the nature of dark energy or dark matter?" or "How many exoplanets are out there that host life?", those are in some sense quantifiable answers. But as you're saying, that's sort of where more of the explainability has to come in.
I certainly don't think we're even trying to get to the point where we fire a physicist so that I can hire a computer scientist. It's going to be the marriage of those two people, or as an individual and their skill sets, who are going to make a lot of progress. I think the really exciting place where we could get to — and there are little tiny pieces of this starting to happen — is whether an application of ML to a bunch of data can be something that leads to a discovery on a bunch of questions that we didn't even know how to ask.
That would be a real hallmark moment in our field. Right now everything is done in largely a supervised context. Obviously, we've sort of had some semi-supervised and unsupervised ways of looking for anomalies and outliers, and things like that. But even that, it becomes a guide to a domain scientist looking at this and say, "Oh, yeah. Of course, I know what these things are", or "This is because the data is spurious."
Maybe what's really fundamental, if I think about it, is that the job of these ML pipelines that we build on different parts of our data isn't so much about prediction in the same way that if I need to predict what the next word is or I need to predict if this is a cat or a dog or what the best thing to show somebody is next, that is the proof in the pudding and you've done well because you can measure what the outcomes are after that. If I make a prediction in astronomy, that's really just for hypothesis testing.
If I have a new theory that's gleaned off of data, the job of that theory is to make a prediction about what happens if I observe outside of the domain of the data that I already have, to falsify itself. We haven't really wrapped our heads around the idea that ML in the context of the physical sciences isn't just about making predictions at scale so that we can get slightly better data farther down the work chain.
If it's going to actually drive our deeper understanding of how the universe works, it has to couch itself in the terms of hypothesis testing, Occam's razor. We haven't really gotten there yet.
I'm so surprised to hear you say that because it seems like we fund all this work to make better devices and telescopes, and it seems like they pay us back in terms of these really awesome new understandings about the physical world.
It seems like you make a bigger telescope, that's just seeing things slightly better however you put it, right? Isn't it similar?
If ML can help you get better data to inform your predictions, wouldn't that be a big deal? Does it really need to... Do you really need to be like completely replaced by ML for it to be...?
No, I certainly don't want to come off in trying to make the argument that ML hasn't been important.
We're currently working on a project where a big part of that whole data taking, data planning, data reduction, initial discovery, initial inference, initial follow-up, that all happens. There's little pieces of ML through that entire chain, that all happens without people in the loop now, which is absolutely incredible.
Telescopes more or less talking to telescopes, mediated by ML. This is where we are. There's only going to be more of that going on. What we're doing is optimizing our resources and our resource allocation because we're using ML.
But I still see that as fundamentally an accelerant and a surrogate to what we were pretty much doing in the past. I haven't seen anything that fundamentally changes the way we conduct ourselves as physicists. But again, as I said, there are little pieces starting to show up, [like] the rediscovery of the Higgs boson using pure ML without reference to the basic physics of how particles interact⁶.
Do you think that would've worked without knowing about it? I mean, is that-
That's the question, right?
Until we get to the point where someone says, "I ran my ML pipeline on this particle physics data and I saw this new thing. And everyone in the group didn't believe me until they got 10 times more data and it turns out it was there." We haven't really, really gotten there yet.
There's been a few places where people have found another exoplanet in a complicated data set that people hadn't seen before. But astronomers for the most part are still Bayesians, and we're still governed by Bayes' rule where we come to our problem with a bunch of priors. We get data that updates our beliefs and we get slightly better, or sometimes much better understandings, in our posteriors.
If we talk about inference and understanding, we need to couch it in terms of what we think are the physical properties and the physical things and the parameters that describe the object that we're looking at. We're getting better at that.
One of the big things I'm doing in ML right now is trying to use different types of networks in a whole new class of approaches called likelihood-free inference to go directly from raw observations to posteriors or approximate posteriors. I think that's extremely exciting and can be applied to a whole bunch of different places.
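The simplest member of the likelihood-free (simulation-based) inference family is rejection ABC: draw parameters from the prior, run the simulator, and keep the draws whose simulated summary statistics land close to the observed ones. A toy sketch on an invented Gaussian-mean problem, not the neural approaches described above:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data generated from an unknown mean we want to infer
true_mu = 3.0
observed = rng.normal(true_mu, 1.0, size=200)
obs_summary = observed.mean()

def abc_posterior(n_draws=20_000, eps=0.05):
    """Rejection ABC: prior draws -> simulate -> keep close matches."""
    mu = rng.uniform(-10, 10, size=n_draws)                   # prior
    sims = rng.normal(mu[:, None], 1.0, size=(n_draws, 200))  # simulator
    keep = np.abs(sims.mean(axis=1) - obs_summary) < eps      # rejection step
    return mu[keep]

# Accepted draws approximate the posterior over mu,
# with no likelihood ever written down.
posterior = abc_posterior()
```

Modern likelihood-free methods replace the brute-force rejection step with a learned network, but the goal is the same: raw observations in, approximate posterior out.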
Cool. That's so cool.
The next generation of cross-functional grad students
One thing that I wonder about...it must be interesting being in your shoes. Are you doing most of this with ML grad students? Are you at the point with your data pipelines where you need to pull experts from industry or maybe...
I mean, it's so funny how much of our data pipeline stuff has come originally from astronomy and sciences in general, so maybe it's always at the cutting edge, or do you feel like you need to get experts in terms of just handling these volumes of data sets and building these gigantic models?
It's a great question. So again, I think the answer depends.
There are lots of examples in my own research and my own work where hiring very good data engineers, and having some ML expertise on the team, suffices. It's where you actually need to innovate, create new algorithms, take some existing network, and completely blow it up and change the way that it works, that you do need somebody with deep domain background in CS, ML.
One of the beautiful things about being on the Berkeley campus is how just everyone across campus is looking to work with each other because, again, we all recognize, at least from the physical domain side, that there is incredible work that's happening in the Computer Science department, the Stats Department. I've just become a member of the faculty of BAIR⁷, the Berkeley AI Research group, so I get to interact with those grad students and those postdocs.
We still, I think, face the challenge that any academic arena does when crossing over into other fields of trying to make the kinds of problems we have compelling for the other side, and have the other side recognize that they're not just setting up a Spark cluster for us, and downloading ResNet.
What the people in computer science and stats need to realize is that we are asking questions of data in a way that they are not — of the benchmark data sets that they're often working on algorithmically — and because of that there needs to be some real fundamental innovation.
I've been really fortunate in my career to have gotten grants that have allowed me to hire people outside of the traditional astronomy background. I hired PhDs in computer science and statistics, and that's where some of the most interesting innovations happen. Where we're at, I'd say now as a profession, is really struggling with the idea of how much should our students have to learn for them to be able to work on this as their main endeavor.
We don't have as part of our curriculum a deep training in stats, let alone ML, let alone software engineering. I don't know where they find the time, and where they will find the time going forward, to be able to get all of that in at a fundamental level.
We're working on it, we're trying. Berkeley has started a new data science major that the astronomy department is connecting up into with their own classes. But there isn't, at the national level, a holistic understanding of how we're going to do training of the next generation of physical scientists, so they're not just conversant in ML but they can actually do a bunch themselves.
How Josh started coding
Actually, the question I wanted to ask you — which, this is a good segue — when I was looking at your website, I found hundreds of research papers, but also mixed in some opinionated blog-like posts on programming language details.
I'm actually wondering for you, and maybe something I'm asking myself, how do you stay current on... I mean, how did you even find time to get to a high-level programming at all and how do you stay on top of that? Are you spending time writing code yourself as a professor?
Yeah, I write a lot of code. That's some of my happiest times. In some sense, that's my hobby.
I came to programming early on in my academic career when I was an undergraduate, where I was basically told by my future advisor at Los Alamos I can't work there for the summer unless I've taken a class in C, and I did. That was more or less the only class I ever took in computer science.
Again, it was this matter of necessity, just like it is with building better telescopes.
I decided, when I was a postdoc, to automate an old mothballed telescope — which was a fairly large one-meter class telescope in Arizona — and take all the pieces that had been manual when the telescope had been run before and automate every single piece of it so it could run autonomously.
I asked a friend of mine at Los Alamos, "Which language should I write in?" And he said "Python." I said, "What's that?" He said, "Just do it. It's a cool language."
Wait. What year was this?
That was 2002.
I wrote a whole telescope automation software package using state machines and connecting up to device drivers in C++ in 2002, where I was just kind of feeling my way through it. I think I wrote my own datetime module and I didn't realize that datetime was there.
So, I just stumbled upon it. And then what you do when you're an academic and you wind up realizing something is interesting, is that you feel bad that you're not teaching it to your students, so you do.
So I started in 2008 a bunch of Python boot camps on campus to get people into Python, in part because especially at Berkeley, we caught the open source ethos pretty early and the kinds of languages that people around me were using — like IDL, interactive data language, and MATLAB — were just expensive.
Moreover, as scientists, we certainly want to understand the algorithms that we apply and we want to at least be able to look under the hood if we need to. I started evangelizing Python around these parts and started building classes on top of Python. So, a graduate level seminar on "how do you actually use Python in the real world", ranging everywhere from doing stats to scaling up Python programs to testing frameworks to interacting with hardware.
That class still goes on, but I've got to say I've ossified a little bit around Python. I've spent a little bit of time with a few other languages, but for me I've become conversant enough and gotten fairly deep into this scientific Python community.
Jupyter, for instance, with Fernando Perez here in the Stats Department, has really been a huge part of what I've used for teaching for a long time. The NumPy and SciPy stack have a lot of activity here on campus as well. Stefan van der Walt has a huge role in that.
So, it's sort of in the water, I'd say. Definitely, the proof is in the pudding, having recognized that Python is extremely versatile as a sort of superglue language for all the kind of stuff that we do.
Yes, I still code. Last summer during the pandemic, the happiest times were me learning React, so I could build this large-scale React app that we're doing for astronomers to interact over data.
Isn't React fun? React is so much fun for me, I thought I hated frontend.
I don't know if I would use the word fun. What I love about it, it's just so wonderfully different than the way you think about Python programming.
And obviously, it's rewarding in a sense that you build it, you ship it, and users see it right away, in a way that if you build some cool Python tool, you may be the only one in the world that uses it. Just because it's on PyPI, it doesn't mean that somebody is actually going to download it, use it.
Well, did you use TypeScript with React?
We were, like, JSX kind of...
I see, cool. Maybe I just like React because I think I was writing a lot of frontend stuff around 2008, and found it frustrating, and then went back to it a few years ago, and was just impressed how much things had evolved in the decade since.
I love React, but I don't like testing React apps.
I was trying to do some typing stuff recently, actually it was with your student Danny, and I was really wishing that Python's typing worked a little bit more elegantly, especially in the scientific computing domain.
Incentives and maintaining research codebases
I felt like when I was doing research briefly, the code bases were truly messy in a way I've never experienced in industry — this may be a long time ago — do you do things in your lab to keep things maintainable?
Maintainable, as like students come in now and they need to do various research projects. Are you able to find time to clean up code and eliminate tech debt and things like that?
I think we're probably better than most, but never as good as I'd like to be is probably a reasonable answer. I'm at least aware of the existence of things like unit tests, unlike many of my colleagues in our field.
Yeah, it is a mess. Again, it comes back down to loss functions and incentives, right?
When we write a grant, there is no imperative — as much as I think it'd be great — to say, "By the way, one of the outcomes if you're writing code has to be that this is going on GitHub and that it's going to have like a CI/CD like a Travis attached to it so that when pull requests come in, you know whether they're going to be working or not."
There's none of that at all. So if you do any of it, you're doing it out of the goodness of your heart at zeroth order, but as you know, at first order it's because you're doing it to help yourself in the future.
Oftentimes in a research context — and this gets back to a question you were asking about, "Do you need to hire ML people to work with massive amounts of data?" — what I was going to say is that not all of what we do is massive data. Astronomy has a lot of data, but we have only a small number of labels, for instance.
It's a big data problem, but actually a tiny number of labels for the kinds of stuff that I'm interested in, or zero labels. So how you do one-shot learning is a really interesting kind of problem in a physical context in the presence of noise and uncertainty and model uncertainty.
There's lots of questions that we ask in the context of ML that are actually kind of small-ish data problems, or they're large computation problems because the forward model is extremely expensive and requires a supercomputer, but the amount of data we're dealing with is thumb drive-level.
But because of that, we tend to atomize our activities around projects, around papers. I write a paper with a student, we figure out a cool new thing to do in the machine learning context, and unless that is going to be a major new widget that gets plugged into some new facility or existing facility, then it's just out there in the world, and people can write papers saying their scaling curve is better than my scaling curve, and we can have an argument at a conference one day.
That's sort of the end of that code base, right? Whereas, as you know, in the industry world, you're generally not writing code as a one-off and then just casting it aside. So the incentives there to keep things maintainable, keep things up to the latest versions of Python and blah, blah, blah, they just really aren't there for most of what we do.
There is a subset of what we do where it absolutely has to be battle-tested because more and more people are going to be downloading it and using it. I tend to see those projects as extremely exciting, but there's not a lot of, I'd say, astronomers who have the experience with full CI/CD pipelines and in production dev ops that I've been lucky to have in my career.
ML4Science's tech stack
Let me ask you this, what does your lab's tech stack look like? Are you using PyTorch? What's your standard tooling? Are you on Python 3.7?
It's actually pretty agnostic as students have come in, because students who are interested in ML tend to gravitate towards me, and I naturally gravitate towards them. That's how, I guess, gravity works.
I've been agnostic to whether it's TensorFlow-land or PyTorch-land. I think that's becoming less and less important as TensorFlow has evolved more towards the PyTorch way of thinking about the problem.
If you said, "Build me an ML thing right now," I'd probably start in Keras just based on my own past experience. But obviously I'm looking at code with PyTorch, and PyTorch Lightning I think is — from a teaching perspective — that's...the last time that I had to teach some ML, I was doing it in PyTorch Lightning.
Although, I had a notebook in Keras and I reproduced the same thing in PyTorch Lightning. Of course, we had Weights & Biases there as well for monitoring.
Nice. That really warms my heart.
I've been introducing a new cohort of people to your product.
That's obviously top of the stack. It very much is a Pythonic world now.
As I was saying before, in this other large project, which is called SkyPortal, that we're using as an interaction platform — with hundreds of people now using it on a daily basis, looking at real data as it's flowing in and interacting over individual objects — that tech stack is obviously more complicated.
There is a component of it which is slightly external to the stuff that I built in my group — but is part of our project — which is more or less a large MongoDB engine that's dealing with terabytes of data, and there's a bunch of ML plug-ins to that that run in real time. And that's, I think, using TensorFlow. Then what we've built is essentially a Tornado-based, API-first backend and it attaches to a really large Postgres database, and on the frontend is React.
Uncertainty quantification in a sensor-based world
Well, we're almost out of time, maybe we're even overtime, but we always end with two questions that I'd love to ask you. The second-to-last is, basically, is there a topic in ML that you think should be studied more than it is? Is there something that you would look into if you had extra time on your hands?
Yeah. I mean, there are a lot of things I wish I had more time for.
Where I think there needs to be more work in the ML world is around UQ, uncertainty quantification. Astronomy and physics work in a world that's sensor-based, fundamentally, in terms of our observations. Because it's sensor-based, there's noise.
So, unlike in the AlphaGo-Atari world where every pixel has a perfect measurement, if you take an image of the sky or you measure some time series, there's noise associated with it. And because there's noise and because there's a finite amount of training data, if you build models off of that, you get uncertainties in the models because of its lack of expressiveness or its overgeneralization or overfitting.
Then, you also have a source of uncertainty in what it is that you're trying to understand, just because fundamentally you don't have a perfect measurement; your signal-to-noise is imperfect. I see some of that research, again, coming out of the ML world, but I see some of the stuff I'm most interested in as coming out of the physics-and-astronomy-meets-ML world.
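One common, cheap way to quantify model uncertainty from noisy sensor data is a bootstrap ensemble: refit the same model on resampled data and read the spread of predictions as an uncertainty estimate, which naturally grows when you extrapolate beyond the training domain. A toy sketch with synthetic measurements, not a method attributed to Josh's group:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy "sensor" measurements of an underlying linear signal
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)  # measurement noise

def ensemble_predict(x_new, n_models=200):
    """Bootstrap ensemble: spread of refit predictions ~ model uncertainty."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, x.size, x.size)          # bootstrap resample
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds.append(slope * x_new + intercept)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)       # prediction, uncertainty

# x=5 is inside the training domain; x=12 is an extrapolation
mean, sigma = ensemble_predict(np.array([5.0, 12.0]))
```

The interesting part for the sensor-based sciences is combining this model uncertainty with the known measurement-noise model, which is exactly where the gap in current ML research lies.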
I'd love to see more of that more broadly.
I think it's partly our fault as domain scientists for not coming up with the equivalent of grand challenges like protein folding, where if we had this, we would be able to make great strides. We need to have not just benchmark data sets for other fields to be playing with, but we also need to be really clear about some of the important questions that we're asking.
I think in the end — back to a lot of what we were talking about throughout this whole interview — doing inference and doing interpretability on the models that we build requires a fundamental understanding of the noise model of the data. And without that, nothing of what we do is going to be believable.
Why it's not good to always get an answer
I guess that's a good segue into my final question, which is, when you look at making the machine learning models actually work for you — like actually do something useful — what are the big challenges that you typically run into?
Well, it is a good segue from the previous one, because we are struggling, I'd say, as a community with recognizing that there is this large algorithmic toolkit that has been developed in the computer vision / NLP world that we could just take, make a couple modifications to, and do what we were already doing better, faster, and at scale. And as I was sort of arguing through the middle part of the interview, that isn't where I think the biggest revolutions are going to come from, or at least I hope that's not where they come from, if ML is going to wind up being involved.
One of the harder problems is articulating what are the really hard problems in astronomy that can only be solved with new ML tools or new ML innovation. We're all working on it in different ways, we all have our different biases, I think we may wind up getting there.
The other one is maybe more practical, which is that it is very hard to put machine learning into practice. It's easy to write a paper on machine learning and convince a referee that you're doing pretty well. Maybe release some code. Maybe have the referee kick the tires on that code. That's pretty much where we're at as a community.
But, trying to get it into a real workflow that affects real people's lives on the other side of that, there's not a lot of us that have experience with it. No one's really trained to do it well. So most of the time when it's done, it's done in an ad hoc way, you know, leveraging some understanding of how software engineering is supposed to work, but as you know well, machine learning in production is a very different beast than ordinary software in production.
I don't think as a community, we fully grasp how hard it is. The other side of that, of course, is that because machine learning is so exciting to so many, we're starting to train a number of students that have kind of just enough knowledge to be dangerous. But because again everything looks like a nail when you've got a new hammer, a lot of people, I think, are going off hitting nails that they ought not to be.
One of the things that I always say when somebody says, "What's the worst thing about machine learning?", is I always say, "It's because you always get an answer."
Especially in the context that we're looking at, if we always get an answer and we're getting data that's outside of our original domain or some notions of concept drift or something because the instrument is changing, we don't have any guardrails against that.
Luckily, unlike in many of the fields that your listeners work in, if we make a mistake, people don't die, and we don't blow up billion-dollar facilities, and things like that. So we live in a little bit of a nice sandbox where the mistakes that we make may have implications for lack of good resource allocation. But we still could wind up making statements about how the universe works that are fundamentally wrong because we don't know enough about what's happening under the hood.
Josh, thank you so much for your time. I really appreciated it, that was super fun.
This is great. Thank you, Lukas. Great questions.
If you're enjoying these interviews and you want to learn more, please click on the link to the show notes in the description where you can find links to all the papers that are mentioned, supplemental material, and a transcription that we work really hard to produce.
So, check it out.