Chris Anderson on Robocars, Drones and WIRED Magazine

Chris shares his journey from playing in R.E.M. and getting interested in physics to leading WIRED magazine for 11 years. His fascination with robots led him to start a company that manufactures drones, and to create a community democratizing self-driving cars.
Angelica Pan

Listen on these platforms

Apple Podcasts Spotify Google Podcasts YouTube Soundcloud

Guest Bio

Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor-in-Chief of WIRED magazine. He's also the author of the New York Times bestsellers "The Long Tail," "Free," and "Makers: The New Industrial Revolution." In 2007 he was named to the Time 100 list of the most influential people in the world.

Show Notes

Topics Covered

0:00 Sneak peek and intro
1:03 Battle of the R.E.M.s
3:35 A brief stint with physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 Perspectives on "The Long Tail"
20:47 Getting into drones
25:08 "Take a smartphone, add wings"
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris's hope for robocars
40:54 Robocar hardware, software, sensors
53:49 Path to singularity / regulations on drones
58:50 "The golden age of simulation"
1:00:22 Biggest challenge in deploying ML models

Transcript

Note: Transcriptions are provided by a third-party service, and may contain some inaccuracies. Please submit any corrections to angelica@wandb.com. Thank you!
Chris:
So how do we make it so that more people can engage with self-driving cars without working for Google or Waymo or whatever? And the answer is you take the essence and you reduce it to a unit that anybody can have access to, exactly as we did with drones. I didn't have a Predator, so I made one out of Lego and foam. And I didn't have a self-driving car, so I made one out of toy parts and a Raspberry Pi. And so what you're seeing is this incredible diversity of people who are engaged.
Lukas:
You're listening to Gradient Dissent, a show about machine learning in the real world. And I'm your host, Lukas Biewald. I knew Chris, originally, or knew of Chris as the Chief Editor of Wired and the author of The Long Tail, but it turns out that, in the last decade, he's gotten super into drones and started a company around DIY drones and now works on 3D robotics and DIY robot racing. So it's a super interesting conversation and I think you'll enjoy it.
Lukas:
Chris, it's such an honor to meet you and you have such an interesting kind of arc to your career. Before we get into the stuff you're doing now, could you kind of tell us about the highlights of what you've done before the robot stuff?
Chris:
Sure. It looks really chaotic and random, but every step made sense at the time and possibly, if I do my job here, I can make it make sense in retrospect. So I was a terrible student, essentially failed out of high school and failed out of college, then played in punk rock bands for most of my 20s, working as a bicycle messenger.
Lukas:
Wait, is that right?
Chris:
Yeah.
Lukas:
Really?
Chris:
Yeah. The best story is that I was in REM.
Lukas:
No. Wait. Really? Wow.
Chris:
Well, there's a little bit of a footnote to that, which is not the REM. No, I was in a band called REM in Washington, DC. We were really good. We were about to release our first album and our manager said, "It's the weirdest damn thing. There's this other band called REM, but they're releasing their album on the same day, but don't worry. They're from Athens, Georgia. How good could they be?" And so we thought it'd be really funny that we, sort of the famous, big city REM, would invite the little country REM up to Washington, DC for a battle of the REMs and the winner would get to rename the loser, which they agreed to. And they came up and we played a joint record launch party.
Lukas:
Wait. So this is early '80s. Right?
Chris:
Yeah. I guess probably about '83-ish.
Lukas:
'83?
Chris:
Maybe '85-ish, something like that. And so we got the famous 9:30 Club in Washington, DC. So we played and then we flipped a coin to see who goes first. We went first and we played good sets. We got decent applause. We went to the bar to celebrate our inevitable victory. Then they came on second and their first song was Radio Free Europe, which was their first single.
Lukas:
Yeah.
Chris:
And our jaws were on the floor and our beers unfinished and we realized we were completely sunk. And they were great, as you might imagine. Won, as you might imagine. And Mike Mills, the bass player, stayed around just long enough to rename us Egoslavia because we were so arrogant to think that we would win. And we released our album under that name and the rest is history. Anyway ...
Lukas:
Wow. That's amazing.
Chris:
Yeah. So that was my little sort of brush with fame. But yeah, so complete fuck up student. Oh, sorry. We have to beep that. But eventually, in my late 20s, I was like, "You know, I don't think this punk rock thing is really working out for me. I should probably use my brain again." And as a teenager, I'd been thrilled by physics and I got the Feynman Lectures on my 16th birthday, which was ... I read those instead of going to school. And so I went back to college and I decided, at this point, I had so much to prove that I was going to do the hardest thing possible, which was physics.
Chris:
And so I got a degree in computational physics, which was a new thing at the time. And the job of all of us physicists, those days, was basically to understand the nature of matter as we go closer and closer to the Big Bang, so higher and higher energies. And that means bigger and bigger and more expensive particle accelerators. And we were all sort of queued up to work on something called the superconducting supercollider, which was going to be in Texas. And the problem is the cost of the collider kind of scales with the energy it produces and went from $8 billion, to $16 billion, to $19 billion and then Congress canceled it.
Chris:
And that was it. There were no more interesting experimental facilities in the United States and it was all going to be queuing up for CERN, the LHC in CERN, Switzerland. And I realized I could see my career. I was going to be an assistant professor at Iowa State, waiting for my experiment to run at CERN. And 20 years later, it would run and it would probably fail and I would be author 300 on a paper about an experimental failure. And I was like, "That sucks." And I wasn't even very good at it. So it just was time to move on.
Chris:
So I went to the adjacent space, which was the science journals, Nature and Science, to write about science rather than be a scientist. And then went from there to The Economist to lead their tech coverage. One of the things that we learned from that generation of physicists who just, basically, their careers vaporized with the SSC, was that, although physics was not going to be our future, we had accidentally created the internet, as physicists.
Chris:
So the internet, as you know, was created largely at the big physics research facilities. The web was created at CERN, a physics lab. And we, as physicists, had the only big data out there. We were the only people doing big data because we had all this data coming from the particle accelerators. So we had these skills, big data and internet. And so when this generation vaporized to the winds, most of them went to Wall Street to become quants, which was the next source of big data. And the ones who didn't do that went to sort of create the emerging internet industry, which is kind of what I did as a writer, moving media onto the internet.
Lukas:
Before you go further, getting a PhD in quantum physics is no joke. I have a lot of friends who feel stuck in academia and have trouble getting out. Even though the careers available to physicists, for example, are quite good, I think most people feel a little bit like failures because, inside of it, you're so funneled through this escalator to success. I mean, you speak so rationally about it, but actually I feel like most people aren't able to make that leap. Do you think it was your perspective of having not jumped from undergrad to grad school or something else?
Chris:
Yeah. I mean, to be clear, I don't have a PhD. I dropped out of the PhD program. Not even ABD. I didn't even get that far, but if you love physics, it's kind of heartbreaking to see what's happening now. So you're inspired by the greats, but like all scientific disciplines, you need theory and experiment to be matched within a limited amount of time. So a theory comes out and you want the experiment to be able to falsify it or not within, say, five years. If that gap grows, then theory becomes unmoored from reality, and it becomes almost like poetry, and now it's the coolest theories, the ones that are best told or the ones that spark the imagination, that win. And it's almost like metaphysics. It's no longer physics. It's almost philosophy.
Chris:
And that's a really weird place for a scientific discipline to be. And I think, for the people who stick with it, all they can really do is either line up for an experimental facility and see you in a generation, or go into theory. And it's seductive in that it's math, but it's not real. And I think you can really get lost there. It's almost religious. There is a slight ray of hope, though, in cosmology: rather than having terrestrial physics facilities, we use the stars to create the energies and to observe them.
Chris:
So we're getting much, much better at using astrophysics as an experiment, but it can't do everything. So I'm not sure I answered your question, but basically, if you fall in love with physics, what you get is a really good grounding in statistics and math, but it's not a great career and it's probably best to use that grounding. And there's plenty of physicists out there doing good work in machine learning and elsewhere. So that's why my degree was actually computational physics, which in retrospect was more about compute than it was physics.
Lukas:
I see. Interesting. But then you sort of left that whole thing to be a journalist.
Chris:
Well, yeah. I mean, again, it was stepwise. My parents were journalists, so it was kind of like the one thing I was sure I was not going to do was journalism, but again, Science and Nature are scientific journals. So it wasn't like grubby newspapers. And then The Economist, again, everybody was a PhD of one sort or another. Most people were. So it really felt like you were part of extended academia. Then moving to Wired, to take over Wired in 2001, that was the first leap into traditional media owned by Conde Nast, which owns Vanity Fair and The New Yorker and things like that, so traditional media, but they bought Wired. They hadn't created it. And Wired was created largely as an evangelical bible of the emerging internet.
Chris:
And one of the reasons I'd left science was because, in '93 when Wired was launched and the internet was just forming, I wasn't sure what it was. Again, we thought it was just a way to telnet into the Cray at Los Alamos. And then this magazine comes out with these Day-Glo colors saying, "No, this is a cultural revolution. This is going to change the world. This is going to change everything." And it just blew my mind. I suddenly realized this thing I was kind of good at actually had these big implications. That dictated the direction of my career. And so when the opportunity came to lead it, I was like, "Yeah, this is the religion I believe in."
Lukas:
Well, that's funny because 2001 is a really interesting year. Right? Was this pre bubble collapse or post bubble collapse?
Chris:
No, post. It was the best and the worst time. So the bubble collapsed in March of 2000.
Lukas:
2000.
Chris:
Yeah.
Lukas:
Got it. Right, right.
Chris:
So at that point, most of the world was saying, "This is a sub-prime mortgage. This is a hoax, perhaps even worse," but you had to believe that the internet was not the stock market, that there was something real at the core of it, that the bubble was a finance artifact but the underlying trends were real. And it was a very unpopular and somewhat minority view, at the time, that the internet was real, but if you were to bet at that time, as I did, that the internet was real and the stock market thing was a stock market thing, then you're buying at the bottom, essentially.
Chris:
So I don't think I would've been offered the opportunity to take over Wired if everybody knew it was the hot place to be and I wouldn't have been able to hire the people I did. And as I say, the best time to take over when you're not particularly experienced with this kind of stuff is at the bottom because you can hire people. Your lack of success is cloaked by the market's lack of success. It's impossible to succeed in that environment, so no one could tell whether your failures are yours or exogenous ones.
Chris:
And then, thirdly, once things start to pick up again, year-on-year growth looks amazing. So it works out really well, but it was a very countercyclical bet, at the time. And if you look back at underlying internet adoption trends, you almost can't see the bubble bursting. It was really isolated to the stock market and all that capital created a huge amount of infrastructure, which we still enjoy today.
Lukas:
Interesting. So what about The Long Tail? How did that come about?
Chris:
Yeah. Thank you. When you take a physicist, basically, at heart and you stick him in media ... I'm not trained as a writer or as an editor, didn't have any particular interest in media. What I was really interested in was the story, but I'm a nerd. So what I'm going to do is I'm going to try to do research about the story. And so, not trained as a journalist, I was trained as a data analyst. And so I was like, "Well, something important is going on in the server farms of Amazon and Netflix. We can probably see it as a lens on human behavior in a way that we never have before." We're basically instrumenting society in a way we never had before. And this is obvious today, but it wasn't at the time.
Chris:
And I said, "I bet, if I could get that data to see how consumer preference actually looked at scale, I bet it would be interesting. I bet we'd learn things that we weren't seeing with the ..." I don't know, Department of Commerce reports or the Walmart quarterly earnings. And so I asked. I asked the Yahoos and the Netflixes and the Amazons for their data and, weirdly, they had to sign a couple NDAs and anonymize some stuff, but they gave it to me and I just got these massive datasets. I did really dumb stuff. I just stuck it into a spreadsheet.
Chris:
And you had, basically, sales of a set of products. Take music, for example. You get a million tracks and then you rank them in terms of popularity. And I stuck them in a spreadsheet and nothing showed up. The graph was empty and I was like, "Wait, what happened here?" And I said, "Well, let me just cut off the first hundred and just graph from 101 down to a million." And then I could see the line and I realized what happened is that the inequity of the marketplace, the incredible scale differences between the number one track and the number one million track, basically compressed my scale. So the scales are set by the extremes. The Y is set by the number one and the X is set by the number one million. And so the line was basically just flat right along the axis. And until you cut off the head, you couldn't see the tail.
Chris:
And it was simply that dumb thing that I did one night with a spreadsheet that kind of created this, that just shifted my gaze to the right. And I realized there was a lot there that we weren't paying attention to because it was a high number, but low magnitude. And that created the notion of The Long Tail. And then I got the other datasets and they all confirmed that, if you have basically infinite inventory and mechanisms for people to explore that inventory, consumer preference shifts down the tail. Not entirely. We still have network effects and hits and things like that, but basically there's a lot of preference for niche stuff that was suppressed by the scarcity of shelf space and that was opened up by the abundance of online databases and ecommerce, et cetera.
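To make the effect concrete, here's a minimal sketch of the spreadsheet exercise Chris describes, with synthetic power-law data standing in for the real Amazon/Netflix datasets (the numbers are illustrative, not his):

```python
# The "cut off the head" exercise: with a Zipf-like popularity curve,
# the top sellers compress the Y axis until the tail looks like zero.
import numpy as np
import matplotlib.pyplot as plt

ranks = np.arange(1, 1_000_001)            # track 1 .. track 1,000,000
sales = 1e7 / ranks                        # synthetic Zipf-like sales curve

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(ranks, sales)
ax1.set_title("All ranks: the tail hugs the axis")
ax2.plot(ranks[100:], sales[100:])         # drop the top 100 hits
ax2.set_title("Ranks 101+: the long tail appears")
for ax in (ax1, ax2):
    ax.set_xlabel("popularity rank")
    ax.set_ylabel("sales")
plt.tight_layout()
plt.show()
```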
Chris:
What's interesting is the caveat to that story is that I got a bunch of datasets and then, about a year later, AOL was also sharing some data and shared with some academics and somebody figured out that you could de-anonymize.
Lukas:
Oh, the search data.
Chris:
The search data.
Lukas:
Oh, I remember that well.
Chris:
They could de-anonymize the search data. Yeah. It was a shit show. And as a result of that, all the companies stopped sharing data. So there was basically a 12-month period where you could do the work I did. And internally, companies do it all the time, but externally, you can't get the data anymore.
Lukas:
I feel like you named this really important phenomenon. Right? I think it's still called this today. It seems like there's a real skill in ... You just nailed something that's so important.
Chris:
So I have to confess that ... Yes, I did kind of come up with that name and it turns out that, actually, that phrase has been used before. People talk about fat tails, et cetera.
Lukas:
Fat tails. I heard it. Yeah.
Chris:
Yeah. It probably has been used, but I think I called it ... I think I at least invented it in my own head, but I didn't think it was a big deal. It was slide seven in my presentation. And I went to see Reed Hastings, the CEO of Netflix, and kind of walked through my presentation, walked through my analysis with him because they'd helped. And he got to slide six and he says, "There's your headline right there." And so Reed Hastings was the one who actually identified The Long Tail as being the mot juste, if you will, that captured it.
Lukas:
Wow. And I guess it's funny. I feel like that's maybe the thing you're best known for. Are you kind of sick of talking about it?
Chris:
No. You know what? I'm not doing the active research in it, but no. What's interesting is that any sufficiently novel idea will separate the audience into two halves. There's those who say, "No way," and those who say, "Duh." And it almost goes generationally. So anybody who grew up on the internet was like, "Duh. Of course," lots of products, lots of choice, lots of niches are a thing. And anybody who kind of grew up before that ... And I don't mean to be ageist, but it's kind of cultural age, if not chronological. There's a lot of people who grew up, culturally, in the era of blockbusters and Top 40 radio and three TV channels, et cetera, who basically argued that the blockbuster was forever and that The Long Tail was a mirage that probably wrongly gave hope to niche artists that they could somehow make it work.
Chris:
And of course, they're right, in a sense. And it was clear. I never said it was the end of the blockbuster. I said it was the end of the monopoly of the blockbuster. It was clear that the economic rewards would be felt largely by the aggregators rather than the creators. And the cultural rewards were felt by all of us, of course, and the creators obviously, take music or writing or whatever, there's certainly some psychic rewards of being listened to or read, but the fact that the internet exists doesn't mean that a struggling musician is going to be any less struggling.
Chris:
So I think there's a lot of people who just kind of read it as only, "Blockbusters are dead. Therefore, The Long Tail is wrong." And they still say that. And then there's a lot of other people who feel that it's completely self-evident. One of the kind of tragedies is that I wrote the book before YouTube existed. And YouTube, of course, is the canonical long tail marketplace of all cultures and niches, et cetera. And so, on one hand, it's kind of weird. I still have academics who show me that people really don't understand the math of The Long Tail, and they keep citing percentages. It's like, "Well, the top 1% of X still has 90% of the ..." They don't realize that it's 1% of 100 million; a million in absolute numbers is a lot, but still.
Chris:
And I still get this all the time from academics who are like, "The Long Tail's a hoax because top 1%," et cetera. Meanwhile, anybody who ... I should be able to say, "YouTube. Discuss," but for some reason, some people just don't want to see it that way. So I do end up still trying to find evidence of it. Actually, it was a lot less controversial than my next book, Free, which was the economics of free stuff. And obviously, economics is largely focused on monetary economics, and yet there's obviously a non-monetary marketplace out there, as well. I mean, we're doing it right now. You don't charge your listeners for this and I don't charge you for this. We're doing some exchange, some non-monetary exchange that has value, but economists don't know how to measure it. So that one was actually much more controversial.
Lukas:
Interesting. What was the controversy? Did you feel like you got a lot of negative feedback?
Chris:
Yeah. I mean, especially from media. I have kind of a love hate relationship with the media, which is increasingly becoming a hate hate relationship, but the newspaper business was imploding and they largely believed that the canonical error that the newspaper business made was putting their content free on the internet. And had they only set up paywalls at the beginning, that somehow media would be preserved. And people in media take themselves pretty seriously. They feel like they're the fourth estate and protectors of democracy and the only people who can keep us from the mob, et cetera. And so they believed that free content on the internet was destroying this foundation of democracy and that I was helping. I was not helping, if you will.
Lukas:
Okay. So what happened next? Then you got into drones?
Chris:
Yeah. So running a magazine by day, but I was still a nerd by night. So my first nerd thing was The Long Tail and the statistical analysis and writing books. They were largely economic books because, even though I'm not trained in economics, my time at The Economist sort of osmotically gave me some exposure, but still basically I'm a programmer by heart. And as my kids got older ... I've got five kids and my wife's a scientist, as well, and we tried to get them interested in science and technology. As they got older, I was thinking of cool things to do with it and I actually started a site called Geek Dad, which is all about-
Lukas:
Oh. You started Geek Dad? I know Geek Dad. That's awesome.
Chris:
Yeah, although I think Geek Mom is actually doing even better right now.
Lukas:
Nice.
Chris:
Also a spinoff of this. So I started Geek Dad. And largely, the notion was STEM projects that were fun for the kid and fun for the adult, because there were a lot of things that were fun for the adult and not fun for the kid, or fun for the kid and not fun for the adult, and we wanted the ones that got it exactly right. So in the course of doing that, I was like, "Robots. We should probably do something with robots." And the kids were like ... So Lego, I was on their advisory board and Lego sent me the first Lego Mindstorms.
Lukas:
Whoa. Man, wow. That's awesome.
Chris:
It was pretty cool. So they sent me the first Lego Mindstorms, like beta testing. And so I showed it to the kids and the kids were like, "Yeah, we'll do it." And so you follow the instructions, you put it together and it takes all morning. And then you have a little wheeled robot that'll kind of move towards a wall and then back away. And the kids were like, "Are you fucking kidding me?" No, sorry. Definitely bleep that. They did not use that kind of language, but internally, whatever the sort of nine-year-old equivalent of that is. And I realized that Hollywood has ruined robotics for kids because you've got Transformers and this incredible stuff. And meanwhile, real robots just don't, at least most of them, don't really do anything. You're talking about Roomba, et cetera.
Chris:
So the gap between the Hollywood version of robots and the prosaic reality was such that it was really hard to get them excited. So I thought, "Well, what would be cooler than a rolling robot?" I thought, "A flying robot." And so I'm like ... I don't think I actually know what a flying robot is. Astro Boy or something, I'm not sure. So I literally googled, "Flying robot," and the first result was drone. And I was like, "Huh, I hadn't thought about it. I guess a drone is a flying robot. Wait, what's a drone?" So I googled, "Drone," and a drone is like a-
Lukas:
Wait, what year is this? This is hard to imagine.
Chris:
This is '96, '97.
Lukas:
Got it.
Chris:
Sorry, sorry. 2006, 2007.
Lukas:
Okay, wow. So drones are not in the zeitgeist yet.
Chris:
No. Well, drones were in the zeitgeist as a military thing.
Lukas:
I see. Right.
Chris:
But there were no consumer drones. You couldn't buy one. I know, I know. It seems so crazy now, but at the time, drones were like a Predator that shot Hellfire missiles, et cetera. It was really a purely military thing.
Lukas:
Right.
Chris:
So I googled, "What's a drone?" And a drone's basically a plane with a brain. It had an autopilot. And I'm like, "Okay. Wait. What's an autopilot?" And you googled the autopilot and it's basically sensors and compute and it figures out which way is down, which way is up, GPS, et cetera. Those sensors and that compute, that's kind of what we have here in the Lego Mindstorms box, which came with accelerometer and magnetometer and gyro, et cetera. And I was like, "Let's just do it."
Chris:
And so, right on the dining room table, we built an autopilot out of Lego, stuck it in a radio-controlled airplane and it kind of almost worked. And the kids thought that was mildly amusing for about a minute and I was blown away. I was like, "What just happened? Did we really just build a drone with children on the dining room table out of Lego and it worked?"
Lukas:
Wait. Can I ask you a very ... Just having messed around with drones quite a bit, I feel like you're skipping over the part where the thing keeps crashing and breaking and then you spend an hour putting it back together and it crashes and breaks again. It's something maddening, right?
Chris:
Oh, no. Yeah, yeah. I just told you the bit that got me excited, that put the idea in my brain. The next five years were just horrible, but I couldn't let it ... So basically, what had happened in 2007 was a bunch of things that, in retrospect, seem obvious. 2007 was the beginning of the maker movement. There was 3D printing, Arduino came out, but what it really was was the launch of the iPhone in 2007. So what's in an iPhone? A bunch of things, including MEMS sensors, sensors that were chips. And previous sensors were mechanical. A gyro was literally a mechanical gyroscope and it was just unaffordable, unattainable.
Chris:
And so I call this the peace dividend of the smartphone wars, but basically the components of an iPhone had now been so cheap and available that you could then put them together in different ways and explore adjacent space. So a Fitbit ... Well, the Wii controller, for example, was an accelerometer, a MEMS accelerometer. The Fitbit guys got a Wii controller. And just like I got a Mindstorms set, "Huh, what else could I do?" They got a Wii controller, opened it up, saw the accelerometer and thought, "What else can we do?" And they came up with Fitbit.
Chris:
And so there's a bunch of people who were looking at the components that came out of smartphones and thinking, "How do I recombine them to create something new and transform an industry?" And so that's what we did. Rather than building drones the way aerospace had, basically taking an airplane and subtracting the pilot, we said, "Take a smartphone and add wings." And that bottoms-up approach was completely radical and transformative, and initially it was horrible. I mean, nothing worked. They crashed all the time, but because they were small and foam and cheap, nobody got hurt.
Lukas:
Right.
Chris:
And because they were small and foam and cheap, we could actually build a community and we got tens of thousands of people contributing and beta testing for all the right reasons. And we innovated, collectively as a community, innovated super fast so that we went from Lego, to foam, to plastic, to basically dominating the drone world, including becoming the biggest drone producer in North America five years after that with no funding. That all just happened and it just kind of exploded out of nowhere. It was kind of like the way the internet took over the telecom sector or PCs took over compute. This is a bunch of amateurs with open source software and hacked together stuff, basically took over the future of aerospace with classic Gandhi stuff.
Chris:
First they ignored, then they laughed, then they fought, then they lost. And today, it's pretty evident that the future of aerospace looks unmanned, it looks electric, it looks more like Silicon Valley than it does like Boeing or Airbus. Just like SpaceX did to the United Launch Alliance, the Silicon Valley drone model seems to be the future of aviation everywhere.
Lukas:
That's so cool. And then, wait, how did you get into racing autonomous robots?
Chris:
Right, right. Okay. I started with the hobby, then industrialized my hobby. The drone community turns into a company, the company gets big, and now I'm running a company, which is all well and good, but again, still a nerd. Still wanting to get my hands dirty. Drones, at this point, this is now 10 years on, so this is ... What year are we in? This is 2017 or so. So at this point, drones are kind of a solved problem. It was really hard for a while, the Kalman filters and building robust, reliable systems and connecting to the internet and the data payloads and the computer vision, all that kind of stuff. It was really hard for a while, but now it's kind of solved. And I'm always looking for some unsolved problem, something that's challenging.
Chris:
And you would think that drones, as a 3D problem, would be harder than cars as a 2D problem, but they're not. And the reason is that you can get away with all sorts of slop up there in the air. The air is largely empty. You have GPS. And so we didn't really care whether we were a meter off. We just basically had a GPS. Position and pose were kind of given to us. Pose was given to us by the autopilot. It was hard to get there, a lot of work to figure out where down is in an inertial frame. And then position is given to us by GPS, just for free.
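That "figure out where down is" problem is classically solved by fusing sensors. As a hedged illustration (not Chris's autopilot code), a minimal complementary filter blends a gyro, which is smooth but drifts, with an accelerometer, which is noisy but knows where gravity is:

```python
import math

def complementary_filter(pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """One update step estimating pitch (radians) from IMU readings."""
    pitch_gyro = pitch + gyro_rate * dt    # integrate gyro: smooth, but drifts
    pitch_accel = math.atan2(ax, az)       # accel: noisy, but gravity = "down"
    # Trust the gyro short-term, the accelerometer long-term.
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel
```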
Chris:
But you can't assume that you have GPS with a car. You often don't. So you need to establish position some other way, and you also need a level of precision that's a centimeter or less because there's lots of clutter on the street. And so it basically became a computer vision problem. So drones were an inertial problem, basically. Cars are a computer vision, deep learning problem. And computer vision and deep learning were just less advanced than classic control theory.
Chris:
So it was an opportunity to go deep on computer vision and deep learning and kind of get my brain going again. And once again, DIY Drones led to an industry. So we're like, "What should we call it? DIY Robocars," because, I don't know, we hadn't figured out a name for autonomous cars yet. I went with robocars. "And let's do it again. This time, I'm not going to screw up my hobby by turning it into a company. I'm just going to leave it a hobby, but let's get this flywheel going again." And once again, we had the enabling technologies, which were finally ready. We had good compute in small form factors. We started with Raspberry Pi 3s and 4s and then Jetson Nanos.
Chris:
The job is to keep it affordable, to democratize the technology. So we put a limit of $400. Nothing could cost more than $400. This is kind of what it looks like.
Lukas:
Oh, no way.
Chris:
Yeah. This is a variant of it, but this is just basically an RC car chassis with a ... This one happens to be a Raspberry Pi 4 on the top.
Lukas:
Oh, nice.
Chris:
And a camera. This one also happens to have an Intel RealSense, a T265, which I'm playing with right now, but basically that's all you need. You need a camera, you need a Raspberry Pi, and you need an RC car.
Lukas:
Ooh, can I show you ... I've got one that I ... Well, this is not yours, but I made one that's kind of similar.
Chris:
Yeah.
Lukas:
It's a Raspberry Pi 3 here with a similar camera. I guess the chassis is a lot crappier than your chassis there.
Chris:
Well, this one's actually not that good. I probably fiddled with it a lot. I added a wheel encoder. Sorry, to nerd out a little bit, the Intel RealSense T265 is a really interesting sensor. It gives you basically position. It's a visual SLAM sensor, so it gives you position, but it's a lot better when it's matched with a wheel encoder. Then it actually knows where it is. So it's doing it all visually with IMU and stereo vision, et cetera, and it records what it sees as you follow a path and then tries to replicate it again, all visually, but it can tend to drift over time.
Lukas:
Wait. Can I see that sensor again?
Chris:
Yeah. It's this one right there.
Lukas:
So that's two cameras?
Chris:
It's two cameras and an IMU.
Lukas:
What's an IMU?
Chris:
An inertial measurement unit.
Lukas:
Oh, I see.
Chris:
It's a combination of accelerometers, gyros, and a magnetometer that gives you your orientation.
Lukas:
Gotcha.
Chris:
So your phone has one in it. So this one's using a framework called Donkey. DIY Robocars is the community, but the actual project that's mostly used is called the Donkey Car. I would call it an MVP of self-driving cars: it's end-to-end deep learning and it works in the real world and in simulation. The basic model is behavioral cloning. So what you do is you drive it around a track with a PlayStation controller and it records the video, samples the video as stills as it goes around, and then matches those with the inputs from your controller. And so you now have a pair: basically, here's what the camera saw and here's what the driver did. And you send those out to the cloud and you run TensorFlow or Fast.ai or whatever you're using, and you come back with a model, an inference layer.
Chris:
And then the model runs locally. So we train in the cloud or on your PC and you create a model, then the model runs locally, and then you switch into auto mode, and then it drives by itself by simply doing what you did, more or less, in the training session. So you just drive around three or four laps, maybe go clockwise, counterclockwise, little domain randomization. And it should learn how to drive. Now, that's one technique. In the physical world, that's the easiest way to train it.
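As a rough sketch of what that behavioral-cloning loop looks like in code (the layer sizes and names here are illustrative, not the actual Donkey Car implementation):

```python
# Behavioral cloning: camera frames in, (steering, throttle) out.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_pilot(h=120, w=160):
    inputs = keras.Input(shape=(h, w, 3))
    x = layers.Conv2D(24, 5, strides=2, activation="relu")(inputs)
    x = layers.Conv2D(32, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    outputs = layers.Dense(2)(x)               # [steering, throttle]
    return keras.Model(inputs, outputs)

# Stand-in data; in practice these are stills sampled from your laps,
# paired with what you did on the controller at that moment.
images = np.random.rand(1000, 120, 160, 3).astype("float32")
controls = np.random.uniform(-1, 1, (1000, 2)).astype("float32")

model = build_pilot()
model.compile(optimizer="adam", loss="mse")
model.fit(images, controls, epochs=5, validation_split=0.1)
model.save("pilot.keras")                      # deployed to run locally on the car
```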
Chris:
In our virtual environment, in our simulator, you can use different methods. So there, we use things like reinforcement learning and we give it reward functions and all that sort of thing. COVID really pushed us towards simulation. The exact same code works. It doesn't have to be on a physical car. It'll work on your laptop, running in a Unity-based simulator. And so it's been a really good time for us to push hard on the simulation side of the equation.
Chris:
And one of the questions we'll have, as COVID ends and we return to physical races, is how well our models translate to the real world: our sim-to-real gap.
Lukas:
Yeah. Always a challenge.
Chris:
Exactly. And so we're working pretty closely with Unity right now to try to figure out how to improve the probability that our simulator-created models will translate well. And so we think a lot about domain randomization, but one thing ... It's hard to remember, but on this car, that camera's 12 inches off the ground. Try putting your head 12 inches off the ground and see whether you can detect anything. Everything's so distorted, and with reflections and shadows, it's really hard to see the world from there. And so what we're trying to do is ... Simulators are too perfect. It's perfect information. We can create any level of resolution. They don't have motion blur. So we're actually trying to make the simulator worse.
Chris:
And one of the problems we have here is that you'll train on the track and, on your own, it works great, and then, during the race, the crowd comes and now you have spectators all around the track. And now you have all these legs and it completely throws off the model. And so we're actually modeling people and randomly putting people around the track to train the model to ignore that. And we're trying to figure out, what is it really looking at? Which color channel, what contrast? What do shadows do? And we're trying to understand better how to robustify the model so it does sim-to-real well.
Lukas:
Man, what a cool project. I have so many questions. Is it in the scope of Unity? I should probably know this, but I really just don't. So I think of Unity as a graphics company. Does their engine also model physics?
Chris:
Yeah. They've really ramped up the robotics side. So you think of them as a game engine. And of course, they're good at that, competing with Unreal. They're kind of open source and Unreal is less so. But they're really pushing the robotics side. And yes, they do physics. They use the Nvidia physics engine, PhysX, in the background.
Lukas:
Cool.
Chris:
And so it's quite good. And they have a whole team right now focused on robotics. They were initially focused on things like segmentation and classification. So let's say, for example, you want to model a factory or a warehouse or the shelves of the 7-Eleven, et cetera. How do you identify an object that's a carton of milk, rotated, in bad lighting? How do you make sure you can identify it well? And so they focus a lot on that, taking objects and sticking them in virtual environments and creating a lot of noise and training the system to understand that.
Chris:
They're also used a lot in full-size self-driving cars because they create beautiful photorealistic environments, and that's important as well, but what we're working on with them is video. I mean, yes, we screen grab the video, but the image moves. And so there's a correlation between the previous image and the next image. So that includes things like motion blur, because our cars go really fast. They go probably 20, 30 miles an hour, but scale speed is 150 miles an hour. And when your camera's a foot off the ground, there is a lot of motion blur and things like that.
Chris:
So we're starting to model that. We want to procedurally generate tracks so that we can do domain randomization with tracks, making sure to give the tracks certain parameters that at least don't break the physics. So you could create a virtual model that can handle any track, but in the real world, you've got things like physics, like the traction budget of your wheels, et cetera. So we have to model at least some physics of the tracks realistically.
Chris:
And basically, in your training, you want to be able to say, "Here's my model, here's my code, here's my hyperparameters," whatever, stick it into the simulator, ideally in headless mode, so just running in the cloud, and, "I want you to run a thousand iterations with randomization turned on." So a thousand iterations of randomizing lighting, shadows, motion blur, surrounding objects, textures. Randomize the courses, as well. Go clockwise and counterclockwise. Change which track you're in at any point. Then add other cars that are also random.
Chris:
And so when you think about that, when you think about the industrial scale of just scenarios you can create, it gets really exciting. And so that's where Unity is focused right now.
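A hedged sketch of what such a randomization sweep might look like; the parameter names and the launch hook are invented for illustration, not Unity's or DIY Robocars' actual API:

```python
import random

# Each entry draws one randomized aspect of a simulated run.
RANDOMIZERS = {
    "lighting":     lambda: random.uniform(0.3, 1.5),    # brightness scale
    "shadow_angle": lambda: random.uniform(0, 360),      # degrees
    "motion_blur":  lambda: random.uniform(0.0, 0.5),
    "spectators":   lambda: random.randint(0, 40),       # people around the track
    "direction":    lambda: random.choice(["cw", "ccw"]),
    "track":        lambda: random.choice(["oval", "figure8", "procedural"]),
}

def run_sweep(n_iterations):
    for i in range(n_iterations):
        scenario = {name: draw() for name, draw in RANDOMIZERS.items()}
        # launch_headless_sim(scenario, model="pilot.keras")  # hypothetical hook
        print(f"run {i}: {scenario}")

run_sweep(1000)   # the "thousand iterations" of randomized scenarios
```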
Lukas:
Cool. What's your hope for this? Is it the joy of making something or is there-
Chris:
As you know, one of the rules of the maker movement is you never ask why, because the answer is always "because we can." My personal thing is that it's just really engaging. It gives me a reason to explore the cutting edge of machine learning and data science and things like that. So I need a reason. I'm probably like you. I can only learn by doing and it gives me a reason to do it. As a community, our nominal reason is to democratize the technology. I don't have a real self-driving car. You probably don't have a real self-driving car. And that ain't right.
Lukas:
Man, well said. I love it.
Chris:
Yeah. So how do we make it so that more people can engage with self-driving cars without working for Google or Waymo or whatever? And the answer is you take the essence and you reduce it to a unit that anybody can have access to, exactly as we did with drones. I didn't have a Predator, so I made one out of Lego and foam. And I didn't have a self-driving car, so I made one out of toy parts and a Raspberry Pi. And so what you're seeing is this incredible diversity of people who are engaged.
Chris:
We have virtual races every month. Two races ago, the number one winner was Japanese ... I don't know what he does, but let's imagine a Japanese engineer. Number two was a French teenager. Number three was a 12-year-old Indian girl from Canada. And then down the line are University of San Diego professors, retired people. It's just an incredible diversity of people who can participate because, if you do it virtually, it doesn't cost anything. You just download some code and run it.
Chris:
And so we're really feeling like we're opening up the excitement of the industry to people who, otherwise, wouldn't have access to it. And some of them are doing it for fun, some of them are doing it to get smart on ... a tangible reason to learn machine learning, and some of them are doing it because they want it to be their next career. So we find we have a lot of people who are mid-career. They're an engineer, whatever, they've got a job, but it's not exciting for them. And this is super exciting. And so it gives them the chance to sort of fall in love with tech again.
Lukas:
And what are the axes along which you can change things? I think one of the challenges with these simulations is that they kind of constrain the hardware a bit. Don't they? How do you think about that?
Chris:
The axes that we don't really mess with are things like cost and danger. So we like to keep them small, we like to keep them cheap. I mean, there are some exceptions and I can get into that later, but by and large, it should be something you can do indoors, something where, if it goes wrong, nobody gets hurt. So that's where we limit it. Beyond that, there really aren't any constraints.
Chris:
So for example, there are a lot of ways to do self-driving cars. There are a lot of sensors that are available. So one of the things that's gotten super interesting of late is that 2D LiDAR has gotten really cheap.
Lukas:
I have one of those, yeah.
Chris:
So you can get 2D LiDAR now for about 80 bucks with a range of about 10 to 12 meters. So we can explore that. Right now, we just use LiDAR for obstacle avoidance because our courses don't have a lot of structure; they're basically just white lines on carpet or on the pavement. I showed you the RealSense sensors. That particular one was position, but they also have one that's depth sensing, which is useful for, again, obstacle avoidance.
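The obstacle-avoidance use is simple enough to sketch. A minimal, illustrative gate over a 2D LiDAR sweep (the data format is assumed, not any particular driver's):

```python
def obstacle_ahead(scan, fov_deg=30.0, stop_dist_m=0.5):
    """scan: iterable of (angle_deg, distance_m) pairs, 0 deg = straight ahead."""
    return any(dist < stop_dist_m
               for angle, dist in scan
               if abs(angle) <= fov_deg / 2)

# e.g. in the drive loop: throttle = 0.0 if obstacle_ahead(latest_scan) else cruise
```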
Lukas:
Sorry. What is step sensing?
Chris:
Sorry, depth sensing.
Lukas:
Oh, depth sensing.
Chris:
Depth sensing. Forgive me. Another one is that we can actually go outdoors and use a drone autopilot on a car and simply navigate by GPS alone. Now, plain GPS is not high enough resolution, but RTK GPS, which uses a base station and a moving unit, is quite affordable and can get you centimeter-level resolution. So this one here pairs with another GPS, a base station that you have locally.
Lukas:
It's interesting. But you're not using any sonar anywhere, huh? Is it-
Chris:
Sonar's really not useful for us.
Lukas:
... too unreliable?
Chris:
There used to be something called the SparkFun Autonomous Vehicle Competition, which is no longer around. And that one was outdoors. And people originally used sonar to do things like avoid the hay bales on the side, et cetera.
Lukas:
Yeah. Right, right.
Chris:
Very noisy. So there is not a sensor that exists that we haven't explored. So yes, we had sonar, but then we would create sonar arrays-
Lukas:
Whoa. Cool.
Chris:
... of 360-degree sonar.
Lukas:
Nice.
Chris:
Then, of course, the sonar's really old school, but the more recent ones are these time-of-flight sensors, these little, tiny time-of-flight sensors. So this one actually was just to compare sonar with time-of-flight sensing.
Lukas:
What's time-of-flight? Is that LiDAR?
Chris:
It's like LiDAR. It shines a light beam out and then measures the time it takes to come back.
Lukas:
I see.
Chris:
So basically, sonar is quite a wide beam and very noisy. The environment can interfere with it. Time-of-flight is much better and cheaper and smaller, et cetera.
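The time-of-flight principle comes down to one line of arithmetic: the sensor times the light pulse's round trip and halves it. For instance:

```python
# Time-of-flight ranging: distance = speed of light * round-trip time / 2.
C = 3.0e8                          # speed of light, m/s

def tof_distance_m(round_trip_s):
    return C * round_trip_s / 2

print(tof_distance_m(13.3e-9))     # ~13 ns round trip -> roughly 2 meters
```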
Lukas:
What about radar?
Chris:
We have radar, as well. Radar is still relatively expensive. Also, radar tends to be relatively broad beam. For a full-sized car that's not a problem; if you want to detect the car in front of you, it's fine for that. But we have other ways to do it, cheaper ways to do it, time-of-flight, for example, because remember, our distances are a couple meters, not tens of meters. So we don't have any need for radar because we can solve it with time-of-flight.
Chris:
Then we have solid-state LiDAR, which, again, is affordable and, mechanically, a little simpler. We do a lot of crashing, so mechanical robustness is a good thing. The spinning LiDAR I just showed you is basically a 2D, planar one. The solid-state LiDAR has kind of a wedge shape. And so you get a little bit more structure that way, but again, the depth-sensing cameras can give you much of the same information and they also give you visual texture information, which is useful on top of that.
Chris:
I'm trying to think what other sensors we play with. Oh, there's a really smart one. So you can do a lot with cameras and one of the winners uses ... So most of these cameras, as you saw, are looking out, looking forward and a little bit down. And we're racing indoors. So what people realized is that, if you know where you are on the track, you have a huge advantage because you know where the curves are. You can go fast on the straightaways and slow on the curves. Basically, you have foresight into what's going to happen.
Chris:
So how do you localize on an indoor track? We have cones at the corners to detect when people are disqualified. And people realized the cones were a sort of signature, a fingerprint, if you will, for the track. And so they would use LiDAR to identify the cones. Now, you can do it optically, as well, because the cones are orange. And so they would basically localize that way. And then a genius guy named Andy Sloane realized that there's another fingerprint of the track, of the course, which is that the lights on the ceiling had a distinctive pattern. And so his car actually has a fisheye lens and the camera looks up, through the fisheye lens, and it can see around it a little bit, but it also sees the ceiling. And it basically just steers by looking at the lights above it, which is absolutely brilliant.
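The optical version of cone spotting is a classic computer vision exercise. A hedged sketch with OpenCV (the HSV bounds are illustrative and depend on camera and lighting):

```python
# Find orange cones by color: threshold in HSV, take blob centroids.
import cv2

def find_cones(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))   # "cone orange" band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) > 50:                          # ignore speckle
            m = cv2.moments(c)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers   # cone pixel positions, usable as landmarks for localization
```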
Lukas:
And you don't consider that cheating? Just any way to hack you-
Chris:
It works great indoors, but now we make them go outdoors, as well.
Lukas:
I see. Nice.
Chris:
And so it'll fail outdoors. We do races in a place called Circuit Launch in Oakland, near the airport. And they just renovated it during COVID and they changed the lights. But yeah, so every trick you can think of. So it's called cone SLAM, by the way.
Lukas:
Cone SLAM.
Chris:
Simultaneous localization and mapping. So cone SLAM and light SLAM. Anyway, I could go down the rabbit hole, but I just wanted to say that we do racing, which is largely about going fast and beating other people, but there are also ways to explore self-driving cars at tiny scale in a city environment. This is one cute version of it. Actually, I'm trying to remember what it's called. We'll put it in the show notes afterwards, but things like this use cameras and little Raspberry Pis. And it's called a Zumi. There, it just told me.
Lukas:
Nice.
Chris:
And you can build a Lego-sized city with stop signs and street corners, et cetera. You can go to IKEA and get these kids' carpets that have cities for toy cars, et cetera, and you can actually run one of these in it and it'll navigate the city. So these things are super ... They use Jupyter notebooks and Python and they're really fun and easy and super cute. You don't have to race to be able to participate.
Lukas:
Its eyes are so evocative, too. I love it.
Chris:
They are. Yeah. It just said, "Find Zumi on your wifi," and then if you go there, it runs a little web server and it's running a Jupyter notebook and you can do things like drive in the town.
Lukas:
What are the people that are winning these things focused on? Is it actually knowing your position and orientation really accurately or is it sort of strategizing your path through the course? What's the challenge?
Chris:
All the above. It's things like racing lines, which is find the ... Basically, racing lines are the shortest path around the track, and going fast in straightaways and then braking at the right time, the classic racing stuff. Localization helps a lot. It allows you to create a strategy. Then there's passing strategies and avoidance strategies and how do you win when you're going head to head, as they always are?
Lukas:
Is drafting relevant at these low speeds?
Chris:
No, it's not. It's not. Yeah. Yeah, it's going 20 miles an hour, but they're small. The biggest challenge, though, and this is one that does not show up a lot in real self-driving cars, is we're going freaking fast. So 20 miles an hour in a one-tenth scale car, that's 200 miles an hour. And so this is realtime robotics. And I don't know how much time you've spent with realtime robotics, but 20 milliseconds is slow. And so our inner loops could be running at a thousand hertz.
Lukas:
So you do inference at 20 milliseconds on a Raspberry Pi 3?
Chris:
Depends. So no, we're not doing 20 milliseconds on a Raspberry Pi 3, but we can do 100 milliseconds on a Raspberry Pi 4.
Lukas:
Right, right.
Chris:
That's sort of your AI loop. Then you might have a motor controller loop that's running faster, if you're running an IMU. The IMU, from which we're just getting the inertial measurements, would be detecting something like drifting. So if you're supposed to be going straight and you actually have some lateral movement, that means that your tires have lost traction and you're skidding.
Chris:
So how do we do it in real time? You need at least, I would say, 30 frames per second. Real cars are not sampling that fast. And if you're going 30 frames a second, you may have to make some concessions. So first of all, our cameras are relatively low res, so we're running at 320. And our models are pretty simplified, might have three or four layers, but no more than that. We're not running a lot of models simultaneously; it's end-to-end neural networks. So basically, it's just pixels come in and commands to the steering go out. So we're not running parallel networks, et cetera.
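To put numbers on that: 30 frames per second leaves roughly 33 ms for the whole perceive-act loop. A hedged sketch of a timing check you might wrap around a drive loop (the three callbacks are placeholders for your own camera grab, model inference, and motor commands):

```python
import time

FRAME_BUDGET_S = 1.0 / 30          # 30 fps -> ~33 ms per frame

def timed_step(get_frame, infer, actuate):
    t0 = time.monotonic()
    frame = get_frame()                  # e.g. a low-res (320-wide) camera grab
    steering, throttle = infer(frame)    # small 3-4 layer net on the Pi
    actuate(steering, throttle)
    elapsed = time.monotonic() - t0
    if elapsed > FRAME_BUDGET_S:
        print(f"over budget: {elapsed * 1000:.1f} ms")
    return elapsed
```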
Chris:
But yeah, these are all great challenges. If you tell somebody, "Keep it under $400 and win," it requires a lot of creative thinking. And you can't just throw compute at it. It's not okay to show up with the kind of stuff you'll find in the trunk of a Waymo. That's cheating. You show up with your Jetson Nano or your Raspberry Pi 4 and then you use some creative algorithm or technique to win. And that's the fun.
Lukas:
Yeah. That's so fun. I mean, a Jetson Nano or even a Raspberry Pi 4, that's no joke these days. It's funny. To me, it's just amazing what we can do.
Chris:
Yeah. I don't know. The Nano right now is 60 bucks or something, the two-gigabyte one, and the Raspberry Pi 4 is about the same. So it's really great, but what's really important is the software frameworks now support them. So TensorRT, TensorFlow Lite, Keras, Fast.ai, they're all starting to think about edge compute.
Lukas:
I just want to say, they've put in so much effort and they're so friendly. I feel like, when I've asked questions, they've just been unbelievably helpful. So I don't know, I feel like I just need to give them a thank you for that.
Chris:
Absolutely. And everyone's doing it. So Nvidia, obviously, they didn't have to come out with a Jetson that costs 59 bucks, but they did. Amazon set up RoboMaker, which is their virtual environment for this. Microsoft is investing a huge amount into edge AI. The Intel RealSense I just told you about, Raspberry Pi, et cetera, and all the Google stuff is focused on edge AI, as well. So the cloud, the core, is one thing, but the edge is completely different in that you have real-world inputs, realtime inputs, realtime outputs. And the devices tend to be small, cheap, power-efficient, et cetera. And so you realize that the internet has always been this way, that it's a combination of the edge and the core and that it shifts. Where's the thinking done? Where's the intelligence? And it's going to be some balance.
Chris:
We got the cloud, we got the core down right, but the edge is an opportunity to basically pre-process a lot of data before you get it. Because we can gather so much data, if we can pre-process it with deep learning at the edge, it actually makes the core smarter, as well.
Lukas:
Totally.
Chris:
So it's really exciting, what's happening right now, not only with deep learning, but also computer vision. I'm a big fan of a project called OpenMV, which is basically ... It looks just like one of these cameras, actually. So we've been talking a lot about deep learning, but computer vision is equally exciting. This is an OpenMV and it's basically, again, a $50 board, but it's a camera and it's got compute onboard and it's basically running OpenCV. And it runs it really well with a Python interface, a fantastic IDE, and you basically just stick this on anything. It can run a car all by itself.
Chris:
And now the stuff that was PhD-level work 10 years ago, edge detection, some simple deep learning networks, object detection, all sorts of transforms, et cetera, is all built into this thing already. And any kid can now use this to do sophisticated computer vision. So actually, cars that use nothing more than this have consistently scored in the top 10.
Lukas:
Wow.
Chris:
And you can literally make a self-driving racing car for less than $100 with something like this.
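For flavor, an OpenMV script is just MicroPython running on the board; something like the following does onboard edge detection (treat it as a sketch; exact calls vary by firmware version):

```python
# MicroPython on the OpenMV board: grab frames and run Canny edges onboard.
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # 320x240
sensor.skip_frames(time=2000)          # let auto-exposure settle

while True:
    img = sensor.snapshot()
    img.find_edges(image.EDGE_CANNY)   # in-place Canny edge detection
```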
Lukas:
So cool. Before I let you go, I'd love to ask you a couple broader questions. I think you watched the Peter Norvig episode and I was really curious to ask him this. You're here, too, as someone who's been watching machine learning for longer than most. And I'm really curious what your perspective is, having sort of seen a long arc of this stuff. I guess everyone must ask these questions, so I feel a little shy asking them, but I'm really curious what you think. When do you think we'll see, for example, autonomous cars working in our life at all times? And where do you think this goes? Do you feel like there's probably fundamental limitations to what we're doing with neural networks now or do you feel like just kind of scaling up what we have leads to singularity-like outcomes?
Chris:
Everything I know about deep learning I probably learned from listening to your podcast because I'm dabbling. Peter Norvig's a legend, but I-
Lukas:
But you were training neural nets back in grad school, no?
Chris:
Yeah, but these were Hopfield nets. And we hadn't really figured out the whole notion of layers and convolution and all this kind of stuff. So it was a real dead end and it was very frustrating. So look, with drones, once we got one drone to fly, I was like, "The sky's going to be dark with these things." They're essentially free. It's done. Think of how great it would be to have total information awareness of our planet. Rather than waiting for the satellites to come by or for the clouds to clear or having cameras in every stoplight, what if we could just sort of have a camera anywhere, anytime, to measure our planet so we could manage it better?
Chris:
So it seemed obvious that there was a missing middle, if you will. We had cameras on the ground and we had cameras in space, and the missing middle was the air, an opportunity to be anywhere, anytime, at higher resolution. It just seemed like a good way to instrument our planet, and yet here we are. There's nothing in the air. I can't believe it. It's been 15 years and we still don't have skies dark with these things. We really don't have any autonomous drones at all in operation, except for the military, like we had back then. So what happened?
Chris:
Well, the problem wasn't technical. The problem was regulatory. The FAA will not allow drones to fly beyond visual line of sight, and won't allow them to fly without a one-to-one pilot on the sticks, like an animal. Basically, the FAA will not allow drones to be autonomous. It won't allow us to break the one-to-one ratio, which means we've achieved nothing, in a sense. Imagine a robot that could only work tele-operated. What have you achieved? You still have one person, one robot, and that's where we are. Drones essentially have to be tele-operated, or at least have someone monitoring autonomous operations, which is even worse because now the operators aren't doing anything.
Chris:
So that was disappointing. It was disappointing for a regulatory reason. And I can understand it and I work with the FAA pretty closely on trying to resolve it, but the question about cars is more about society and regulation than it is about the cars. Can cars be autonomous today? Yes. Can they be autonomous everywhere perfectly? No. Should it be okay for cars to be deployed autonomously in some places where they can be highly reliable, but not everywhere? Absolutely. And companies like Voyage are doing that with retirement communities, closed courses, if you will.
Chris:
So I think the question is, are drones used today autonomously? Yes. Are they overhead right now? No. Am I disappointed there aren't more of them? Yes, but obviously they go where they're needed most. And I presume that self-driving cars ... I think we're setting the wrong standard. Should we have self-driving Ubers in all cities? Probably not. There's not a lot of advantage to it. Waymo's doing a little bit in Arizona, but that's probably not a game changer. Where would self-driving cars be a game changer? I think, actually, retirement communities are a really good example. They're quite empowering and liberating for people.
Chris:
So I think, if you reset and ask, as the technology gets better, will we identify the really useful places where it wants to be, focusing less on the tech and more on the marketplaces and the demand? Will we find those places? The answer is yes. All the questions about when self-driving cars will come, they all kind of come from a technology place, and I think we're in our Silicon Valley bubble. We really need to understand the needs, the use cases, the places that would benefit most from them, and think less about the tech and more about how it's going to be used.
Lukas:
Interesting. Interesting perspective. Thanks. So there are two questions that we always end with. The second-to-last one is: from your perspective, especially on drones and robots, what's one underrated aspect of machine learning that you think people should pay more attention to?
Chris:
I think I mentioned I'm really into simulation and synthetic data. I know you've had a couple of episodes now on synthetic data creation, but I do think this is the golden age of simulation. I work really closely with Microsoft, and if you've used Microsoft Flight Simulator 2020, it basically uses satellite and aerial data to recreate the entire planet photo-realistically, using photogrammetry to create 3D models of the planet, in realtime, with weather and everything as it really is. This is the golden age of simulation, the golden age of rendering.
Chris:
And as a result, we have the opportunity to use these powerful engines to train models better. We talked about domain randomization, we talked about synthetic data, and I'm most excited about that because I feel like we've kind of hit some limits in the ability of humans to train models. Even GPT-3 is limited, as you've mentioned before, by the amount of data on the internet, which sounds like a lot but is never enough.
Chris:
And so I think that we need to think really hard about our synthetic data generation strategies so that we can break through the limits of real data and start training models on things that we can only imagine.
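As one bare-bones illustration of a synthetic data strategy, here is a minimal domain-randomization pass in Python. The jitter ranges and the stand-in frames are arbitrary assumptions; a real pipeline would pull frames from a simulator rather than generating noise:

```python
# Sketch of domain randomization: jitter nuisance parameters (lighting,
# contrast, sensor noise) on each synthetic frame, so a model trained on
# them can't overfit to any one rendering of the world.
import numpy as np

rng = np.random.default_rng(0)

def randomize(frame):
    """Apply random brightness, contrast, and noise to one HxWx3 frame in [0, 1]."""
    brightness = rng.uniform(-0.2, 0.2)
    contrast = rng.uniform(0.8, 1.2)
    noise = rng.normal(0.0, 0.02, size=frame.shape)
    out = (frame - 0.5) * contrast + 0.5 + brightness + noise
    return np.clip(out, 0.0, 1.0)

# Stand-in for simulator output; real frames would come from a rendered
# environment like the flight-sim worlds described above.
synthetic_frames = [rng.uniform(size=(120, 160, 3)) for _ in range(8)]
augmented = [randomize(f) for f in synthetic_frames]
```

Each rendered frame can be reused many times under different randomized conditions, which is how synthetic data stretches past the limits of what you can collect in the real world.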
Lukas:
Totally. Okay. And then the final question is, for you ... you've actually built a pretty sophisticated end-to-end ML system now. What's the biggest challenge of getting that to work, or what's a challenge of getting it to work that people might not expect when you just sort of lay out what you're doing?
Chris:
First of all, I should say I did not build this. This is the Donkey Car team, and there are a lot of people there who deserve credit for it. Tawn Kramer was the originator of the current stack. One thing you should know about end-to-end is that it really is end-to-end. All we have is one channel: pictures come in and controls go out. We're blessed to have things like TensorFlow that'll do that, but once we start introducing other things, like depth sensing and those other sensors we talked about, we're probably going to need multiple parallel networks running.
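As a rough sketch of that one-channel idea, pictures in and controls out, here is a minimal Keras model in the spirit of what Donkey Car-style cars run. The layer sizes and frame shape are illustrative assumptions, not the actual Donkey Car network:

```python
# Sketch of an end-to-end pilot: one input channel (camera frames),
# controls out (steering, throttle). Trained on (frame, control) pairs
# recorded while a human drives the car.
import tensorflow as tf

def build_pilot(height=120, width=160):
    img = tf.keras.Input(shape=(height, width, 3), name="camera")
    x = tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu")(img)
    x = tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(100, activation="relu")(x)
    steering = tf.keras.layers.Dense(1, activation="tanh", name="steering")(x)
    throttle = tf.keras.layers.Dense(1, activation="sigmoid", name="throttle")(x)
    return tf.keras.Model(inputs=img, outputs=[steering, throttle])

model = build_pilot()
model.compile(optimizer="adam", loss="mse")
```

The appeal is exactly what Chris says: one network, one channel, and the whole driving problem collapses into a supervised regression.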
Chris:
Now, should the obstacle avoidance also be running on machine learning, or should that be more classical control theory, if you will? How do we combine classic robotics control theory with deep learning? One's probabilistic, the other's deterministic. How do we merge them? So I think there's some interesting work to do in starting to introduce multiple inputs. Right now we have one input, one output, but of course in robotics it's MIMO: multiple input, multiple output.
Chris:
And I think, if you stick to the $400 limit, being able to do multiple input and multiple output with deep learning across all these channels is super interesting. I don't know whether we're there yet, but that's sort of ... You asked what we've learned. We've learned that you can do one channel in one network pretty easily and it works amazingly well, but it doesn't scale to multiple inputs. And if you really want to start winning competitive races against other cars, and actually do what a human would do in a race, we're going to need to bring in all the channels and sensors and data we can, and that means a different architecture.
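One hedged sketch of how the deterministic and probabilistic pieces could be merged, per the question above: the learned policy proposes controls and a hand-written classical rule can veto them. Every name and threshold here is hypothetical, one possible arbitration scheme rather than anyone's actual stack:

```python
# Sketch: merge a learned policy with classical control. The network
# proposes (steering, throttle); a deterministic safety layer overrides
# the throttle when a depth sensor reports a nearby obstacle.

def classical_override(depth_cm, steering, throttle):
    """Deterministic rule: cut throttle when an obstacle is close."""
    if depth_cm < 30.0:          # threshold is an arbitrary assumption
        return steering, 0.0
    return steering, throttle

def drive_step(model, frame, depth_cm):
    """Learned policy proposes controls; the classical layer can veto."""
    s, t = model.predict(frame[None, ...], verbose=0)
    return classical_override(depth_cm, float(s[0, 0]), float(t[0, 0]))
```

This keeps the probabilistic network in the loop for the hard perception problem while the safety-critical behavior stays deterministic and inspectable.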
Lukas:
Although, the part of that cost that's going to come down is running the neural networks. Right? I mean, I feel like that's the thing that seems to be dropping the fastest.
Chris:
Well, that is good news. If the Raspberry Pi 5 or the Jetson Xavier can do that, then yeah, maybe we can just apply our same technique and say, okay, let's add another network to keep track of the other cars, add a third network to keep track of the sliding, the friction, how the car's actually mechanically moving on the track with the IMU, and then find some way to merge them. That would be super exciting. To do the whole thing at 30, 50, 60 frames per second under $400, I don't think we're quite there yet, but you're right, that's going to be the focus over the next couple of years.
Lukas:
Awesome. Well, thanks so much. It's an honor to talk to you. That was so much fun.
Chris:
This was a pleasure.
Lukas:
Thanks for listening to another episode of Gradient Dissent. Doing these interviews is a lot of fun, and it's especially fun for me when I can actually hear from the people that are listening to the episodes. So if you wouldn't mind leaving a comment and telling me what you think, or starting a conversation, that would inspire me to do more of these episodes. And also, if you wouldn't mind liking and subscribing, I'd appreciate that a lot.