June 22, 2021
EP. 110 — Why Self-Driving Cars Aren’t Coming Any Time Soon with Dr. Missy Cummings
Tesla and other automakers have convinced the public that fully automated vehicles are just around the corner. But what if … they aren’t? Dr. Missy Cummings, AI researcher and director of the Humans and Autonomy Laboratory at Duke, joins Adam to detail the massive gap between Silicon Valley’s promises and the technology’s limitations, and explain the real benefits that might come when we use AI to enhance human capability rather than replace us.
Transcript
FACT-110-20210621-MissyCummings-ACv02-ALT-DYN.mp3
Speaker 1 [00:00:02] Hello, everyone, welcome to Factually. I’m Adam Conover. Before we get started, I just want to tell you about a new project of mine that is out right now. It is a narrative comedy podcast called ‘Edith!’ It’s all about Woodrow Wilson’s wife, Edith Wilson, who incredibly (not many people know this story) practically ran the American government after Woodrow had a stroke. It’s an incredible story from history, and this is a very funny satirical podcast about that untold story. It was created and written by Gonzalo Cordova and Travis Helwig, two original ‘Adam Ruins Everything’ writers and very good friends of mine, incredibly funny guys. It stars Rosamund Pike and me. That’s right, I’m in the show with Rosamund. We had a great time (actually we never met); we recorded it at home in our closets during the pandemic, but it came out fantastically and it is out now. Wherever you get your podcasts, look up ‘Edith!’ with an exclamation point or go to EdithPodcast.com and check it out. Now on to today’s episode. You know, the pace of innovation in the tech sphere in my lifetime has been so swift that we have come to expect that basically anything is possible. Now that you can summon a hamburger from a little rectangle in your hand, you literally have powers they did not have on Star Trek. It is difficult to believe that anything couldn’t happen. Basically, any promise the tech industry makes us, we tend to swallow, even though over and over again those promises turn out to be wrong. The iPhone: very impressive. But so much of the time, in reality, the biggest promises made to us by our technological innovators, and even by people writing in the media, don’t come to pass. We are, as a species, incredibly bad at predicting our future, despite the fact that we feel we know what is just around the corner. There’s this term that’s been going around on the Internet for a while called ‘retrofuturism.’ Maybe you’ve heard of it. It’s the idea of looking at the vision that Americans had in the ’50s of what the future would be like. You can picture it: flying cars, monorails, video phones, very ‘Jetsons’ stuff. We like to look at it and laugh and say, ‘Haha, that didn’t come to pass. People back then were so dumb.’ But today we make almost the exact same mistake. We are constantly swallowing these visions of the future given to us by people who don’t really know. Let me give you one example that really jumped out at me when it happened. Remember Google Glass, those dumb looking glasses with a little piece of glass on the side that Google said were going to take over the world? We’d all be wearing them. Everyone bought this. The release of Google Glass was huge news. People literally thought that we would be walking around with these things on our face, despite the fact that there was no real use for them. No one could describe what you were actually supposed to do with these things, other than maybe see a little number in the corner of your vision telling you how many emails you had. I guess you could take photos with them. They were released. They sucked. Nobody wanted them and they ceased to exist, and that was ten years ago. Yet we are still seeing so many articles about the future of AR, of ‘augmented reality is coming,’ despite us still having no idea what we would actually use these things for in our daily lives. Another great example of this is VR. VR has been constantly touted as the future of entertainment, that we’d all be living inside VR simulations.
It dates back to the ’80s, and then in the last ten years there’s been an immense resurgence of interest in VR with Oculus and the HTC Vive and all these things. But unfortunately, the VR revolution has not come to pass. I mean, you can buy the headsets, but in my case, I currently have two of them collecting dust in a box in my closet because wearing them makes me nauseous and there’s nothing good to do on them. You can play one fun game for half an hour and then you’re like, ‘I feel sick. I never want to do this ever again in my fucking life.’ Now, that’s not to say that these things will never come to pass, that they’ll never go mainstream. After all, it took video chatting decades from conception to eventually become reality. But the fact remains that everyone who up until now has been beating their drum, saying ‘These things are about to become huge and transform society,’ all those people have been wrong. And yet, despite their terrible track record, we end up believing them anyway. We want to believe that there is an incredible future right around the corner that’s going to transform our world, despite the fact that we’re almost always wrong. The examples I just cited are not that big a deal, right? I mean, they’re just consumer tech products. Who cares? Why not believe a little bit, and if you’re let down, no big whoop. Right? Well, unfortunately, this same pattern also applies to bigger transformations in our society. For instance, let’s talk about self-driving cars. People today believe, as a matter of faith, that fully self-driving cars are just around the corner and that we need to prepare for that future. Companies are telling us that this future is on our doorstep, but it still doesn’t exist. Tesla advertises a mode called Autopilot, but that mode is only supposed to be used with a fully attentive driver. Which means that a lot of people believe that the car can do things that it can’t really do, because of that halo effect that makes us want to believe that technology is capable of more than it is. And if you look at the broader landscape of self-driving, you start to see a lot of companies sucking up venture capital money and a lot of press attention, but they’ve produced nothing but a couple of cool pilot projects and a bunch of blown deadlines. Not to mention some unfortunate deaths. When it comes to something as important (and dangerous) as our transportation system, we need to have a lot more skepticism about what we’re willing to swallow as being ‘just around the corner.’ No one articulates that better than our guest today. Her name is Missy Cummings. She’s an engineering professor who directs Duke’s Humans and Autonomy Lab. She is an A.I. researcher, and she makes a compelling case that the claims made by the proponents of self-driving cars, and by the companies that claim to be producing them, are vastly overblown and do not reflect the actual technology that we have available to us or the technology that’s coming in the next couple of years. She makes the case very compellingly that this entire industry is making promises that it simply cannot keep, and that the vision of the future that a lot of people have is not going to come to pass, at least anytime soon. So, look, with all that being said, let’s get to the interview so you can hear it from her yourself. Please welcome Missy Cummings. Missy, thank you so much for being here.
Speaker 2 [00:07:01] Thanks so much for having me.
Speaker 1 [00:07:03] Let’s jump right into it. There’s so much talk right now about how we’re going to be getting self-driving cars in the next couple of years. You’re a skeptic on this topic. Tell me why.
Speaker 2 [00:07:14] Well, I’m a skeptic because I do this research every day in my lab at Duke University, and so I see the realities of what is and is not possible. I’m not saying there’s no chance that self-driving is going to come in the next couple of years, but the way it will come is going to be in substantially smaller markets and in limited applications, as compared to the dream that everyone has of jumping in your car, jumping in the back seat and telling the car to take you to Vegas.
Speaker 1 [00:07:45] Mm hmm, and this dream is really widespread. The dream is literally being sold by some of the automakers: Tesla currently charges you more if you want the ability to get this dream of self-driving later. So it’s very much part of their entire marketing strategy, and the public truly believes it; people are planning their calendars around the fact that they believe they’re going to have that ability in the next couple of years. Why do you think the reality is going to look so different? What are we all missing?
Speaker 2 [00:08:19] Well, the big problem is centered on what we call the ‘perception problem.’ Perception systems in self-driving cars (or flying cars, or really any system that uses artificial intelligence) are the long pole in the tent, meaning that everything relies on them working correctly. But the problem is that we really still do not know what we’re doing when it comes to artificial intelligence and ‘seeing.’ Computer vision systems are still extremely brittle, meaning that they can fail in unexpected ways. The best illustration I can have your listeners go look at is a YouTube video of a Waymo One car that had to be rescued when it got stuck by a single orange construction cone.
Speaker 1 [00:09:18] I saw this.
Speaker 2 [00:09:19] Yeah, it freaked the car out and eventually the rider had to be rescued by a human. So we’re still learning a lot and I tell people, ‘If we’re still doing basic research on a technology like computer vision systems, it’s probably not ready for commercialization.’
Speaker 1 [00:09:40] Yeah, I mean, I’m so struck by – I saw that video and I’ve seen lots of videos of self-driving cars with their vision overlaid: we’re looking at the camera feed and what they’re identifying things as. And it’ll draw a box around a person and be like ‘person, person, person, tree, person, person.’ The brittleness that you describe is extremely apparent, and it also strikes me that figuring out what to do with that information is even more difficult. The video is very striking: the car sees the orange cone and it stops, it pulls in the wrong direction. It just entirely doesn’t know how to – Its entire reasoning system ground to a halt because it saw something that it couldn’t identify.
Speaker 2 [00:10:28] That’s correct, and there are other events that we’ve had that will give you that same information. So one of the issues that we’re really worried about, from a cyber security perspective, is this idea of passive hacking. The way that these convolutional neural nets work (these are the algorithms that power a computer vision system), you have to ‘teach’ them using potentially millions of images. For example, to make sure that the car understands what a stop sign is, you have to show it a thousand, or a million, different examples of a stop sign. But if it sees a stop sign with a half inch of snow on it, and it’s never seen that before and has not been trained to see it, then it won’t know what that is. So that orange construction cone just appeared in a slightly different configuration than the one it was trained against. I’m quite sure that Waymo did train their cars to see the cone. But the problem with passive hacking is that if we know that there are these vulnerabilities, then people can exploit them. And so Mobileye went out and put a piece of black tape on a 35 mile per hour speed limit sign, extending the 3 out just a little bit, and the Tesla came by and interpreted the sign as 85 miles per hour.
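To make the training point concrete, here is a minimal sketch, assuming PyTorch and a toy two-class setup, of how a convolutional neural net learns signs purely from labeled examples. The architecture, classes, and data are hypothetical stand-ins, not any automaker’s real pipeline; the structural point is that the weights only ever move toward the examples the net is shown, so a taped-over or snow-covered sign sits outside everything it has learned.

```python
# A minimal sketch (illustration only, not any vendor's pipeline) of a
# convolutional neural net learning traffic signs from labeled examples.
import torch
import torch.nn as nn

class TinySignClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g. stop sign vs. speed limit
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of 32x32 RGB crops of road signs
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinySignClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training only ever nudges weights toward the examples shown. A sign
# configuration absent from `images` (snow-covered, taped-over, oddly
# lit) is simply outside the learned distribution.
def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data (real systems use millions of images).
fake_images = torch.randn(8, 3, 32, 32)
fake_labels = torch.randint(0, 2, (8,))
print(train_step(fake_images, fake_labels))
```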
Speaker 1 [00:12:01] Wow.
Speaker 2 [00:12:02] Then it started to speed up to achieve that. So that’s bad.
Speaker 1 [00:12:07] Yeah. I’ve seen lots of examples on the Internet, viral videos of ‘Here’s a mask that you can wear to fool facial recognition cameras.’ It’s like a video game: if I understand the way a video game works (having grown up with video games), I can sort of go, ‘Oh, here’s how the game logic works. If I do this, I’ll be able to trick it.’ It’s a very common thing that humans do once we understand technological systems; of course we’ll start doing that. If we have self-driving cars roaming around the streets, people will figure out how to manipulate the cars’ models of the world, and that’s bad. If you’re in a self-driving car and someone’s able to change what your car does simply by baffling it with some unusual stimuli, that’s really bad. Are we just looking to do something that is too difficult here? I understand why A.I. researchers would think, ‘Hey, this isn’t so hard.’ Driving is a system with rules, especially in the United States where we have a very rule-following traffic culture. We’ve got signage everywhere, and the signage is universal. I understand why that would seem like an easy problem to solve. But then whenever I watch one of these videos, I’m baffled by how complex the job actually is. I was watching one in which a car, I think it was a Tesla, pulled up behind some people loading a van in the middle of the street. They had double parked and they were loading a van with some stuff, briefly. And if I’m a human driver, I’m like, ‘OK, I understand who these people are, what they’re doing. I understand that they might suddenly walk into the street, but probably pretty slowly. So I’ll just give them a wide berth. I’ll wait.’ There’s a whole social dimension to this that is extremely complex. And I looked at it and went, ‘How could a Tesla possibly understand all of those things?’ This is a novel situation; we don’t understand that well how the human mind processes such a thing. So are we being incredibly hubristic to think that we can design a machine that can handle all this?
Speaker 2 [00:14:23] Yeah, I’m a futurist. I’m an academic professor, and it’s my job to look in a crystal ball and try to start working on technologies 5 to 25 years before you might ever see them. So I wear two hats here. The first hat is, I’m very optimistic that we will create new technologies that are beneficial to society and/or even beneficial to companies’ wallets. But I have to wear another hat that says, ‘OK, but we need to be realistic,’ and academics, I think, bear a responsibility to notify the public when perhaps a technology is escaping the barn a little too quickly, before it’s been properly vetted. I think that’s what’s happened here; we’ve got some basic research technologies that have been overhyped by Silicon Valley (and by some other academics) because people want to capitalize on this big VC startup culture. There was a big craze about six or eight years ago, where people started jumping into the self-driving space, and this goes to the Silicon Valley mantra of ‘Fake it till you make it.’ I think that we’re seeing this still with Tesla; it is just in the Silicon Valley ethos to overpromise and then hope that your technology development can move along very quickly to live up to your promises. I’m not really even blaming people for that. If we’re talking about cell phone apps, I’m good with that. You can overpromise, because it’s just my cell phone app. But I think what we’re seeing is that the ‘Fake it till you make it’ culture is bleeding over into safety critical technologies, and it’s just simply not going to work. And just like you say, there are people who really believe that it’s going to happen tomorrow. There are very smart people who believe it, people who should know better, and what that tells me is that the psychology of wanting to believe that something magical can happen is so strong that it makes people who should otherwise know better completely divorce themselves from reality.
Speaker 1 [00:16:42] But you can’t blame the public for wanting to believe it, especially because the tech industry has transformed our lives in so many beneficial ways over the last 30 years. The smartphone, the iPhone, is a thing from the future. That was what it was like when we all received it, and it’s transformed our society in many beneficial ways, some non-beneficial ways as well. Our expectations have been justifiably raised in a lot of ways, but you’re right. It feels to me that you’re responsible when you’re making these promises in an arena where – People die in transit every day, people die on our roads. And I don’t want to overhype; I think there’s a danger in overhyping these cases where someone died in a Tesla in self-driving mode. I certainly am of the opinion that their marketing is overhyped in an irresponsible way. But do you share that worry that, hey, some of these companies are making specific marketing decisions that are leading to unsafe outcomes?
Speaker 2 [00:17:52] Oh, absolutely. In the Missy Cummings blame tree, nobody is blameless. My personality profile is ENTJ, so I just run around judging everyone. I apologize if it’s not deserved. I fault the car companies the most because they are making huge claims, and I fault the public the least; because if you’re being told that your car has a full self-driving chip, and how many of us read the fine print (no one), it’s easy to see how people can develop these incorrect mental models about what their car can do. But while I blame the companies number one, I think that the government bears huge responsibility too, because they have the power to step in and stop these technologies, or at least kind of rein them in a little bit. Indeed, in Europe, Tesla cannot market its Autopilot as autopilot. It cannot call any system ‘full self-driving.’ So in other parts of the world where Tesla operates, they have been reined in, and we don’t see as many egregious accidents in other parts of the world as we do in America. But there’s plenty of blame to go around. I do think that users need to understand that their cars need their attention, but we also know from years and years of aviation research that if automation performs pretty well, that’s actually more dangerous than if it were to perform terribly. Because if your system, almost all the time, does a great job, and only every now and then fails at whatever task, that sets you up to be complacent and it sets you up to look at that cell phone. We’re all so easily distracted (and in this last year, accidents have gone way up despite the fact that people have driven fewer miles), and I suspect that this is actually the beginning of a larger trend that is not just related to covid: we are just increasingly becoming multitaskers and we feel so bored, even if we’re just driving. It’s just easy to look at your phone, to try to get that information that you seek.
Speaker 1 [00:20:30] And that’s the thing that frustrates me the most about these high profile cases where someone dies in a Tesla that was in Autopilot, and Tesla itself has said, ‘Oh, this guy was using his iPad when he crashed.’ And to me, it’s like, ‘Yeah, he was doing what you promised your software would allow him to do, which was take his mind off the road.’ What is the point, if not to allow him to be distracted? What’s the point of putting what is basically an iPad in the – I’ve been in a Tesla with someone who had the Autopilot engaged on the freeway, and they’re dicking around with the iPad. And I’m like, ‘This feels very dangerous to me.’ And they’re like, ‘No, we’re going to be fine.’ That’s why they made an entertainment center, as opposed to taking some amount of responsibility for the fact that maybe there’s a gap. Or in a lot of cases (these are cases in which Autopilot has perhaps steered someone into a median or made an error that led to a death), it required user intervention, and our human desire to trust technology kicks in so quickly. It’s the phenomenon of someone following their GPS directions into a lake; that’s the famous version of it. Or my favorite example: I read an anecdote once from race directors, from people who run marathons. Now that everyone is wearing a Fitbit, every time they have a marathon, dozens of people email the race director and say, ‘Your race actually was too long or too short, because my Fitbit said it wasn’t 26.2. It said that it was 27 miles.’ It doesn’t occur to them to think, ‘Oh, the $50 GPS tracker I have on my wrist maybe isn’t one hundred percent accurate.’ They think, ‘No, it is accurate and it’s the professional who’s wrong,’ and it is very human to always do that. That seems very dangerous, in this case, to me.
Speaker 2 [00:22:28] Yes, I think humans can be predisposed to over-trusting technology, but certainly in my own research, which spans different domains and different applications, universally we see that young people (or younger people like you) trust the technology way more. Older people like me are curmudgeonly, and there’s a great place for curmudgeons in the world. I’ve got news for you, you’ll get there, too.
Speaker 1 [00:22:59] I think I qualify now, as a curmudgeon.
Speaker 2 [00:23:02] It is amazing, though, how much trust the younger people have. They’re just digital natives, so this is a world that is much more familiar to them. We see a lot of over-trust, especially with car technology. I do want to point out (going back to the blame game), I don’t want to leave anyone blameless here. We’ve picked a lot on Tesla in this conversation. But I think the new big area that I’m concerned with is what all the other car manufacturers are starting to call their technologies that are similar to Tesla’s Autopilot. Look at any carmaker and they’ll be billing their new ‘hands free’ capability. Calling your technology ‘hands free’ and then telling people they have to pay perfect attention is maybe not quite as egregious as calling your system ‘autopilot,’ but still, using the phrase ‘you can be hands free while driving our car’ is setting people up to be mind-free. So I think it’s a problem, because we will believe. If you’re telling me I can drive hands free, great. Even though there’s a camera theoretically monitoring what I’m doing, if you’re still giving people up to 30 seconds on the freeway to be hands free, you’ll get mind-free about 2 seconds after they leave the steering wheel. So I think we’re perhaps at the most dangerous point in the history of driving, where we’re in this quasi automated role. We will be so much better once we get to automation everywhere, including in the cars and in the infrastructure. But for right now, I think we’re almost moving back to the days before we had tail lights, because it’s just a wild, wild west out there. We’re going to have all this automation, all these incorrect mental models. Some people hands free, lots of people mind-free. I think what this is begging for, and I definitely think this is what’s going to happen, is that there is going to be more regulation.
Speaker 1 [00:25:20] Yeah, but it depends on what the regulation is. My understanding is that NHTSA, our government agency that regulates these things, has been trying to get ahead of the industry and say, ‘OK, this is going to happen. Let’s help make it happen,’ in a way that, to me, certainly doesn’t seem like they are keeping a really, really close eye on it in the way that I would like them to. How do you feel about that piece of it?
Speaker 2 [00:25:51] Well, we’re also in a weird space where we just had a changeover of administrations. I think it’s too early to start pointing fingers at the Biden administration. But we can look back at what happened under the Trump administration: there was never a director (a head administrator) of the National Highway Traffic Safety Administration. So the agency that was responsible for safety in driving basically had an absent leader for the last four years, and I think that is a big reason why we’re seeing the vacuum that we’re seeing now. And we can compare ourselves again to Europe, where Europe has been much more hands on in terms of regulation – and by hands on, they haven’t introduced a lot of regulation so much as they’ve introduced things like: you can’t advertise your car as autopilot, you can’t advertise ‘full self-driving.’ So I think that there’s a lot that we can learn from other countries. But we are also seeing – I read an article the other day that currently 30 Tesla accidents are under investigation by NHTSA. So I can’t imagine that with those 30 accidents under investigation, nothing’s going to happen. There’ll be some recommendation that comes out as a result of that.
Speaker 1 [00:27:19] I feel like we moved very quickly into the social consequences. I actually want to ask you more about the A.I. piece of it, and how the technology that we have (especially around perception) differs from our imagination of how it works. Let’s talk more about what is missing from A.I. when it comes to self-driving: what are these systems simply incapable of that humans do very easily?
Speaker 2 [00:27:47] So what I’m about to say applies broadly to all A.I., not just driving A.I. Fundamentally, what we’re missing from a science perspective is replicating human judgment and reasoning. When we build computer vision systems, we are using two cameras that can give you stereo vision, just like your eyes can. So that’s how they’re doing it from a sensor perspective. Then the ‘brain’ for these systems is a huge collection of neural net models, trained using millions of images, that say, ‘If I see this thing, then I kick off a set of rules that goes along with it.’ I train the car to recognize these different objects and potentially different situations. But because the situations can be rapidly evolving and can be incredibly dynamic, that causes problems for the vision system, which is not really connected to a brain like humans have. I think the greatest creation of all time is the brain vision system, because we have a constant feedback loop that’s telling us not just how to classify the world, but how to have imagination about what might happen. That predictive ability allows us to avoid a lot of bad situations. One of the key elements missing from self-driving systems today – Bicyclists, boy. I mean, you’re in real trouble if you’re on a bicycle, still, around a self-driving car. Bicycles can become obscured so quickly, and a human can see a bike dodge behind a FedEx truck and go up on the sidewalk and project, ‘That idiot on the bike is going to go up on the sidewalk and then he’s going to come back and get back on the road in front of me, so I know to watch out for that.’ We just can’t do that in a self-driving system. Maybe you could try to do that with both the vision systems and some prediction, but the number of ways that can happen, the number of different presentations, combined with weird sun angles, makes this a very unsolvable problem for automated A.I. in the time frames that you would have to solve it.
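As an illustration only (my stand-in names and thresholds, not any vendor’s architecture), here is a sketch of the ‘classify, then kick off rules’ pattern she describes. The detector is faked with a label and confidence score; the structural weakness is the point: an object the perception stage can’t confidently label triggers no useful rule, so the planner freezes.

```python
# A minimal sketch (assumptions, not a real vendor architecture) of the
# "classify, then kick off rules" pattern. Real systems use deep neural
# networks for the labels; the weakness is the same either way.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision system thinks it saw
    confidence: float  # how sure it is

# Hypothetical rule table: each known label maps to a behavior.
RULES = {
    "pedestrian": "yield and give wide berth",
    "stop_sign": "brake to a full stop",
    "cyclist": "slow down and pass with clearance",
}

def plan(detections: list[Detection]) -> list[str]:
    actions = []
    for d in detections:
        rule = RULES.get(d.label)
        if rule is None or d.confidence < 0.5:
            # No learned rule, or the classifier is unsure: there is no
            # imagination to fall back on, so the planner freezes.
            actions.append(f"UNKNOWN ({d.label!r}): stop and call remote operator")
        else:
            actions.append(rule)
    return actions

# A cone seen in an unfamiliar configuration comes back low-confidence.
print(plan([Detection("traffic_cone", 0.31), Detection("cyclist", 0.92)]))
```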
Speaker 1 [00:30:22] Yeah, coming back again to that point: so much of our knowledge of what’s going to happen is social. This is certainly the case driving around L.A. I drive around L.A. a lot less than I used to; I’ve talked on the show before about how I kind of quit driving and started taking a lot more public transit. But I did drive here for many years, and driving in L.A. is almost like weaving through a subway station in New York City. There’s so much going on. You’re aware of, ideally, what every single car might do. You’re literally looking at a car that might be parked next to you on the side and going, ‘Oh, that car just parked. This person might open their door, so I shouldn’t get too close,’ or ‘This bike lane is blocked and this bicyclist is going to have to move around me.’ Or even just the thing of when you’re driving next to a bicyclist and you feel a little more nervous and you drive a little more carefully, because you see an unshielded person next to you and you’ve got a social response to them. There’s all of this stuff that we’re bringing from the rest of our lives, of knowing the way that people operate: seeing a person on the curb and being able to tell from their body language whether it looks like they’re about to cross the street or not. Once you start thinking that way, there’s so much going into this that it seems very – I’m not saying an artificial intelligence could never have all of that, but it starts to look like one of the most difficult things to model, not one of the easiest.
Speaker 2 [00:31:53] Right. Making reasonable guesses, I tell people, is truly important on so many levels. It is why we are not going to be able to put a lot more autonomy in medical applications, for example, real robots doing surgery. We aren’t going to get there for a long, long, long time in medical applications (not in my lifetime), because human anatomy can be so different from person to person that we just can’t have an A.I. system with a good enough perception system to cope with all that uncertainty. That’s really what it comes down to: the more uncertainty you have in the world, the worse A.I. performs. It’s funny, because I do like to come out to L.A. every now and then. But like every other person who visits L.A., the traffic drives me bananas. I would actually tell you that the traffic jams of L.A. are a perfect place to do what I think is the only really good application of level three autonomy in cars: a slow crawl. It’s called the traffic jam pilot, and Honda, in Japan, is starting to roll this out. The car can drive itself under slow conditions, and you can do whatever you want while the car creeps along in slow traffic. Then once the speed picks up past some predefined level, you have to take over the car. So indeed, I think L.A. could be served very well by that kind of technology. The caveat is: Audi did try to roll that out in Europe about a year or two ago, and the effort folded and failed. They weren’t able to roll it out because of legal and liability restrictions. What that tells me is that Audi just was not able to get their system to perform at high enough reliability. They haven’t said, but I suspect the real problem was the human hand-off. So, yes, the car can crawl along in very slow traffic. But what happens if the car’s doing that in some traffic jam in L.A.? I know I’d fall asleep. Then the car would come to a stop and it wouldn’t move until you woke up, and that alone would create a problem, if you had people falling asleep on the 405. So there are some bigger ramifications, and I think this variability in human behavior can be a big mystery to car manufacturers.
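To give a flavor of the hand-off problem she raises, here is an illustrative state machine; the states and speed thresholds are hypothetical, not Honda’s or Audi’s actual logic. Notice the dead end: once the system requests a handoff, an inattentive driver leaves it with nothing better to do than keep alarming.

```python
# An illustrative level-3 handoff state machine (my sketch, with made-up
# thresholds): self-drive only below a speed threshold, hand control
# back to an attentive human once traffic speeds up.
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()
    TRAFFIC_JAM_PILOT = auto()
    HANDOFF_REQUESTED = auto()

ENGAGE_BELOW_KPH = 30.0    # hypothetical thresholds
DISENGAGE_ABOVE_KPH = 50.0

def step(mode: Mode, speed_kph: float, driver_attentive: bool) -> Mode:
    if mode is Mode.MANUAL and speed_kph < ENGAGE_BELOW_KPH:
        return Mode.TRAFFIC_JAM_PILOT
    if mode is Mode.TRAFFIC_JAM_PILOT and speed_kph > DISENGAGE_ABOVE_KPH:
        return Mode.HANDOFF_REQUESTED
    if mode is Mode.HANDOFF_REQUESTED:
        # The hard, unsolved part: if the driver is asleep, the car can
        # only alarm, slow down, and eventually stop in its lane.
        return Mode.MANUAL if driver_attentive else Mode.HANDOFF_REQUESTED
    return mode

mode = Mode.MANUAL
for speed, awake in [(20, True), (25, True), (60, False), (60, False), (60, True)]:
    mode = step(mode, speed, awake)
    print(speed, mode.name)
```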
Speaker 1 [00:34:37] Yeah, to a certain extent it almost just seems like a fundamental mismatch of what we’re trying to do. Automation seems wonderful if you’re able to construct an automated arena for the automation to work in: if you’re able to construct a factory where everything is kept separate and we’ve got robots moving around on little tracks or whatever. Our human streets are designed to be anti-automated. We made a decision not to go with trains, which run on tracks according to predictable schedules that keep the average person away from the machine, where you step on at your allotted time. Instead, we decided, ‘Let’s build streets where everyone gets a machine that they get to use willy nilly according to the whims that they have at the moment.’ Because that’s what we wanted, and now we’re going to try to insert automated machines into literally the messiest possible human system, where you’ve got tens of thousands of individual people doing things basically randomly based on, ‘Oh, my God, look, there’s a parking spot, let me screech into the lot real quick.’ We’re just being messy people, and that’s the thing that we’re trying to automate. That seems like kind of a big mismatch to me on a fundamental level.
Speaker 2 [00:36:00] Well, I think your intuition is spot on. If you want to be my graduate student now, you passed the first gate. I think you’re right; it’s absolutely correct that there is just a fundamental mismatch. Now, that doesn’t mean we can’t have anything. I do think that the slow speed shuttles that travel less than 25 miles per hour and go in short, geofenced areas are a good idea. The applications you would see there are last mile delivery. So I do think that you can get very safe systems that are doing slow speed deliveries in very well mapped areas that don’t require a lot of upkeep to keep remapping them. That’s one of the other problems: for these cars to be able to work anywhere all the time, they have to have very detailed maps, and the labor cost of keeping your maps up to date and as detailed as they need to be is substantial. I used to live in Boston. Boy, the orange cones would pop up in unexpected places all the time. You couldn’t handle that for real systems, because you’d have to map all those changes in detail and then upload them to the cloud and make sure all the cars got them. So I just think that there are some infrastructure penalties that people weren’t thinking about; things that for last mile delivery don’t become cost prohibitive. But I also think we’ll see slow speed shuttles, and in Vegas you can do this: you can take a self-driving shuttle from McCarran, the airport, down to the strip. Great application. You could layer in some additional infrastructure in lights along the roadways. It’s expensive, but if you’re only having to lay in additional infrastructure for a few miles, then that makes sense. So I do think we’re going to get something out of this crazy self-driving bonanza that we’ve been seeing for the last six, eight years. But I do think that within the next one to two years, you’re going to see a lot of long faces.
Speaker 1 [00:38:09] Yeah, your focus on infrastructure actually gives me a question I really want to ask you, but we gotta take a really quick break. We’ll be right back with more Missy Cummings. OK, we’re back with Missy Cummings. So, one of my worries about this: you were talking about infrastructure that they can, for instance, build in Vegas to make a particular trip work better. One of the concerns I have, based on the way I’ve seen our societal conversation about self-driving cars shape up, is the rest of society being perhaps built too much around self-driving cars. You mentioned the ability of people to put a piece of tape on a sign in order to fool a self-driving car. Certainly, I can imagine, people wearing different outfits and stepping into traffic could baffle a self-driving car. I actually even saw an editorial (can’t remember where), about a year ago, proposing, ‘Hey, maybe we should, in order to make the world easier for the poor self-driving car’ – because the technology is not really up to being a part of this human system, where people can step out into traffic at any time and there’s lots of chaos and stuff – ‘maybe we should fence off our streets, make it more difficult for people to cross the street at an intersection, make jaywalking physically impossible. Maybe we should basically turn every street into a Boring Company tunnel, where we’re reducing the amount of things that can happen on the street to make it easier for the cars.’ That sounds bad to me. I can easily imagine a world where, rather than making a transportation system that is more pedestrian friendly – I want to reduce deaths, but I don’t want to do that by making a world that disadvantages pedestrians even more and makes it even harder to get around unless you happen to be sitting in the back seat of a self-driving car. Because now we’ve turned every street in Manhattan into a subway track that you are not allowed to cross. Do you have that concern?
Speaker 2 [00:40:25] Oh, yes, indeed. These issues are what basically turned me from a regular stuffy professor of technology into a one woman Don Quixote-esque attack against all the windmills of self-driving. Literally, I was in a meeting with a group of people that I do research with here in North Carolina, and we were listening to people from the North Carolina Department of Transportation start laying out plans for how they were going to – Literally, they were going to tear down some parking garages, and they were thinking about making parking lots on the edges of the downtown area, because the idea is that your self-driving car would come in, drop you off, and then it would go to its self-driving car penalty box (wherever that was) in some out of the way place and wait there for you. Really planning, doing serious urban planning, for no parking for cars. While I would love to see fewer cars on the road – I am a big biker, I’m a big pedestrian, I would love to have more accessible public transportation – I looked askance at the idea that my taxpayer dollars were about to start going into a future that I knew was not coming. That’s when I started engaging people and started to become a very mouthy broad, I’m sure people would say. But I just couldn’t take it anymore. I couldn’t take it, because I work on these systems every day. I tell people this all the time, and I’m 100 percent dead serious: I would never get into a self-driving car that any of my students ever programed, because I know, first of all, the basic problems with convolutional neural nets, and then all the mistakes that programmers can make in the development of these cars. There’s just a phenomenal lack of testing to make sure that they’re safe enough. So, yeah, I absolutely think that people shouldn’t be spending taxpayer dollars to start trying to build the city that you think is going to come when self-driving cars get here in whatever year anyone has promised. Because it’s not coming. I’ve pointed a lot of fingers and I’ve made a lot of enemies by saying (both at the federal and state levels) that these governments don’t have anybody on their staffs who knows what they’re talking about when it comes to self-driving and A.I. in general. It’s a bigger problem that we have in this country: anyone who’s any good at A.I. goes to the sirens on the shore in Silicon Valley, and all the really top people are working strictly for the commercial market. It’s just a supply and demand problem. So we are not getting good, talented people moving into governments at the federal or state level who can flag these problems and start to understand what is overhype, what is a ‘fake it till you make it’ promise, what’s real and what’s the real timeline.
Speaker 1 [00:43:51] Yeah, and look at the success these companies have had going into a particular municipality and saying, ‘Hey, let’s do a trial here.’ I think Elon Musk’s Boring Company is a different example, but it’s the same phenomenon of, ‘Hey, here’s a very wealthy person who speaks very confidently about what the future is.’ I’m not an A.I. researcher, I don’t know. We’re not hearing the other side. The problem that you just laid out, though, is actually even more realistic than the one I laid out. The one I laid out is, ‘If we have self-driving cars, what if we design our cities too much for them?’ You’re saying, ‘No, we’re going to build infrastructure that is not even going to connect to anything that exists, because the technology is simply not there at all.’ So I’m curious: we have all these companies saying, ‘Hey, this technology is five years away. It’s five years away.’ What do you think is actually going to happen?
Speaker 2 [00:44:46] So I think in the next 5 to 10 years, you will see the slow speed driverless shuttles. I have a few former students who work at a company called Nuro, in the Bay Area. I think Nuro has a model, and I think Amazon’s jumped in this game. Slow speed, robotic last mile delivery will become a reality, and you’ll see more and more of these shuttles in limited areas. And maybe, at a stretch, in the southwest, where there is no snow that can fall on a stop sign and confuse the car, you’ll see some very limited robot taxi applications like Waymo has. I think the real question about the robot taxi problem is whether or not they can ever make it scalable and make a profit. So one of the hidden costs in these systems that people don’t really realize (and where I spend a lot of time in my research) is these remote operation centers. There’s no such thing as a fully autonomous vehicle anywhere in our world. You have to have humans overseeing them in some capacity. So when the Waymo car got stumped by the orange cone, there was a whole team; there were probably five to seven people involved in orchestrating the rescue of the passenger out of the backseat of the Waymo One car. You know, it’s funny. It would make a hilarious TikTok meme.
Speaker 1 [00:46:24] I think I actually saw a clip of it on TikTok.
Speaker 2 [00:46:29] The fact of the matter is, is that going to be scalable? Every time a car gets stuck by an orange cone, or somebody spray paints something on the front of a stop sign, you have to engage five to seven people to rescue the passenger; that’s just not scalable. I do think that it is still yet to be determined whether or not the cost of the remote operations center, where you have to pay people a lot more because their skill sets are a lot higher, really outweighs the cost of having drivers.
Speaker 1 [00:47:11] Yeah, it’s so funny the way you describe that. My understanding for this Waymo trial in Arizona is that they’ve got the Waymo cars self-driving around, but they’ve also got a fleet of people in vans, like a mile away, at all times that the cars are out, so that whenever one gets stuck they can go nudge it. It makes it look like a glorified Roomba. If you have a Roomba, sure, it uses ‘A.I.’ to clean your living room, but it’s going to get stuck under the couch and you’ve got to go get it. The ‘Waypoint Podcast,’ which I love, once described this as being ‘a Roomba foreman’: when you have a Roomba, you have to keep an eye on the Roomba and make sure that it actually gets things clean and doesn’t fall down the stairs or whatever. Maybe one of these companies comes up with one of these services, like Tesla says, ‘OK, you can turn on full self-driving,’ but it’s only in a city where they’ve got a rapid response team, and you’re paying $100 a month in order to have the privilege of being able to call them and have them rescue you at any time? This is not what we were promised. It’s far from that.
Speaker 2 [00:48:25] You and I could cook up a whole new business, because if they do charge you, we could come in and be like, ‘Don’t pay them a hundred, pay us 50 bucks a month and we can just track your cell phone.’ It does raise this question of what kind of derivative technologies and companies and services will spring up with this technology. I do think that one of the interesting questions, looking forward to more and more autonomy in cars, is: are we going to have to have more driver education? Are you going to have to have adult driving schools? What’s that going to look like? I used to be a fighter pilot for the Navy, and I had to go to two years of flight school to learn how to fly those things; these cars are going to become that level of advanced in terms of use and application. So what are we going to do in the future when your car significantly exceeds your cognitive abilities? The other area that I think is interesting, that people don’t realize (even in Arizona this is a big deal), is that the sensors on the car must be kept clean. If you get dirt in your eyes, you can’t see. Dust accumulates pretty quickly on these sensors, and even on the lidar. So now, is there a whole new world of little tiny windshield wipers? Are we going to start seeing some kind of development there? Are you going to have groups of people, like the windshield washers you see in New York City, but with special cloths, just to clean your vision sensors and your lidar to make sure they’re clear of dust and dirt and snow and what have you?
Speaker 1 [00:50:13] And once you put it that way, the number of things that can go wrong, the brittleness of the system because it is so complex, is massive. And that’s disconcerting in its own way, because we can end up – You’re an A.I. researcher, so I apologize if I’m putting words in your mouth. But I’ve heard a researcher say, ‘One of the weird things about these systems is that when something goes wrong, you often don’t know what it is,’ because you don’t fully understand the system that you’ve created. Because you’ve trained it on all this data and you’ve grown this neural network, these crashes can happen and it can be unclear exactly what the automated system did, and why. Was the sensor dirty? Did it misidentify something? What is the deeper problem?
Speaker 2 [00:50:59] We do not know how convolutional neural nets do what they do, and I just gave a talk to my robotics brethren at a conference. I call it the ‘dark magic.’ Magic is wonderful because you know that there’s some trick behind it, but you don’t know what that trick is. But you know that the magician has full knowledge of the trick. So magic works, and magic is delightful, because you know that you’re being conned, but you’re OK with it and you don’t want to dig too deep, because you know that the magician knows all. A.I. is a dark magic, because not only do you not know how it’s working, neither does the magician. I think that’s the real problem: there’s dark magic in A.I., and if it seems magical to the people who created it, I’m just telling you, run for the hills. Because we can’t figure it out, and that’s why Waymo has not been more successful, and why no company has had a successful self-driving program: because they’re constantly surprised in ways that they have no knowledge of how to fix. Same thing for that orange cone. I can tell you the Waymo crowd was scratching their heads over that one for a long time.
Speaker 1 [00:52:28] Yeah, it was such a wild video. Well, how much of this, that we’re seeing from these companies, is coming from the VC culture? When Uber was still working on self-driving cars, it seemed like a whole bunch of the point of the overpromising was, ‘Hey, there are a lot of investors who believe that this is going to transform transportation, and that whoever wins is going to be the new Standard Oil of transportation and monopolize the whole industry, so we simply need to make people believe that we are going to have this transformational technology. It doesn’t matter whether or not we do it. We just need to keep the con going.’ That’s how it often looks to me. Does it look that way to you ever?
Speaker 2 [00:53:20] Yeah, that’s part of the ‘fake it till you make it’ culture. One of the side hustles I do occasionally is that a VC will call me up and ask me to either look over some documents or, occasionally, go on a site visit to Company X. They’re thinking about investing, and they want me to tell them whether or not the company is legit. Do they have sound science? What is the likelihood they can deliver on the promises that they’re making to the VC? I can’t tell you who all these companies and VCs were, but I can tell you that one hundred percent of the time I told the VCs that the technology promise was not good, that it was going to fail, and that the startup company just didn’t have any basic science on their hands. One hundred percent of the time the VCs ignored me, 100 percent of the time they invested against my recommendation, and 100 percent of the time I was right.
Speaker 1 [00:54:30] I’m not going to say that makes you happy, to be in that position, but it must be a little bit satisfying to feel that.
Speaker 2 [00:54:42] In the dark world of A.I. consulting, when no one listens to you? Yeah, of course. I’d love to tell you that I’m not an ‘I told you so,’ but I think we’ve established that I’m a judger. I definitely judge them as dumb asses. I’m sorry, I know you can’t say that on the radio.
Speaker 1 [00:55:03] It’s a podcast, you can say whatever you like.
Speaker 2 [00:55:04] Well, if I’m telling you something (and I’ve spent my whole professional career doing this), and then you want to go against my recommendation; man, that’s not my fault. Right? But it does worry me. And again, this is one of the reasons why I get more vocal every year: because I don’t want my taxpayer dollars invested in this. I actually want to know when people are in full self-driving mode when I’m on the road and they’re near me, because I think that they’re not just putting themselves at risk, they’re putting a bunch of other people at risk. Last year, I think, was the first year that we started seeing Tesla accidents that caused the deaths of other people, not just the driver. We are starting to get to an area where people – and this is where regulation will eventually, finally kick in – because the VCs are there to make some serious cash. I get what they’re doing. And it’s true that in the last couple of years, I’ve had fewer of them contact me, because in the end they know what I’m going to tell them and they don’t want to know the truth. So this goes back to this magical thinking of, ‘If I believe strongly enough in it, then it will come true.’ And I see all of the ‘progress’ that these companies are claiming (not just Tesla, but Waymo, other companies and other car manufacturers). And if you think it’s crazy now: soon the flying cars are coming. So we’re just going to double down on some of the difficulties that we’re having with autonomous systems. I appreciate that desire, and of course, as a researcher, I want this money to keep pouring in. But I think we need to start making a clearer, crisper threshold between what is still research and what is really ready for public use.
Speaker 1 [00:56:58] Yeah, look, we’ve been speaking so negatively for most of this conversation. You are, however, doing your own work on A.I. I’d love to know: what is the positive vision that you have of what A.I. could actually be used for (in transportation or in anything else) that is responsible? Things that could save lives rather than hurt them, and that wouldn’t be overhyped. If you could set A.I. policy for NHTSA and for Tesla and make it all work right, what could it actually do to benefit us?
Speaker 2 [00:57:37] I am a big proponent of humans plus autonomy, instead of humans versus autonomy. I don’t think we should be worried so much about replacing people as we should be about combining the relative strengths and weaknesses of both humans and autonomy. For example, one really cool project I worked on that recently ended was a flying copilot robot. The idea was that in the world today, we actually have way more planes than we do pilots. We need pilots. Even with COVID, there’s still a shortage of pilots. But we have a lot of older aircraft that it would just be cost prohibitive to retrofit to be digital aircraft, so that they could have some kind of autopilot. So with a Boeing company called Aurora Flight Sciences, we built a robot arm that could listen to you and talk back to you, and it could do everything in the cockpit in terms of basic flying. It could grab the yoke, it could flip switches. And its whole job was to relieve your workload. So if you needed to get up to go to the bathroom, you could tell the copilot to take control, and it would, and it would call you if anything became a problem. Or if you’re in the goo (and what that means is you’re doing an approach and you’re completely in the clouds and the rain is bad) and you’re completely focused on the task of flying, you could have it make a radio call for you. This is a great example of how to balance humans and autonomy working together. I think Toyota has a great concept with its Guardian concept: instead of trying to replace the human driver with autonomy, try to keep the human driver from doing dumb things, like running off the road when they’re talking on the cell phone. Maybe some lane keep assist, but in ways that – Some cars can have what I call ‘ping pong,’ where they bounce between the lanes. So, more effective lane assist to keep drivers from running off the road. Emergency braking assistance, I think, is really important. Recently, we have a project where we developed a drone listening system, for rogue drones. Usually I work on how to make drones better, but in this case we’ve been working on how to alert prisons, for example; it turns out drones dropping contraband into prisons is a huge problem.
Speaker 1 [01:00:26] Wow, what you just opened up the whole topic for another episode. I’m like, hold on a second. Wow. Really?
Speaker 2 [01:00:33] Oh yeah.
Speaker 1 [01:00:33] That’s wild. OK, I accept it. I’ll look into this later. I accept that this is happening. That’s wild.
Speaker 2 [01:00:39] It’s a worldwide problem. Prisons can’t afford the expensive radar based drone detection systems that cost hundreds of thousands of dollars. So we invented basically a three hundred dollar device – it looks like a little Alexa puck – and it listens. It can listen for drones and it warns you when a drone is nearby, and it’s got a convolutional neural net in it, and it’s not right all the time. So there’s a collaboration: the system notifies the person, the person can tell the system whether or not it is a drone sound, and then the system can learn over time. Understanding where it’s the right place to use autonomy, and how to balance it with what humans can do, especially the human ability to reason under uncertainty: that is where the real strength is going to lie in the next 30 years.
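Here is a sketch of the human-in-the-loop pattern she describes, under stated assumptions: the classifier is faked with a random score, and the alert threshold and function names are hypothetical, not the lab’s actual system. The shape is the point: the system flags candidate drone sounds, a person confirms or rejects, and the confirmed labels are banked so the net can be retrained and improve over time.

```python
# A human-in-the-loop sketch (assumptions only, not the lab's real code):
# a classifier flags candidate drone sounds, a person confirms or
# rejects, and the labels are kept for the next training run.
import random

def classify_clip(audio_clip: list) -> float:
    """Stand-in for the CNN: returns the probability the clip is a drone."""
    return random.random()  # a real system would run the neural net here

training_bank = []  # (clip, human_label) pairs saved for retraining

def handle_alert(audio_clip: list, ask_human) -> None:
    p_drone = classify_clip(audio_clip)
    if p_drone > 0.6:  # hypothetical alert threshold
        is_drone = ask_human(audio_clip)             # guard confirms or rejects
        training_bank.append((audio_clip, is_drone))  # feedback for retraining

# Simulated use: the "human" here just answers randomly.
for _ in range(10):
    handle_alert([0.0] * 16000, ask_human=lambda clip: random.choice([True, False]))
print(f"{len(training_bank)} human-labeled clips banked for the next training run")
```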
Speaker 1 [01:01:38] Yeah, we need to remember that technology is a tool for us to use, insofar as it allows us to do a better job at what we do. I love the example of a pilot who is still going to be in control of flying the plane, but could use a little bit of assistance from a system that they fully understand. As opposed to – tell me if this fits into your critique, because I don’t understand enough about flying to know if this quite fits – the problem with the Boeing 737 MAX. My understanding was that the reason those crashes happened was an automated system that was causing the plane to do something the pilots were not ready for; that there was a mismatch between their understanding and what the plane was doing. Do I have that right?
Speaker 2 [01:02:29] That’s correct.
Speaker 1 [01:02:30] And so that’s a bad version of that, because the pilots in question were like, ‘Holy shit, what’s going on?’ and did the wrong thing as a result; they were fighting an automated system that caused the plane to crash. Versus them remaining the expert, remaining in control, understanding everything that’s happening in the machine and having assistance from A.I. or from a human built machine, rather than being disrupted by it or taken aback by it or surprised by it. Having help, rather than taking your eyes off the road.
Speaker 2 [01:03:05] Yeah, that’s exactly correct, and I think the whole Boeing 737 MAX situation was especially egregious because Boeing, at one time in the past, was considered the world leader in developing collaborative automation between humans and aircraft. And this just goes to show you how it can be a slippery slope. Companies want to believe that eventually, if they can just get rid of pilots, they would save a ton of money. I see this everywhere. You name me an industry and I can promise you, I’ve had a conversation with someone in that industry who came to me and said, ‘I want to get rid of everyone in the fast food restaurant. How do I do that? I want to get rid of drivers. I want to get rid of pilots. I want to get rid of people in manufacturing settings.’ So it is that siren on the shore: wanting to believe that you can get rid of people to increase your bottom line. I think that sometimes the lure of that is so powerful that it causes companies to forget the knowledge that they had, and in the case of Boeing, no company knew the risks better than Boeing’s human factors division. But that being said, they were still overridden in the design of that system.
Speaker 1 [01:04:29] Wow. And I mean, this desire to get rid of people – Andrew Yang ran on the premise of what you’re talking about, that these companies are going to do this and therefore we just need to give people money, because everyone’s going to lose their jobs to automation. Why do we need to envision the future that way? Why not envision a future where humans are still part of the equation, but their lives are better and easier? Where they’re able to do their jobs more effectively, able to care for themselves and care for others, using artificial intelligence as tools. Why not a version of A.I., of self-driving cars, that keeps us behind the wheel, doesn’t have us sitting in the back on iPads, and doesn’t eliminate every cab driver in the world, but instead makes transportation better for people without people having to lose their jobs?
Speaker 2 [01:05:27] Yeah, I totally agree. I don’t mean to start dissing Andrew Yang, who is a fellow nerd, so I appreciate his nerdy approach to the world. But his statement that we’re going to automate everything anyway is a very common opinion among CEOs, the C-suite leaders of companies, and government organizations, and it’s kind of a hidden IQ test for me, because it tells me they don’t understand autonomy and automation at all. If they really knew what they were talking about, they would realize they’re not going to get that. We’re nowhere near that, for all the reasons that self-driving cars are not going to be here in the next five years in the way that they’re being sold to us. I do think that the world of collaborative automation, regardless of what everybody is saying, is eventually going to be clear to people. It may happen slowly, one domain at a time, but I tell taxi cab drivers all the time: do not get worried. There are still so many problems that, by the time we actually figure them out, the change will come incrementally; we’re not going to wake up one day and have self-driving cars. Recently, right before covid, I went to Peru. I was in Cusco, and in Peru, people just park on the sidewalks. Dogs are everywhere. It’s kind of like in Italy; it can be a free for all. Or some places in India, where you’ve got cows and bikes and trikes and cars. Oh, no. This is completely out of the realm of self-driving cars for now. So people are safe. But what we need are government leaders who, instead of spouting off and saying things like that (even when they’re smart nerds), actually understand what they’re talking about.
Speaker 1 [01:07:28] Well, I can’t thank you enough for coming on to give us a little reality check. It’s exactly what we love to do on this show. Where can people find out more about you and your work?
Speaker 2 [01:07:38] You just Google ‘Missy Cummings, Duke,’ and you’ll get my website. I’ve got every paper there; there are so many papers. If you’re having a problem with insomnia, just go to my website. They’re pretty dense papers, but there are also a lot of papers written for the general public, because I believe that public education is critical. Clearly, I need to sit down with Andrew Yang and have a conversation (among many other people), and I do that quite a bit. I make it a personal goal to brief people in power, to make sure that they understand what is and what is not possible.
Speaker 1 [01:08:14] Yeah, well, maybe you could run for president on the opposite platform of ‘this is not going to happen as soon as everyone thinks it is.’ Really, really appreciate you being here, Missy. Thank you so much.
Speaker 2 [01:08:25] Thanks for having me.
Speaker 1 [01:08:28] Well, thank you once again to Missy Cummings for coming on the show. I hope you enjoyed that conversation as much as I did. It gave me a lot to think about. Hey, if you want to support the show, visit our custom bookstore at factuallypod.com/books, where you can buy the books that some of our incredible guests have written. When you do that, you’ll be supporting not just the show, but also your local bookstore, because it is through bookshop.org. That is it for us this week on Factually. I want to thank our producers Chelsea Jacobson and Sam Roudman, Andrew Carson, our engineer, Andrew W.K. for our theme song, and the fine folks at Falcon Northwest for building the incredible custom gaming PC that I’m recording this very episode on. You can find me online at AdamConover.net or @AdamConover wherever you get your social media. Thank you so much for listening, and we’ll see you next time on Factually.