July 27, 2021
EP. 115 — Is AI Really Inevitable? with Glen Weyl
Everywhere you look, some pundit is claiming that AI is soon to replace humanity. But is that really inevitable – or are we choosing to make it happen? Technologist Glen Weyl joins Adam to discuss why he believes AI isn’t really a technology – it’s an ideology.
Transcript
Speaker 1 [00:00:22] Hello, everyone, welcome to Factually. I’m Adam Conover, so happy to have you join me on the show once again as I talk to an amazing expert and learn all the crazy shit that they know, that I don’t know. Let’s jump right into today’s show. There’s this narrative that we’re all hearing now that soon, superpowerful, human-replacing, God-like artificial intelligence is inevitably going to come, and so we need to get ready. We got to prepare ourselves for the advent of the A.I. super being on planet Earth. You hear it all the time. Andrew Yang ran an entire presidential campaign on the idea that A.I. is coming soon, so we need to start just giving people money because there aren’t going to be any jobs for humans left to do once A.I.s are driving all our forklifts and whatnot. Elon Musk is constantly making headlines by saying ‘A.I. is coming soon and we’re unleashing the demon, and we must beware.’ More and more, we see big promises about A.I. in almost every consumer product under the sun, from self-driving cars to our cell phones; even Photoshop has A.I. crap all over it now. But is it really a guarantee that this version of A.I., a superintelligence that is going to replace humanity at every human activity, is coming? Is that really inevitable? Is that something that is destined to happen, or is it a choice? Is it something that we are choosing to make manifest? Well, on the show today to answer that question, we have Glen Weyl. He’s a researcher who has one of the greatest titles I have ever heard: he works for Microsoft, and his title is Office of the Chief Technology Officer Political Economist and Social Technologist, or O.C.T.O.P.E.S.T. (pronounced octo-pest). I’m sure there’s a story behind that title, and no, I did not ask him about it, which was my mistake. You’ll have to go Bing it to find out the answer. But without further ado, please welcome Glen Weyl. Glen, thank you so much for coming on the show.
Speaker 2 [00:02:31] It’s my pleasure, Adam.
Speaker 1 [00:02:32] OK, let’s jump right into it. You wrote a piece for Wired, a little while back entitled ‘A.I. is an ideology, not a technology.’ It’s a very provocative title. What do you mean by that? What have most people misunderstood about A.I. and the way that we use it today in our daily discourse?
Speaker 2 [00:02:51] I think the reality is that most people don’t know what A.I. means exactly anyway; everyone has a pretty vague notion of it.
Speaker 1 [00:02:57] Most people literally just think it means the robots are going to kill us. Most people will say that, and that’s about it.
Speaker 2 [00:03:06] I would say probably most people don’t even have the ‘robots are going to kill us,’ they just think it’s some weird techie word. I think most of the rest of the people think the robots are going to kill us and then – anyway. But I think that what A.I. really means, within the technical communities that are developing it and the people that are investing in it, is quite different from the usual description that people superficially give of it. I mean, a lot of people say, ‘Oh, wow, there’s this amazing stuff. There’s these neural networks that are recognizing people,’ or sometimes companies will just refer to pretty much all digital technology as if it’s A.I. or something like that.
Speaker 1 [00:03:51] You use the example that a couple of years ago, something that would make one image look like another image, or image recognition technology, that sort of thing. Five or 10 years ago, we would have just called that image processing: ‘Oh, there’s some fancy image processing, like Photoshop.’ And now we call it A.I., like, ‘Wow, Instagram has A.I. to make me look like a bunny rabbit, or it can identify your face using A.I.,’ that kind of thing. It’s just become a different way of labeling the same old technology, to some extent.
Speaker 2 [00:04:20] Yeah. And in fact, what used to be called A.I. was these logical systems that would diagnose you or would ask you a few questions. That used to be called A.I. That’s no longer called A.I., and anything that’s cool and new just gets called A.I. That’s not a very useful definition; that’s basically just calling all technology A.I. In fact, for the people who are actually building these things and investing in them, that’s not what they’re trying to achieve. They have a very particular vision, which is that they want to create autonomous systems: systems that operate independent of a lot of people’s oversight or involvement and that achieve intelligence that’s human level on a lot of different things. That vision that they’re trying to achieve, I claim, is not actually technology. It’s not anything specific. It’s an ideological vision of the future.
Speaker 1 [00:05:17] It’s more like a particular set of ideas one has about how technology should be used. It’s a goal that one would have of technology. It’s like if I was going to say, ‘Everyone should all ride Segways all the time and no one should walk anywhere.’ That’s not a statement about technology. That’s a statement about how I think technology should be used.
Speaker 2 [00:05:41] Yeah. Another way of putting it is: going to Mars is not a technology. Going to Mars is something you want to do. You might invent some technologies to do it. And in fact, this is a great example. Think of the Apollo mission; the Apollo mission was not a technology. The Apollo mission was a goal. We wanted to go to the Moon and we invented all sorts of stuff along the way. A lot of it was really useful in all sorts of other applications; GPS, which nobody thought had anything to do with space, came indirectly out of the Apollo project. But going to the Moon was the goal. A.I. is a goal set. It’s something that we’re all trying to achieve together. It’s not a specific technology.
Speaker 1 [00:06:23] Yeah. What a strange thing. The strangest thing about it, is that it’s so often talked about as being inevitable. One of the candidates for the Democratic nomination a couple of years ago, literally ran on ‘A.I. is inevitable. It’s going to replace all of our jobs, and so we need to start cutting people checks because no one is going to be able to work anymore because A.I. is going to do it for us.’ And my reaction to that was always like, ‘Why? Why would that be inevitable? What math equation says that that’s going to happen?’ As you say, that sounds like an ideology. It sounds like something you make happen. It doesn’t seem like something that naturally follows the progress of technology.
Speaker 2 [00:07:09] Yeah, I think most claims about what’s inevitable in technology are wrong. In fact, there’s a long history of people who want to make something happen saying that it’s inevitable.
Speaker 1 [00:07:23] That’s what you do when you want to make something happen. Every political candidate says ‘Our energy will overwhelm the opponent and we are destined to win and et cetera,’ if you’re in any kind of conflict and you want something to happen, you say it’s going to be inevitable and that the forces of nature are on your side.
Speaker 2 [00:07:41] And this comes back to Marxism. An ideology like Marxism is the original example of that right? It said the inevitable course of history is that communism will supplant capitalism and we’re all helping it happen in the right way, or something like that.
Speaker 1 [00:08:00] And that’s ideology, not science.
Speaker 2 [00:08:03] Exactly.
Speaker 1 [00:08:06] OK, that’s starting to make more sense to me. Why that statement of ‘A.I. is going to take over and do x y z instead of humans.’ Well, when someone says that, they’re not making a scientific prediction, they’re making a statement about what they want to happen and what they are going to make happen. Is that your argument?
Speaker 2 [00:08:24] Yeah. Well, let me give you an example, the definition of success in A.I. is usually this thing called the Turing test. A Turing test is where there’s a person (on IM or whatever) talking to a computer and a human. If they can tell which one is the computer and which one is the human then the machine loses, and if they can’t tell which one is which then the machine wins.
Speaker 1 [00:08:49] This is a very famous test, and by the way, I always thought this test was kind of bullshit.
Speaker 2 [00:08:54] So look, the Turing test is really a three person contest. There are two people involved and one machine involved. So if the machine does really well, it can win. But if the people do really badly, the machine can also win. If the person is completely drunk and can’t tell the difference between anything, then the machine wins. If the person who’s participating in it has been indoctrinated by their society to behave exactly like a machine, then the machine wins too. The real question of the Turing test, to me, is not like, ‘Is this an interesting test,’ but rather, ‘Why are we taking the side of the machine in this?’ Do we want a world where people are so stupid that they can’t tell the difference between machines and people, or where people are so robotic that nobody has any use for humans anymore because they’re just as bad as robots? I would say our goal set (if you want to present that problem) should be to make sure we create a world where people are interesting and rich enough that no one’s ever going to mistake them for a machine.
Speaker 1 [00:10:04] Wow. It’s a really good point that the Turing test leaves out the human at the other end. Famous thought experiment, or criterion, set up by Alan Turing. It’s fun to think about, and, I think, a very important idea. But yeah, it leaves out the context of who the person is. Here’s the story I was going to tell: when I was in college, I had read about the Turing test and for fun I took Eliza. I’m sure you know what Eliza is, but for everybody else, it’s like –
Speaker 2 [00:10:39] There’s a very interesting historical story around that, by the way, which we’ll come back to it in a second.
Speaker 1 [00:10:42] Oh, I would love to hear it. Well, Eliza is the very simplest conversational program ever. It’s from the 60’s, and basically, you go, ‘Hello, Eliza,’ and it says, ‘How do you feel today?’ And then you’re like, ‘I’m feeling sad.’ And then it just sort of plugs in ‘How do you feel about feeling sad?’ It’s very, very, very simple. What I did was, I took it and I created a new instant messenger account (tells you what year this was, I was using A.O.L. Instant Messenger) and I started messaging my friends from a new account with Eliza. And despite how rudimentary this thing was, they all believed it was a person instant messaging them and they got so angry at it. They were like, ‘You’re not making any sense. Why are you speaking gobbledygook to me?’ And then the program would reply, ‘How do you feel about why are you speaking gobbledygook to me?’ But they would still engage with it, and I was like, ‘Fucking Eliza passed the Turing test. I tricked all these people into thinking it was a real person,’ and I know it’s not technically how the Turing test works. So I understand your point that who the person is on the other end is left out of this experiment.
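For readers who want to see how little machinery the trick Adam describes actually requires, here is a minimal sketch of an Eliza-style responder in Python. The rule set and pronoun map are illustrative stand-ins, not the original program’s script; the point is that the responder does no understanding at all, it just matches a pattern and plugs the user’s own words back in.

```python
import re

# Hypothetical, tiny ELIZA-style responder: no understanding,
# just pattern matching and reflecting the user's words back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def respond(message: str) -> str:
    match = re.match(r"i(?:'m| am) (.*)", message, re.IGNORECASE)
    if match:
        return f"How do you feel about being {reflect(match.group(1))}?"
    # Fallback: plug the whole message back in, gobbledygook and all.
    return f"How do you feel about {reflect(message.rstrip('?.!'))}?"

print(respond("I'm feeling sad"))
# -> How do you feel about being feeling sad?
print(respond("Why are you speaking gobbledygook to me"))
# -> How do you feel about Why are you speaking gobbledygook to me?
```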
Speaker 2 [00:11:47] Well, and who the person that they’re comparing it to is left out of the experiment too, right? And the thing that’s interesting about Eliza is that Eliza was not actually created by an A.I. person. One thing that people forget about (and this is, again, the issue of ideology and the history of it and so forth) is that people have been arguing about this really since the beginning of computer science. In fact, the founders of computer science argued against A.I. being a good way to go, saying that this was not a good set of goals, that it didn’t make any sense, et cetera. In fact, Eliza was created by one of the people who was in that camp as an illustration of how stupid the Turing test was. It was actually created to do precisely what you did with it, not to be a therapist.
Speaker 1 [00:12:37] Wow. The thing is, I very quickly felt bad. I did it two or three times, and then I realized I was upsetting my friends because they were in an argument with something that they didn’t understand, under false pretenses, and I was like, ‘Why am I doing this to people? I shouldn’t do this.’ So, I think it actually had its intended effect on me. You said that a lot of people at the beginning didn’t feel that A.I. was an appropriate set of goals to work towards. How did we end up in the situation now where it seems to be one of the main goals of an entire sector of our economy? I mean, every possible company, every tech company is devoting massive amounts of resources to A.I. Consumer products are coming out that all have A.I. in the name. We, as consumers, really want to believe in these products. We get products that say A.I. on them and we’re like, ‘Oh, yeah, it’s working. It’s smarter than I am.’ It seems like we’ve all swallowed the pill. Why’d that happen?
Speaker 2 [00:13:45] I think that an important thing to understand, and this you get from studying political history and the history of ideologies, is that what determines the success of an ideology is almost never its effectiveness, like, whether it actually does good things in the world. It’s drama. There’s a guy named Henry George, who probably almost none of your listeners have ever heard of.
Speaker 1 [00:14:15] I actually think you’d be surprised. But for those who don’t, please tell us.
Speaker 2 [00:14:21] Henry George was this amazing guy. He had the best selling book in the English language for 30 years. He was the founder of the center left in the United States. In fact, the term progressive comes from the title of his book, ‘Progress and Poverty.’ He was the first person to run on a real center-left platform for a political office in the US. And in fact, one of the guys he beat was Theodore Roosevelt. That’s where Theodore Roosevelt got his progressive ideas from. He was just an incredibly influential, amazing guy.
Speaker 1 [00:14:51] And mostly forgotten today.
Speaker 2 [00:14:53] Mostly forgotten today. If you want to ask, ‘Who was it that played the biggest role in inspiring the stuff that became the New Deal and the postwar settlement that made the world work for a while?’ I would say Henry George is probably one of the most important people, but he’s totally forgotten. Why? Because his stuff actually made some sense and it actually did stuff and it actually kind of worked. As a result, it kind of just got incorporated into institutions in a variety of ways and sort of faded into the background. Whereas Karl Marx had this apocalyptic vision of the end; the clash of this and that and whatever. It didn’t work. Everywhere it was tried, things went really badly.
Speaker 1 [00:15:43] They’d argue that it wasn’t tried correctly, right?
Speaker 2 [00:15:46] Well, exactly. But that’s the point. The thing that never actually works, that is always tantalizingly out of reach and in imagination, but apocalyptic – either it’s going to do something amazing and bring in utopia or it’s going to destroy everything. Things with that character are great ideologies. But things that just make the world work better, they just make the world work better and then everyone forgets about them. There’s this guy, J.C.R. Licklider. Licklider was the founder of the first five computer science departments in the world. He was the program officer at the Defense Department who gave out all the grants that created those computer science departments, and those became the first nodes of what became the internet. If you want to talk about someone who shaped the actual technologies that people actually use today, he’s probably the most important person. But you’ve never heard of him. You’ve heard of Turing, you’ve heard of Minsky, but you’ve never heard of J.C.R. Licklider. The reason is, his stuff just works. He actually got stuff done. He just didn’t care about big ideologies, big visions of the future. He just cared about actually making things work, and so we kind of forgot about him. But his whole vision was man-computer symbiosis and computers as a way of communicating between people, rather than as a computational device to replace people and so forth. All the founders of the actual tools that we use today were followers of Licklider. Marvin Minsky and the A.I. people had this imagination-capturing vision; mostly people thought it was horrible, but a few weird people thought it was great. That’s sort of the same thing that’s happened with Marxism. It just came to be the ideology that everyone talked about. Mostly because people were scared of it, but partly because people were attracted to it.
Speaker 1 [00:18:00] And it’s fun to talk about, and by the way, there’s a lot of value in talking about those ideas, but it’s like brain velcro. You can spend a lifetime pulling it apart and dissecting it. Or, honestly, we talk about Elon Musk too much on this show, but five or six years ago, before everyone sort of caught on to the scheme a little bit, he would go on at tech conferences like, ‘Oh, I think we’re all living in a simulation.’ Everyone would go, ‘Oh, what does this mean?’ It’s the comment that launched 10,000 podcasts. Same thing with ‘I think we should be very careful with A.I., we are unleashing the demon,’ that kind of thing. It’s a very hot thing to say.
Speaker 2 [00:18:51] Donald Trump does this a lot too, right? His whole thing is that when he doesn’t want you paying attention to X, he says something really inflammatory that polarizes people. But the thing is, when something’s inflammatory and polarizing, it drowns out sense-making and it drowns out your ability to actually work on the real problem, because everything gets distracted by the ‘for and against’ on whatever this inflammatory thing is.
Speaker 1 [00:19:21] Yeah. In the case of A.I. or ‘are we living in a simulation,’ sometimes the inflammatory thing is also completely hypothetical. It’s a philosophical question more than it is a real one. What are the real issues about A.I. or technology that are being drowned out that we should be talking about instead? What is being missed?
Speaker 2 [00:19:47] Well, I think what’s being missed is that when we have an ideology that says ‘We’ve got to do this crazy, amazing thing and be smarter than any human being and we’ve got to do it in an autonomous way’ – some system has to do it without any humans being involved, which is what A.I. is saying – the way that you get that done is by putting as many resources as possible inside of some veil. Sort of like in the Wizard of Oz; you’ve got the curtain, and then you put as much stuff inside of the curtain as possible. Because that’s the way that you make whatever is inside of the curtain super awesome and amazing and ultra powerful. You want to have as few people behind the curtain as possible, because the more people that are behind the curtain, the less it seems like there’s no one behind the curtain. What you basically do is concentrate an enormous amount of resources in the hands of a very tiny set of people. And that is bad. If you want to call it communism, you can call it communism. If you want to call it ultra capitalism, you can call it ultra capitalism. Whatever version of it it is, it’s a huge amount of power going to a tiny set of people, and that’s just really not a good way to make our future work. We’re seeing all over the economy, all over our politics, what that’s doing to us, and we just have to get past that. We need to have a vision of what progress looks like that actually empowers different sets of people and doesn’t just concentrate all that power in this fake autonomous system.
Speaker 1 [00:21:25] Yeah, and concentration of power is one of the biggest problems, maybe the biggest problem, in human society right now: capitalism becoming hyper-concentrated, power becoming geographically concentrated in specific regions and on the coasts rather than in the rest of America, monopoly capitalism in business. I get all that, absolutely. But you’re saying that with this A.I. ideology, when somebody says ‘A.I. is going to take over, we’re going to build systems that run autonomously without human intervention,’ the effect of that is actually that some human is intervening. Just only a couple now; all the power is being concentrated in a couple of people. Am I getting it right?
Speaker 2 [00:22:12] Exactly.
Speaker 1 [00:22:13] Wow. That makes sense to me. Because so many of the A.I. systems that we have are things like facial recognition for law enforcement, which is entirely about casting a wide net over a lot of people and concentrating the power to determine who’s who and where in the hands of a very small number of law enforcement agencies or companies.
Speaker 2 [00:22:37] You know, there’s all this discussion about A.I. bias, and that’s an important discussion, and addressing particular biases is important and so forth. But the thing is that the systems are going to be biased. There’s no such thing as an unbiased system. That’s actually the fundamental problem. There is no unbiased system; it’s not like bias is a problem to be fixed. People are biased, and systems that are designed by people are going to be biased. The question is: who determines what the bias is and who has the power to figure that all out?
Speaker 1 [00:23:10] Safiya Noble talked about this on our show, I believe.
Speaker 2 [00:23:14] Oh, she’s great. That’s great that you had her on. That’s great context for this.
Speaker 1 [00:23:16] Yeah, she’s terrific. And go listen to that episode in our archive if you haven’t, folks. She talked about how people have biases to a greater or lesser degree. But it’s part of being human and when people design a system, they embed their own biases within it.
Speaker 2 [00:23:33] Yeah, and the thing is, what we need to seek is not ‘no bias,’ what we need to seek is a distribution of power over the digital systems we have. So that we actually have a pluralistic society where people can have their different communities and so forth, and that’s what’s being undermined by this myth of autonomy. Because the myth of autonomy is making us believe that there’s just this neutral, independent thing that’s outside of anyone’s control that’s just causing things to happen and therefore deflecting all the responsibility from the people who actually are designing the systems.
Speaker 1 [00:24:11] Yeah. Wow. This is making a lot of sense to me. It is a different vision of technology, the other vision of technology you talked about where technology is something that is meant to help individual people as a tool. That’s the techno utopia I grew up in, in the 90s. The personal computer era, the early internet. Everyone can buy a personal computer and make their own website, and communicate with their family. Or you could do your taxes and you can make a spreadsheet or whatever. It is designed as a tool for a human to use. But A.I. is the opposite, it’s ‘Oh no, this is something that’s going to exist that nobody uses. It just sort of exists and it happens to you. It’s something that is done to you, that someone else controls and implements.’
Speaker 2 [00:24:57] What you said about the 90’s was not a coincidence. That stuff was all directly designed by Licklider and his buddies as a response to what they thought was the problem with the A.I. direction for technology. In fact, my family is involved in this in kind of a funny way. My dad worked at a lab at Stanford that was working on A.I., and in fact, my dad was co-founder of what was arguably the first A.I. start-up in the 80’s. He worked down the hall from a guy named Doug Engelbart, and Doug Engelbart was the inventor of the mouse and the graphical user interface. Basically, all the stuff that you associate with personal computing came out of his work, and he was sort of rivals with my dad’s lab, because they were pursuing the A.I. thing and he had an opposite approach, which he called augmenting human intellect. They were working on this opposite problem, and you saw what bore fruit. In fact, that experience ended up changing my dad’s mind about what was the right thing to pursue, after having lived through that whole revolution with personal computers. But that lesson was not one that our society learned. We came back to the A.I. thing because we didn’t have that personal experience with it.
Speaker 1 [00:26:18] Well, we learned it for a while. But you know what all of this reminds me of? I’m a fan of science fiction literature, and I used to read old Isaac Asimov stories and stuff like that. When I read these in the 90’s and early 2000’s, I was really struck by the fact that the computer in stories from that era is always called ‘Multivac,’ and it’s a giant computer that you talk to. There’d be science fiction short stories where there’s one enormous computer and everybody has a little teletype to it where they can ask it questions. Like, ‘Oh, great Multivac, tell me the answer to this or that,’ and it’s like, ‘The answer is 42.’ I was referencing Douglas Adams there. But as a kid I was like, ‘Well, that’s stupid. Isaac Asimov was dumb. That’s not what computers turned out to be.’ They turned out to be a thing where you have your own little computer that you can do whatever you want with. It’s not some superintelligence. But when you’re describing this to me, I’m like, ‘Oh, that actually was in the air. That was what some people were trying to build, and they just ended up losing, to all of our benefit.’ But now the people who want the one giant supercomputer that we are all praying to like a God, those people are back, basically.
Speaker 2 [00:27:29] Yeah. And under the title of ‘inevitability of A.I.’ and whatever, we’re all funneling all the resources of our society into the hands of the people with that bizarre, scary, apocalyptic vision.
Speaker 1 [00:27:44] Wow, OK, let’s take this moment for our break because I have a lot of momentum that I want to ask you more about, and I don’t want to start a new line of questioning before the break. I want to go read some ads for car insurance or whatever and come back and keep grilling you, because this is wonderful.
Speaker 2 [00:28:01] Can I read one poem as we go to the break?
Speaker 1 [00:28:05] No guest has ever asked to read a poem before the break, so I’m going to grant it because I really like this request. Read the poem, please.
Speaker 2 [00:28:11] So this, I think, captures the spirit of what we should be doing with technology. This is from Taiwan’s Digital Minister, Audrey Tang. It’s her job description. She says: ‘When we see the Internet of Things, let’s make it an Internet of Beings. When we see virtual reality, let’s make it a shared reality. When we see machine learning, let’s make it collaborative learning. When we see user experience, let’s make it about human experience. And whenever we hear that the singularity is near, let us remember: the plurality is here.’
Speaker 1 [00:28:44] That was beautiful. On that note, we’re going to go to break. We’ll be right back with more Glen Weyl. OK, we’re back with Glen Weyl. I’ve read a couple of ads. Before that, you read a poem, which is much more nourishing than an ad, frankly. I’m curious why you mentioned that this poem was written by a Taiwanese person, someone working for the Ministry of Information?
Speaker 2 [00:29:20] She’s the Digital Minister of Taiwan.
Speaker 1 [00:29:23] Digital Minister of Taiwan. I know that Taiwan has come up in your work. Tell me about why Taiwan is an inspiration for you.
Speaker 2 [00:29:33] We need more than anything not to talk more about A.I. and how stupid it is, etc., but to show people a better way. People need to start talking about and focusing on what actually matters. Taiwan is the society in the world that I think is most effectively showing a different way of doing things. This woman has an incredible life story but really encapsulates (both in her life story and in the work she’s done there) everything that we should be aspiring to in the way that we design our digital society.
Speaker 1 [00:30:09] So what is it that they’re doing in Taiwan that is so wonderful?
Speaker 2 [00:30:14] They have a participatory democracy platform that more than a quarter of the citizens of the country are monthly active users on, where people figure out consensus oriented solutions to major policy problems, participate in hackathons and upvote solutions to water pollution or issues that they’re having with mask delivery, et cetera. So they’ve actually managed to create an infrastructure; where rather than wasting all of our time screaming about whatever the latest divide in American politics is at the national level, people are actually participating in a more concrete way in finding digital solutions to the problems that they face. That has become, rather than the polarization that goes on in Facebook or Twitter, the focus of the digital culture in Taiwan.
Speaker 1 [00:31:23] Wow. Now first, I want to say that sounds at first very Pollyanna-ish, and I’m like, ‘Well, hold on a second. How do you really get people to log on and work in a participatory way? Aren’t they just going to fight with each other?’ But it does make me think about how platforms like Facebook and Twitter are specifically designed in ways that breed division and breed unproductive relationships. They’re done that way because that benefits the people that own the companies. To have those arguments, to have that misinformation, to have that rancor, et cetera: that’s why the platforms exist that way, and they don’t have to. We could build technology that does not do that to us and that gives us a fruitful way to interact with each other, that actually serves us better as a tool.
Speaker 2 [00:32:22] We know that that’s possible because anyone who’s ever participated in a thoughtfully mediated conversation has seen it. If you work for a company that has decent management practices, then you probably will have gone through trainings that taught you how to facilitate a conversation. This is not some magical thing; there are huge volumes of management practices about how you have a respectful, meaningful, inclusive conversation. Now the question is: can we scale that to platforms where there’s millions of people participating? I think the answer is that if we wanted to, we could focus on building capacities that actually do (in a scalable or A.I. way) those functions that facilitate human collaboration and consensus building, et cetera. But if instead what we do is say, ‘Oh, we’ve got this set of incentives to just sell ads to people, and now let’s just throw an optimization engine at it and do it in the smartest way possible for that given goal,’ then you’re going to get the information ecosystem that we have.
Speaker 1 [00:33:27] Yeah. An example of that is maybe Wikipedia, a shining example in America (or really in worldwide internet culture, but it started in America). Wikipedia has technology behind it, but the technology hasn’t changed that much since 2003. It’s really a set of values and a set of community standards that facilitate discussion and conflict resolution. And Wikipedia has its problems. It has a very non-diverse volunteer base, et cetera, et cetera. But it’s created a resource that we think of as being a technology resource, but that could only have been created by people.
Speaker 2 [00:34:12] The thing is that everyone is like, ‘Oh, Google, blah blah. A.I., blah blah blah blah.’ But if you actually look, there’s been some computer scientists and economists who’ve tried to study this and they find that something like 40 percent of the value that people get out of internet searches comes from Wikipedia articles.
Speaker 1 [00:34:29] Of course it does.
Speaker 2 [00:34:30] So you go to Google and they’re worth, I don’t know, a trillion whatever dollars. But most of the actual value there, as opposed to the crap, is coming from something that’s built on a completely different set of principles. Not around A.I. optimization and not around profit maximization, but around building thoughtful community. Imagine if we could scale that process, and we could have a thousand Wikipedias rather than a thousand Google features. Think about how much better of a world we’d have. That’s what they’re doing in Taiwan.
Speaker 1 [00:35:03] And putting it that way, by the way, it’s too much fun to rag on the A.I.. I know you said we shouldn’t just do it, but it is too much fun to do because so much of Google, just their search product over the last ten years has been, ‘We use sophisticated algorithms to give you the answer you want before you even search for it. We put the answer right there on the screen using A.I. and blah, blah, blah.’ It’s just scraping Wikipedia! Half the time when you search something on Google, it’s literally that they just pulled the fact from Wikipedia and made it bold on the front page of your search results and half the time you got to click through anyway to get to what you actually need.
Speaker 2 [00:35:40] There’s a great XKCD comic, which I don’t know if you’ve ever seen Adam, but there’s a stop sign and it says ‘In order to complete your website registration, please identify whether there’s a stop sign in this photo. Our autonomous vehicle is approaching the stop sign, so please do it in the next three seconds.’
Speaker 1 [00:36:00] Right, right. Well, yeah, there’s that. You log into a website and it asks you to identify what’s a bridge and what’s a stop sign, and you’re like, ‘I’m helping out some fucking A.I. somewhere that is being trained on me.’ These things are ultimately built on humans one way or another, I think is the point. These systems that we build are always going to be humans all the way down at the end.
Speaker 2 [00:36:27] The question is whether we conceal that and undermine the dignity and participation and voice and agency of the people involved; and only give voice and agency to the geniuses who ‘create’ the system? Or do we recognize all those people, lift them up and magnify their voices?
Speaker 1 [00:36:49] Yeah. Well, that would have been a great end to the podcast, but we have another 25 minutes to go, so let’s keep exploring those ideas. That was just a wonderful concluding line, and you could even say it again at the end if you want. But are there any positive examples of things that are called A.I., even frivolous ones, that you think are a good way to do it? Or is that entire term for you like, ‘I just would love it if nobody ever said it again.’
Speaker 2 [00:37:22] I mean, I don’t think that the terminology is useful, but there are definitely technologies that fall under it, or that use the same set of techniques, that do better things. So let me give you the example again from Taiwan. There’s a system there called Polis. What does Polis do? It’s a Wikipedia-like structure, but with active moderation using some statistical techniques, things that people would usually call A.I. So what happens is that anyone – imagine that we’re talking about gay marriage or something. People can enter and say, ‘Here’s what I think about gay marriage,’ right? Then people can say, ‘Do I agree or disagree with someone else’s statement?’ Based on their responses and the language in the statements, you can then cluster these together, and you realize, ‘OK, well, there’s this population of a million people, but there’s only really like 10 opinions, more or less.’ There’s one of 10 things that everyone is saying, and then you can actually identify which of these is the most articulate way of saying it, based on what people are voting for. Then you can read those 10 statements. You can’t listen to a million people, but you can read those 10 statements. Then you can have people say things again, but now you score the next statements not based on how many votes you get, but on how diverse the votes you got are across the groups from the first round. So you actually get points if you manage to get support from people who are coming from different groups or places; we’re not just repeating and digging ourselves into the same position, but we’re actually creating new positions that cut across the existing divides.
Speaker 1 [00:39:06] I see.
Speaker 2 [00:39:07] And if you iterate a system like that, you can get pretty quickly to at least some rough notion of consensus on most issues. It’s come up with some really brilliant things. One example: take the gay marriage case. They went through a system like this, and in Taiwan, a very traditional Confucian society, there’s this notion that when two people get married, their families automatically get married as well, because of this whole notion of extended families in the Confucian tradition and so forth. But the thing is, a lot of the younger generation don’t really believe in that and want to just be able to marry. And the older generation say, ‘Do we want to be forced into this extended family relationship that goes through a gay marriage?’ What they ended up coming to, using this type of thing, was that they actually separated out the marriage of the individuals from a separate contract that was signed by the extended families. And so that gave the freedom to a gay couple to marry, but it also gave the freedom to the families to say, ‘Look, we don’t want to be joined together at this moment, or we do.’ And that’s the sort of win-win solution that these types of processes can lead you to if you have the right incentives.
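A rough sketch in Python of the two mechanisms Glen describes above: first cluster voters into opinion groups from their agree/disagree votes, then score each statement not by raw support but by how evenly its supporters spread across the groups. The toy vote matrix, the two-group k-means clustering, and the entropy-style diversity score are all illustrative assumptions, not Polis’s actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy vote matrix: rows are voters, columns are statements,
# entries are +1 (agree), -1 (disagree), 0 (no vote).
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  1],
    [ 1, -1, -1,  1],
])

# Step 1: cluster voters into opinion groups based on how they vote.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Step 2: score each statement by how evenly its supporters are spread
# across the groups (an entropy-style measure), not by raw vote count.
def diversity_score(statement: int) -> float:
    supporters = groups[votes[:, statement] == 1]
    if supporters.size == 0:
        return 0.0
    shares = np.bincount(supporters, minlength=2) / supporters.size
    shares = shares[shares > 0]
    return float(-(shares * np.log(shares)).sum())  # 0 if one group, higher if mixed

for s in range(votes.shape[1]):
    print(f"statement {s}: diversity {diversity_score(s):.2f}")
# The last statement, which everyone supports, scores highest: it cuts
# across the divide instead of repeating one group's position.
```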
Speaker 1 [00:40:23] This is a technologically assisted method of policymaking where it says, ‘Hey, let’s poll people in this specific way and let them participate and see what we can come to.’ But it’s technology designed, as a tool, to form consensus rather than divide folks.
Speaker 2 [00:40:45] Yeah. And the thing is, there is A.I. in there, but you probably lost track of the fact that there was even A.I. in there, because the focus was not on the A.I. The focus was on helping people build consensus, and then we built whatever tools we needed to do that, right? That’s how I want to see technology be used. Not ‘Let’s get it to some human capability or whatever and then figure out what to do with it.’ Because usually the easiest thing to do, if you build something to imitate a human ability, is to unemploy the people whose ability you’re imitating. Whereas if you say, ‘No, our goal is to help people reach consensus,’ now maybe you’ll unemploy a few facilitators, but that’s not the main thing that’s going to happen. The main thing that’s going to happen is you’re going to get a more cooperative society. I’m not saying no one ever gets unemployed by technology, but if technology is built with the goal of imitating human capabilities, it’s probably pretty likely that it’s going to replace a lot of people. Whereas if it’s built with the goal of facilitating some kind of human cooperation, it might unemploy some people, but probably the main thing it’s going to do is facilitate the cooperation.
Speaker 1 [00:41:49] Right. We can choose the goal of the technology that we create. We don’t need to create technology that is going to harm humans. We don’t. We can just not create technology that’s going to put humans out of business. We could create technology that is going to help humans become better and help us do more.
Speaker 2 [00:42:06] I mean, of course, there might be unintended consequences. I’m not saying you can predict everything and of course you need to worry about it. But why set up the goal as replicating every human capability? Why not set up the goal as fostering a diverse, pluralistic democratic society where people cooperate with each other and hear each other’s perspectives in a way that can lead to reasonable consensus?
Speaker 1 [00:42:30] Yeah, I think that’s a very wonderful vision. That’s a very wonderful vision. But we still somehow seem drawn to the A.I. model in so many ways. People seem to want to believe it. We’ve talked about it on the show before: the way that we tend to trust technology more than we should. The example of people following their Waze directions into a lake, right? I know so many people in L.A. traffic who religiously follow Waze, even though I do not believe it gets you anywhere any faster. Because you’re making left turn after left turn, there’s no way. You’ve got to wait five minutes every time you make a left turn. You’re wasting time. Just go straight; it doesn’t matter if there’s less traffic on the other street. But the fact that a piece of technology is telling them that ‘this will be the fastest way’ relieves their anxiety, because now they’re not worried that there’s a faster route somewhere. People seem to like to be told that. This is the most trivial example, but one that really bothered me as a comedy writer: for a couple of years, there were all these posts on Twitter, ‘I taught an A.I. to write a Seinfeld episode and here’s what it spit out.’ And then there would be a post of a couple of pages of a fake Seinfeld episode. And people were like, ‘Oh, it’s so funny, because look how stupid the computer is.’ And I was looking at this, knowing a little bit about A.I. text generation and knowing a lot about comedy writing, and no fucking computer wrote this, all right? A person wrote this; the person who made the post wrote the thing. Maybe they used a program as basically a magnetic-poetry kind of thing, where they let it generate and then picked and chose the best bits. But at the end of the day, a human did it. But we all love to imagine that a computer did it, and so everyone just talked about it as though that’s what happened, even though it didn’t. We seem to have some attraction to this idea.
Speaker 2 [00:44:24] It goes way back historically, way, way back. There’s a great piece by Edgar Allan Poe from the 1830’s in which he basically says that everybody’s obsessed with this thing called the Mechanical Turk. The Mechanical Turk was this person hiding under a chessboard, and its exhibitor was claiming that the machine was playing chess. I’d have to find his phrase exactly, but he says that everyone, all the great technical geniuses, think this is the greatest thing that people have ever created: this thing that’s totally independent of human agency and is playing chess. Of course, it was just a person hiding underneath the machine.
Speaker 1 [00:45:13] A little man making the machine work. Yeah.
Speaker 2 [00:45:16] And the thing that’s so funny is that then Amazon’s Mechanical Turk was named after this. They couldn’t have been more self-consciously aware of what was going on. Yet nowadays, everyone thinks that everything that’s done by Mechanical Turk is some magical thing that came out of the machine, rather than paying any attention to the people who actually do the work. And in fact, my colleague Mary Gray wrote a book, ‘Ghost Work,’ which was all about that. And there’s a great story by E.M. Forster from the turn of the century called ‘The Machine Stops,’ that I really recommend to everyone. It’s like ten pages long, and it’s one of the best science fiction things ever written. It totally anticipates The Matrix and everything else afterwards. And it’s basically about us going down the path of turning more and more over to a machine and the consequences that follow from that. But yeah, it’s a perennial attraction precisely because it’s so repulsive, I think. It’s sort of like the thing where you get vertigo on the edge of a cliff: really, what you’re afraid of is that you’ll throw yourself off, not that you’re going to fall off, right? That’s somehow how this is. There’s something so apocalyptic about this vision that we can’t resist pursuing it, you know?
Speaker 1 [00:46:41] Well, yeah, that is a really striking point. If you are in a world where you’re like, ‘Hey, I’m going to get up and go for a walk because my Apple Watch told me to, and Waze knows the best directions, and I’m going to renounce’ – if you’re having the experience of it feeling pleasurable to renounce your humanity in some way and take orders from a machine, it might naturally follow, ‘Well, we’re all just going to do this to the grave; of course we’re going to do this to ourselves.’ But we don’t have to. We have the experience of using technology for our own benefit and knowing, ‘Hey, this is helping me or this is not helping me. I can use it or not use it.’
Speaker 2 [00:47:22] And I think what makes the difference – and I think this is very clearly pulled out by history, and what I think has a chance of making the difference here – is a sense of a real threat that we have to rally against. So you think about what got us out of the 1930’s: it was really the threat of fascism. Why is Taiwan the place where this is happening? Probably not much of a coincidence, right? China’s right there, and they need to show that liberal democracy can work, so people are willing to devote their time to defend their way of life. The other place where this is working incredibly well is Estonia, right on the border with Russia, constantly facing that threat. So those are the cases where this really works, and I think that can give us a reason to be optimistic, because I think increasingly (especially after COVID) a lot of people in the West are feeling like, ‘Can we compete with the Chinese system?’ I think the Taiwan case pulls that out really effectively. I think we have a chance of rallying people around that shared sense of purpose, to remember that freedom isn’t free and that we actually have to take on republican responsibilities. Not capital-R ‘Republican,’ but what it is to be part of a republic, if we want to avoid losing our republic. Famously, they asked (I think it was) Franklin after the constitutional convention what kind of government he had given us, and he said, ‘A republic, if you can keep it.’
Speaker 1 [00:48:55] I love your optimism. Why’d I say it that way? [in different intonations] I love your optimism. I love your optimism. But I have to press you a little bit, because you said that competition can really spur change. But I hear all this talk currently of an A.I. arms race with China; with the authoritarian China, not the Taiwanese country that claims that mantle. Whenever we hear that, it specifically seems to be goading us to create the authoritarian version of A.I.: the version that tells humans what to do, that operates without us, and that replaces us, because, ‘Oh, that’s what the Chinese are going to do, so we got to build it first.’ So it seems like that arms race is pushing us in the wrong direction, or the direction that’s opposite to what you lay out.
Speaker 2 [00:49:45] Yeah, and the threat of fascism in Germany created the New Deal in the US, and it created Stalinism in Russia. So obviously, the same stimulus can lead to different outcomes. I think we have a moment of choice. We have a moment of opportunity, but we also have a moment of great peril. The question is, how do we meet that opportunity? I think the story of Taiwan, which is a case of an actually successful society using technology to overcome that exact threat, and how they’re doing it successfully, that’s what we need to tell people. That’s what we need to look to. If there’s a single thing in the world I could snap my fingers and make happen, it would be to have a really compelling documentary about the experience in Taiwan, or maybe a biopic or something like that.
Speaker 1 [00:50:39] You got to think bigger than a documentary coming out. If you could change one thing about the world, you’d be like, ‘I want there to be a documentary, and it’ll air at Tribeca and then get picked up by Showtime’? You can do more if you’ve got genie powers; you should just try to solve climate change first.
Speaker 2 [00:50:54] I don’t want to do more, because I think that democracy – when you use genie power, the first thing that you should think about is making sure that you don’t wish for something ambitious. Because if you wish for something ambitious, you might well get what you wished for.
Speaker 1 [00:51:13] Aha. But the documentary, if you do that in a monkey’s paw way, the worst that happens is it’s a bad documentary and people are like, ‘I didn’t like the documentary very much,’ but I guess you didn’t turn into a cat or get a Hitler elected or something like that.
Speaker 2 [00:51:33] Exactly. I mean, the problem with A.I., as people often describe it, is always that, ‘Well, it’s going to be too powerful and we’ll wish for the wrong thing and it will destroy the world,’ or something like that. And that’s a real problem, and that’s a reason why I would never wish for something that is well beyond the sort of power that’s reasonably allocated to me in society. I would wish that for any wish I make, other people wouldn’t be like, ‘Oh shit, he wished that.’ They would be like, ‘OK, fine, whatever.’
Speaker 1 [00:52:14] You shouldn’t make a wish that gives you authoritarian power over other people, basically.
Speaker 2 [00:52:18] You should make a wish that gives you a chance to participate in a democratic conversation and persuade other people around to your view, not a wish that just changes things. You know what I mean?
Speaker 1 [00:52:28] Wow. I’ve never thought of this egalitarian philosophy of wish granting. Like if you wish, ‘I wish I was so wealthy and famous and lived infinitely long,’ you’re turning yourself into a God and other people might not like that, that’s an unethical wish. You should be thinking about a wish that’s like, ‘I hope everybody is able to have a fair share and a fair say in their community.’
Speaker 2 [00:52:57] But even something like that – well, what is a fair say in your community? I would say, maybe, ‘Here’s a message that I wish everyone could hear.’ Or ‘Here’s a story that I wish people could partake of, or not.’ Those are the sorts of wishes that I have.
Speaker 1 [00:53:14] Well, you have that wish as regards the people who listen to this podcast, who are listening to your voice right now. You can make them learn something. So what is your takeaway and message for them, especially the next time that they’re listening to another podcast or reading the paper about ‘A.I., and this is how it’s going to change the world,’ the claims that are normally made? How should we take those, and what should we try to cultivate in ourselves instead?
Speaker 2 [00:53:45] Learn about Audrey Tang and what they’re doing in Taiwan. Don’t let people talk about A.I. and the inevitable thing and how we’re locked into this and that without pushing back against them and asking them whether there’s actually a basis for it, whether it’s actually scientific consensus. Treat those things critically, think about the type of future that you want for technology and where you see that actually happening, focus on those things, and try to push back on the cataclysmic, end-of-the-world type scenarios. The way I think about it is, there’s this term in philosophy of religion called ‘eschatology.’ That’s the study of the end of the world: how is the end of the world going to come? Then there’s a term in biology called ‘ecology,’ which is: there’s a bunch of different stuff and it’s all interacting with each other, et cetera. Try to think more of an ecology than an eschatology. Don’t think of the one thing that’s going to be the end of it all; think about how we get more and more richness and diversity and cooperation and so forth.
Speaker 1 [00:54:59] And this is something that we can do. That was a wonderful ending, but I’m just gonna make my own comment. This is something that we can do in our own lives using technology. One thing that really inspires me is that there are so many people who still use technology in that wonderful way from the 90’s, where they use it to empower themselves and empower their communities. Now there’s this trend of people who make a podcast for their friends, for a small community, or who build a software tool that helps them and the people in their community do something.
Speaker 2 [00:55:30] Mutual aid societies are something that I love and the people who’ve been building the tools for those. If you want to ask, ‘What should we do on the big picture public policy level?’ I think the thing we need to do most of all is empower and scale those best community oriented things, not take some sweeping action. You know what I mean? We need to lift up all of the things that are actually doing the good work at this community level.
Speaker 1 [00:56:01] Yeah, that is such an optimistic vision. And I really appreciate that you gave me a lot to think about and connected a lot of topics that we’ve had on this show before, and a lot of names that have come up in the past, in a really exciting way. So thank you so much, Glen, for coming on the show. Where can people find out more about your work? Where can they support what you do?
Speaker 2 [00:56:22] Check out RadicalxChange. It’s a global social movement of people trying to do all this kind of stuff. We’ve got a paper coming out that’s tentatively titled ‘How A.I. Fails Us.’ It’ll probably be out in a month or two, with a whole bunch of people from all different walks of life, including many of the top experts in A.I., all trying to do this pushback together in a coordinated way, along with some of the messages I talked about today.
Speaker 1 [00:56:52] Amazing. Thank you so much for being here, Glen.
Speaker 2 [00:56:55] My pleasure.
Speaker 1 [00:57:01] Well, thank you once again to Glen Weyl for coming on the show. If you want to pick up his book, you can get it at factuallypod.com/books. That’s factuallypod.com/books. Once again, I want to thank our producers Chelsea Jacobson and Sam Roudman, Ryan Connor, our engineer, and Andrew W.K. for our theme song. Hey, don’t forget about the fine folks at Falcon Northwest for building me the incredible custom gaming PC that I’m recording this very episode on. You can find me online at AdamConover.net or @AdamConover wherever you get your social media. Thank you so much for listening. By the way, please remember to tell a friend or family member about the show if you enjoyed it; it really does help us out a lot. Until next time, we’ll see you next week. Thank you so much for listening.