April 20, 2021
EP. 101 — Why Facebook Refuses to Fix the Misinformation Crisis It Created with Karen Hao
Facebook pushes dangerous misinformation to billions of people every day. So why can’t it… stop? This week, MIT Technology Review’s Senior AI Reporter, Karen Hao, joins Adam to detail her blockbuster report on how Facebook’s internal AI teams were instructed to stop fighting misinformation because doing so interfered with Facebook’s growth. Read her reporting at: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/
Transcript
Speaker 1 [00:00:02] Hello, welcome to Factually, I’m Adam Conover and let’s talk a little bit more about misinformation today. You know that it’s out there, you know that it’s bad for you. You know that you’re getting it anyway. Social media spits a constant stream of vaccination misinformation, lies about our democracy, hate speech and garden variety people-yelling-at-each-other into our eye holes and ear holes every single day. Misinformation is the chunks in the toxic soup we sip every time we open our apps. Now, we’ve talked about this on the show before. A couple weeks back, we had Mike Caulfield, incredible, wonderful media literacy researcher and educator who told us how we can fight back against it in our own lives, how we can use his “SIFT method” to separate good information from bad information and help our friends and neighbors do the same. But let’s talk now about who’s responsible for this misinformation. There’s a little bit of a question about that, isn’t there? I mean, these social media companies that we’re getting all this misinformation from, well, they don’t want us to think they’re at fault, right? I mean, they’d like to avoid responsibility for the horrible effects of the content they promote. And, you know, why wouldn’t they? Not taking responsibility is way easier and cheaper than doing the right thing. But I’d argue they are at fault and they are responsible because the amount of control they have over what vast numbers of Americans and people worldwide see and believe is truly stunning. See, the internet today is way different than it was when it got started (or at least when I got started on it a couple decades ago) when companies like Facebook or YouTube were getting going, well, you’d just post a thing. You know, you’d upload a video, you’d make a post. Some people would watch it. If they liked it they’d email it to their friend and say, ‘Hey check out this video of this guy singing the Numa Numa song in his house. It’s funny.’ You’d watch it, that’d be it. It was hard to imagine any single post being that big of a deal. Right? But today, social media is increasingly dominant. People spend on average about two and a half hours on social media every day across the planet. That’s a worldwide number! That includes about forty minutes of Facebook per person and a whopping billion hours of YouTube every day, with a B. OK, so social media companies aren’t just neutral platforms where we share funny videos. They are now essential players in where people get and share important information. About one fifth of Americans rely on social media to get their news, a fifth! That’s more than rely on local or network TV news. But needless to say, no matter what you think about the 11:00 p.m. ‘if it bleeds, it leads’ ‘man shot his wife and ate his dog live at 11:00,’ whatever you think about that kind of news, the news on social media is a lot worse. Pew found that those who rely on social media for news are less likely to get the facts right about the coronavirus and politics and more likely to hear some unproven claims. That’s a quote from Pew. So even though more people are getting their news from social media than TV, the quality of that news is, in a word, shittier. And unlike the TV news, social media companies are constantly trying to avoid responsibility for the garbage they promote on their platforms. See, these companies would have us believe that they’re just platforms, right? 
They just give us a way to upload a video and host it for free or to start a group to chat with our friends. Oh, they don’t control what we see, they just give us a way to talk to each other. But this is disingenuous at best and a lie at worst. It might have been true in 2005, right? Numa Numa Guy uploads the video. People email it around. That’s all that happens. But today these sites work very differently. Today all of these companies monitor the content that’s posted. They monitor how we engage with it and they make deliberate choices to push forward some posts and bury others. Whatever makes the user spend more time on the platform, that is what they push at us. And sure, some of this is done by algorithms. It’s a computer doing it, yes. But those computer programs are not forces of nature. They were written by people at the companies, prioritizing business goals that the companies have. OK? They didn’t wash up on a beach one day. The people at these companies are responsible for the algorithms and thus responsible for the results. But these few massive companies: Facebook, Amazon, YouTube, Twitter. They dominate our media ecosystem. They are media companies just like the giants of a couple decades ago were. These are the ABC, NBC, CBS and Fox of our present-day age. So when they say that it’s very hard or close to impossible to stop hate speech and misinformation, that’s not true. They can control what’s on their platforms, they just choose not to. Now, that is my personal opinion based on the facts that I have seen and my own judgment of the matter. I hope you found it convincing but you don’t have to take my word for it, to quote LeVar Burton, a man who means a great deal to me. No, instead, you can just listen to the direct evidence we’ve got for you here on the show today. My guest today is Karen Hao, a reporter at MIT Technology Review. She got tremendous access to a Facebook AI team trying to fight misinformation on the platform, which was then directed away from that task in order to keep Facebook big, growing and profitable. This is about as close to a smoking gun as we’re going to get. It is an incredible story and she’s an incredibly talented and brave reporter who wrote about this to great acclaim just a few weeks ago. We are so excited to have her on the show. Please welcome Karen Hao. We’re here with Karen Hao. Karen, thank you so much for being here.
Speaker 2 [00:06:03] Thank you so much for having me.
Speaker 1 [00:06:05] So tell me about, uh, let’s jump right into it. Tell me about this piece that you have in the MIT Technology Review about Facebook and AI, how you came to write it and what was the big surprise for you when diving into the piece?
Speaker 2 [00:06:20] So this piece is a nine-month investigation into the responsible AI team at Facebook. And what is interesting is, when I spent the nine months trying to figure out what this team does, what I realized was the story is actually about what it doesn’t do. So I thought if Facebook has a responsible AI team, it must be working on the algorithms that have sort of been criticized over the years for amplifying misinformation, for exacerbating polarization, these kinds of things. And the team doesn’t do that. And so the crux of the piece is sort of about this team and its failures. But it’s also about this revelation that Facebook has studied and known about the fact that its recommendation algorithms promote and amplify misinformation, hate speech, extremism, all of these things, for years. But this team doesn’t do anything about that. And in other parts of the company, it’s sort of halted or weakened initiatives that were actively trying to address these issues specifically because addressing these issues would hurt the company’s growth. And so that kind of encapsulates what I was surprised by. I just thought, going into this story when I learned that Facebook had a responsible AI team, that there was a good faith effort at the company to address many of the challenges that it’s publicly been talking about as very technically challenging and things that they’re hard at work on. But there just is no real coordinated effort to actually do this.
Speaker 1 [00:08:02] Yeah, it’s pretty stunning. In the piece, you’re speaking with/have a lot of access to the head of the responsible AI program, extremely highly placed folks in AI at Facebook and, if I can get into the meta piece a little bit, it sort of sounds like Facebook was like, ‘Oh, this is a great opportunity to show how great this program is and how seriously we’re taking it. We’re going to talk to this journalist and let them know that we really care about safe AI, responsible AI, fair AI, whatever you want to call it.’ But then when you actually engaged with them, you realized, wait, but this team is not doing the thing that we all think that they’re supposed to be doing, that they were sort of saying that they were going to address at some point.
Speaker 2 [00:08:50] Exactly. Exactly. I think the challenge of writing this piece was actually coming to that realization, because it’s hard to really identify when something is missing. It’s much easier to write about the things that are present. But while I was reporting the piece and talking with this team, I kept having this nagging feeling that if I talked to the average person on the street and said, ‘hey, Facebook has a responsible AI team, what do you think they do?’ it would be completely disconnected from the way that they were describing their work and their responsibilities. And it was like eight months into my nine months of reporting that it finally clicked for me. That ‘wait a minute,’ I’m not going crazy by thinking that the average person would completely misinterpret the responsible AI team’s work. There are actually legitimate reasons why people would think that the responsible AI team does one thing and there are legitimate reasons why Facebook is not actually doing that thing, but still using the term ‘responsible AI’ as a branding mechanism.
Speaker 1 [00:09:59] Wow. There are so many angles we could get into this from. But let’s start at it from the one you just said. I mean, if you asked me walking down the street, if you ambushed me with a microphone and said ‘Facebook’s got a responsible AI team, what are they working on?’ I would say they’re probably working on misinformation, QAnon, and things like people undermining election results. I’ve heard about the fact that the U.N. has implicated Facebook in the genocide in Myanmar, with misinformation being spread. You can tell me more about that than I probably know. But I would think, ‘OK, you know, we know that there are these algorithmic problems and that that would be what Facebook is trying to address.’ And those are real. So let’s start there. Those are real problems, correct? I’m not making that up.
Speaker 2 [00:10:50] Yes.
Speaker 1 [00:10:51] Okay, tell me a little about that.
Speaker 2 [00:10:51] Yes, those are real problems that Facebook itself was grappling with when they created this team. So this team was created in the aftermath of the Cambridge Analytica scandal and at that time there were multiple angles from which Facebook was being criticized, one for the actual scandal-
Speaker 1 [00:11:09] Can you remind me what that was?
Speaker 2 [00:11:11] Yeah, yeah. There was this political consultancy that was using the personal data of tens of millions of Americans without their consent to influence how they voted. And specifically they were using the user targeting algorithms that Facebook already had on their platform and weaponizing them to get the right content (often misleading content) in front of very specific people so that they could sway how they thought about different political candidates. And most infamously, they did this for Donald Trump’s campaign. But there was also this conversation around Russian interference at the time. Russian hackers were also weaponizing these user targeting algorithms to sway the election in Trump’s favor. And there were also the conversations around filter bubbles, like the fact that half of America was shocked that Trump was elected in the first place, and people realized that they were completely unaware of some of the conversations that were happening in the other half of America. That was also all about these algorithms kind of tailoring the content so specifically to you and your interests that you kind of lose awareness of other people’s interests and the other debates that are happening. So that was the bigger context in which the responsible AI team was then created. So this was very much on Facebook’s radar when they decided to put resources into a so-called responsible AI team.
Speaker 1 [00:12:46] And there’s also the issue of polarization, right? We’ve realized (and according to reporting, Facebook themselves know) that their algorithms, which are designed to maximize engagement (our time spent on the site), end up pushing people more polarized content and actually make the people themselves more polarized at the end of the day. Is that correct?
Speaker 2 [00:13:14] Yeah, exactly. There have been efforts, not coordinated, but sort of bottom-up efforts at Facebook where individual employees or teams will start studying what actually is the effect of Facebook’s algorithms on this question of polarization. And I spoke with an engineer who was on a team that was studying this problem and conducted myriad studies on this thing and basically found that because of the way that Facebook’s content recommendation algorithms tailor things to what you like, what you want to share, what you click on, and maximize that kind of engagement, they will just keep feeding you content that gets you pigeonholed further and further into your beliefs and really helps you dig your heels into your beliefs on things. He was saying this isn’t just about a presidential election, something big like that. This can be like a local school board election. They could measure that you would get more and more polarized on your local school board election because the content that you kept being fed was sending you into a rabbit hole where you weren’t actually getting other information, other signals that might challenge those beliefs anymore.
Speaker 1 [00:14:32] Yeah, I mean, it’s just that basic thing of – and this has always been my intuition, and you write it really starkly in your reporting so I feel a little gratified – but this idea that what we engage with tends to be the things that make us angry or upset, that piss us off. I’m interested in labor issues. So I get mad every time I see an article that says Jeff Bezos – the National Labor Relations Board just said that Amazon fired a bunch of workers for organizing or something like that. Right? Those stories always make me angry and I always click on them. I click on them, I retweet them. I’m not on Facebook personally, but whatever platforms I’m on, I share them, etc. and they get me agitated. And so therefore I get more of those things which make me agitated, because that’s what the algorithm is designed to do. It’s giving me whatever I interact with; I interact with the things that make me mad, it gives me more of those things. And then, since I’m always going in the angry direction, I’m falling sort of down a rabbit hole. But this is what’s actually happening. Facebook themselves know about this dynamic. I’m not making this up.
Speaker 2 [00:15:41] Yes. This is actually happening. This is internal studies, internal research that has been done that has repeatedly confirmed that this is a thing that happens.
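The feedback loop described in the last few exchanges – a feed ranked by predicted engagement that learns from every click and then narrows what it shows – can be made concrete with a small, purely illustrative sketch. This is not Facebook’s actual ranking code; the topics, scores and update rule below are all invented for the example.

```python
# Toy simulation of an engagement-maximizing feed (illustrative only; not
# Facebook's real ranking system -- all topics, weights, and update rules
# are invented for this sketch).
import random
from collections import Counter

TOPICS = ["local_news", "sports", "school_board_outrage", "recipes"]

def rank_feed(candidate_posts, user_interest):
    # Score each post by predicted engagement: how much this user has
    # engaged with the topic before, times how provocative the post is.
    return sorted(
        candidate_posts,
        key=lambda post: user_interest[post["topic"]] * post["provocativeness"],
        reverse=True,
    )

def simulate(rounds=50, feed_size=5, seed=0):
    random.seed(seed)
    user_interest = {topic: 1.0 for topic in TOPICS}  # starts out balanced
    shown = Counter()
    for _ in range(rounds):
        candidates = [
            {"topic": random.choice(TOPICS), "provocativeness": random.uniform(0.1, 1.0)}
            for _ in range(30)
        ]
        feed = rank_feed(candidates, user_interest)[:feed_size]
        for post in feed:
            shown[post["topic"]] += 1
            # Provocative posts are more likely to be clicked or shared, and
            # each click feeds back into the interest profile -- the loop
            # that "pigeonholes" the user further into the same content.
            if random.random() < post["provocativeness"]:
                user_interest[post["topic"]] += 0.5
    return shown, user_interest

if __name__ == "__main__":
    shown, interest = simulate()
    print("posts shown per topic:", dict(shown))
    print("final interest profile:", {t: round(w, 1) for t, w in interest.items()})
```

Even though the simulated user starts with a perfectly balanced interest profile, the feed drifts toward whichever provocative topic happened to get clicked early, which is the pigeonholing dynamic being described.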
Speaker 1 [00:15:50] Wow, I mean, yeah, there’s this graph here in your piece that Mark Zuckerberg himself was using, which shows that engagement goes up as content gets closer and closer to what Facebook prohibits. It’s like a line graph that’s flat with the level of engagement, and then right before it becomes something that Facebook would ban (presumably because it’s Holocaust denial or something like that), the things that are most engaged with are the things that are almost Facebook-illegal, because they’re so inflammatory. That’s wild that they know that.
Speaker 2 [00:16:26] Yeah. And it’s interesting because when Mark actually published that chart, he published it in 2018 when he did a series of public Facebook posts that were about how he’s going to fix Facebook. And this particular installment was focused on: how am I going to use content moderation to fix Facebook? And he published this chart and basically said, ‘This is just human nature. People like engaging in outrageous stuff. So regardless of where we place this policy line, regardless of where we draw the line for what content is banned on the platform, it’s always going to show this swoop upwards of engagement as we approach that line.’ But what he doesn’t really acknowledge, and is the way that Facebook often talks about these things, is that there’s this implicit assumption that there’s no other way to design Facebook other than to maximize engagement. So they’re like, ‘Oh yeah, this is just a human nature problem and there’s nothing we can really do to solve it. So this is how it is.’ And it’s like, ‘Wait a minute, you were the one that chose to maximize engagement, which then is what incentivizes your algorithms to keep propagating this hateful, extremist, mis-info content to more and more people, because that’s the content that gets the most engagement.’ So they always – I don’t know – they just talk about these things in ways that shirk their own responsibility in the matter and pretend that there’s nothing that they can do about it.
Speaker 1 [00:18:02] Right. Oh, this is just what people like to do for eight hours a day to the exclusion of everything else because of the slot machine system that we’ve created specifically in order to keep them sitting in their chairs, on their phones, in front of their computers for that period of time. They just like doing this thing that we’ve designed to do exactly this. Yeah, it’s a little – this is kind of a fucked up point of view. You don’t need to editorialize, I’ll editorialize. You’re the journalist. You keep playing it close to the vest. But OK, so, but Facebook in their public announcements – in this blog post Zuckerberg made, in presumably him talking to Congress and all this sort of thing, they talk about taking this issue seriously; of misinformation, algorithmic polarization, these problems that are – and by the way I think all of us have experienced the negative effects of this in our own lives, all of us have a relative or a neighbor who has been sort of driven mad by the algorithm and is ingesting these weird ideas. This is an issue of national concern. So Facebook says that they want to address this. They then bring on an AI team or they say ‘we’re going to solve this with our AI team’ and they proceed to not solve it. What happened instead?
Speaker 2 [00:19:25] It’s a complicated question. So Facebook, just to take a step back, Facebook has three AI teams. I think part of Facebook’s – I don’t know if this is intentional or unintentional – but it seems to me that Facebook’s tactics around communicating about its company involve some organizational confusion where it can sort of just invoke ‘our AI team is working on this,’ but they won’t really specify which AI team, what they’re actually doing, how it relates to the other teams. But they have three AI teams, one that is a fundamental AI research lab that just does basic science and has absolutely nothing to do with the platform. It doesn’t actually work on any platform issues. They also have an applied research team: when the basic science research team serendipitously comes across some kind of AI technology that might be useful for Facebook, the applied team is supposed to pluck that out of the lab and put it into Facebook’s products. So the example that Facebook loves to give is that the fundamental lab had figured out some way to translate languages really well using AI, and now that is the main thing that Facebook uses to translate. When you’re scrolling through and your friend posts something in a different language and it says, ‘translate this text,’ that is the AI that’s powering that feature. But the responsible AI team is the third team. We’ve already talked about everything that it doesn’t do, but it basically is now specifically working on fairness, transparency and privacy, these three things that they’ve deemed as ‘responsible AI.’
Speaker 1 [00:21:18] Those are all good nouns, but they don’t seem to be the nouns we were talking about.
Speaker 2 [00:21:24] So what’s interesting is fairness and privacy are both things that there is impending regulation to address (and actually transparency as well). GDPR, which is like the European Union’s big regulation for how to think about regulating AI, how to think about regulating data systems – it kind of invokes these three ideas: the system should be fair, the system should be transparent, the system should be private. And so, it’s not actually a coincidence that Facebook is working on these three specific things. But the earliest thing that they started working on that I was kind of digging into was their fairness work, and fairness in the AI context refers to the fact that algorithms can unintentionally be discriminatory.
Speaker 1 [00:22:15] Yeah
Speaker 2 [00:22:16] And Facebook has actually been sued by the US government for its ad targeting algorithms perpetuating housing discrimination, where its ads will learn that they should only show houses for sale to white users and houses for rent to black users. And that is illegal and very clearly a violation of equal access to housing opportunities. These are very real problems and they’re legitimate problems that Facebook has. But it’s not an either-or situation where you can only work on one thing and not the other. You can definitely work on fairness issues and you can work on misinformation. And so, there’s a very clear reason why Facebook chooses to work on one versus the other. And that’s because they work on things that really support Facebook’s growth but they don’t work on things that undermine Facebook’s growth.
Speaker 1 [00:23:11] Right.
Speaker 2 [00:23:12] So what I kind of realized with this fairness stuff is they really started ramping up this work around the time when a Republican-led Congress was starting to escalate its rhetoric around tech giants having an anti-conservative bias. Trump was tweeting #stopthebias in the lead-up to the 2018 midterm elections and these tech companies were starting to get overwhelmed by attacks from the public (the conservative public/conservative user base) saying, ‘You’re censoring us, your ranking algorithms aren’t promoting our content. Your content moderation algorithms are deleting our content.’ And so then, like a week after Trump tweeted this #stopthebias, Mark Zuckerberg called a meeting with the head of the responsible AI team and was like, ‘We need to figure out this AI bias thing. We need to figure out how to get rid of any kind of bias in our content moderation algorithms.’ Facebook never admitted that Mark asked anything related to anti-conservative bias in that meeting, but for me the timing of the meeting was just so perfect, because it’s the first time that he ever met with the head of the responsible AI team, and this was six or seven months after it had been created. And after that, they just basically started really aggressively working on this thing, I imagine so that they could then definitively say, ‘We do not have anti-conservative bias because our algorithms are fair.’
Speaker 1 [00:24:55] But, OK, this is a long way from the issue at hand that, again, everyone is talking about, Congress is concerned about: misinformation and polarization from Facebook’s algorithm. They create an AI team that they say is going to work on that problem. And instead what that team works on is making sure that their AI is unbiased. And then specifically, it’s focusing not on the issue of racial bias, gender bias or anything else, but bias against conservatives on Facebook. We’re now very far away from the original idea. And in fact, doesn’t that goal actually conflict with the original goal of fighting misinformation? Because I’m not going to say that every conservative who is concerned about their views being suppressed is spreading misinformation. But I know for a fact that some people who spread misinformation on social media, when they are stopped from doing that, they say, well, there’s a bias against conservative speech. It’s like, no, no, no. You were spreading misinformation about the election or about QAnon or – that’s what the QAnon people say when they are kicked off a platform. They say, ‘This is an example of anti-conservative bias.’ So it seems like this is now Facebook working on the opposite of what the problem was.
Speaker 2 [00:26:18] Yeah,
Speaker 1 [00:26:20] (laughing) OK
Speaker 2 [00:26:20] Pretty much, yeah. So the funny thing is Facebook has never actually really said that the responsible AI team specifically is working on misinformation. It has said, ‘We are building AI to work on this stuff.’ And that kind of goes back to what I was saying, that it doesn’t really specify which team is working on what, and then you just automatically assume the responsible AI team is working on it because the name is responsible AI.
Speaker 1 [00:26:49] Yeah
Speaker 2 [00:26:50] But there is another team – it’s the applied research team – that is working on catching misinformation, and we can get into that later. But then the responsible AI team, yeah, they are working on bias, and from the upper levels of management it was, based off of my reporting, I believe, motivated by this anti-conservative bias concern. But for the people on the team, I think they also saw an opportunity of, well, if we build tools to get rid of anti-conservative bias, then we might also – it’s the same tools to then uproot, to try and get rid of racial bias, try and get rid of gender bias. So they sort of had good intentions of, ‘Well, let’s just hitch on to the ride and try and do something good now that we have the leadership buy-in to do this.’ But then the issue is what you get at, where there are legitimate ways that this notion of fairness, or this pursuit of fairness for growth or for ridding the platform of anti-conservative bias, will then also undermine efforts to clean up misinformation on the platform. So there were other parts of the company outside of the responsible AI team that, sort of around the same time that the responsible AI team was working on this, were already using the idea of fairness or the idea of anti-conservative bias to stop efforts to use the AI algorithms to get rid of misinformation. So there’s this policy team led by Joel Kaplan. And there was this one engineer – or one researcher – who described to me how they would work on developing these AI models and these algorithms for catching misinformation, like anti-vax misinformation. They would test it out and it would work really well. It measurably reduced the amount of anti-vax misinformation that was on the platform. They would then go to deploy it and then the policy team would say, ‘Wait a minute, this specific algorithm is affecting our conservative users more than liberal users and that is anti-conservative bias. So you need to change the algorithm so that it affects both groups equally, so that it’s a fair algorithm.’ And then the researcher was like, ‘That just made the algorithm meaningless. So we did all this work and it doesn’t…’ It results in nothing.
Speaker 1 [00:29:26] It means it does nothing. If it treats every single person exactly equally on the platform, well, the whole point of it is to suppress misinformation and some people spread more misinformation than others. If it doesn’t penalize users who spread more misinformation because it’s trying to ‘be unbiased,’ it is going to literally do nothing. It’s like giving every student in the class a C rather than giving the better ones an A. It’s the participation trophy of algorithms is what it is. How about that, to take a popular conservative talking point? It goes beyond what I was saying before. You’re saying that they literally created a useful bit of AI that started weeding out dangerous misinformation (medical misinformation, for example, about vaccines) and then a different unit in Facebook that was concerned about the reaction in the conservative community said, ‘Let’s not use this algorithm, let’s, you know – cancel it, because we’re worried about how conservatives will react.’ That’s what happened at Facebook.
Speaker 2 [00:30:33] Yes.
Speaker 1 [00:30:34] Holy shit.
Speaker 2 [00:30:34] And this is just one example. There were many, many, many examples. And this was such a huge problem that the team that worked on creating these algorithms had serious retention issues because their work was never being used. They would do all this work, put all this investment in, and then it would be scrapped because it was demonstrating ‘anti-conservative bias.’ Which, by the way – there have been studies since then that I’ve looked into, asking ‘Does Facebook actually have anti-conservative bias?’ And from the assessment of what kind of content thrives on Facebook, there’s no actual evidence to suggest that there is a suppression, a systematic suppression, of conservative content. Conservative content actually thrives more on Facebook than liberal content.
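The ‘equal impact’ objection the researcher described can also be sketched in miniature. In this toy example (invented scores and group labels, not Facebook’s real models or policy process), a misinformation filter that works is forced to affect two groups of users at roughly the same rate, and the simplest way to satisfy that constraint is to weaken the filter until it catches very little.

```python
# Illustrative sketch only: a toy "misinformation filter" and what an
# equal-impact constraint does to it. Scores (0-99) and group labels are invented.

posts = (
    [("group_a", s) for s in range(0, 100, 2)]   # mostly benign posters
    + [("group_b", s) for s in range(40, 100)]   # posters who share more misinfo
)

def removal_rate(group, threshold):
    # Fraction of the group's posts removed at a given score threshold.
    scores = [s for g, s in posts if g == group]
    return sum(s >= threshold for s in scores) / len(scores)

def equalized_threshold(max_gap=0.05, start=80):
    # Weaken the filter (raise the threshold) until both groups are affected
    # roughly "equally" -- the constraint described above.
    threshold = start
    while abs(removal_rate("group_a", threshold) - removal_rate("group_b", threshold)) > max_gap:
        threshold += 1
        if threshold > 100:
            break
    return threshold

if __name__ == "__main__":
    for t in (80, equalized_threshold()):
        print(
            f"threshold={t}  "
            f"group_a removed={removal_rate('group_a', t):.0%}  "
            f"group_b removed={removal_rate('group_b', t):.0%}"
        )
```

At the original threshold the filter removes far more from the group that actually posts more misinformation; once the threshold is raised to equalize the removal rates, it barely removes anything from either group, which is the ‘meaningless algorithm’ outcome the researcher described.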
Speaker 1 [00:31:25] The top 10 Facebook publishers are, it’s like, Ben Shapiro, Dan Bongino or whatever his name is, all the – you know, Fox News does extremely well. Those are the most successful pieces of information. It’s just that the people who are publishing them are also constantly claiming that they are – ‘Help! Help! I’m being oppressed!’ Facebook seems very reactive to that – perhaps because, again, maybe this is me editorializing, perhaps because that is where they get so much of their traffic from, from that audience.
Speaker 2 [00:32:03] Yeah, completely. I think there’s a very strong reason to believe that that’s relevant. And also the fact that it took a lot of very wishy washy stances on moderating away certain types of misinformation or hate speech when Trump was in office. And then they made their biggest content moderation decision when it became clear that Trump was leaving office (a.k.a. removing Trump from the platform). So there is a lot of evidence that Facebook has sort of played the stance of just keeping the people in power happy so that they don’t make themselves vulnerable to regulation that would hinder its growth.
Speaker 1 [00:32:46] Right. OK, well, we’ll finally remove the president once the president’s no longer in power because now he doesn’t have any power to actually penalize us. Now there’s been a regime change. But maybe if he wins again, if he runs again and wins, ‘Oh, back on the platform, he goes’ because they’ll be obeying power once again.
Speaker 2 [00:33:11] Yeah.
Speaker 1 [00:33:12] Tell me about the other piece of it, though, where in addition to Facebook being focused so much on anti-conservative bias (supposed anti-conservative bias) that they kneecap their own effort to make sure their algorithms aren’t polarizing people and spreading misinformation – that’s one piece, but what seems to be the even bigger piece is Facebook’s addiction to growth that you write about, that they constantly want to grow. They constantly want more misinformation. Actually, you know what? We have to take a really short break. So I want you to tell me about this right after we get back. We’ll be right back with more Karen Hao. OK, we’re back with Karen Hao. So, before I so elegantly went to break in a way that was completely preplanned and not at all chaotic, I was asking you about how Facebook’s addiction to growth gets in the way of them fighting algorithmic misinformation and polarization. Can you tell me about that?
Speaker 2 [00:34:14] So going back to this chart that Mark Zuckerberg published where he was showing that things that are more likely to be misinformation are then more likely to get engagement – that chart. There’s a problem, then, of incentives where if we’re telling Facebook to clean up the misinformation on their platform but that’s the content that gets the most engagement and engagement is what helps Facebook grow, then maybe it should just not clean up the misinformation.
Speaker 1 [00:34:47] Yeah.
Speaker 2 [00:34:49] And so that’s sort of – there’s this pervasive issue where a lot of employees at Facebook – it’s not like people are evil at Facebook, it’s not like there are people intentionally being like ‘Muahaha. We’re destroying society.’ Facebook is a very metrics-driven company. And there are a lot of employees that are doing their small part of the puzzle in this giant corporation, and the goals of how they’re rewarded, how they’re paid, how they’re promoted, all of those things are tied to engagement metrics or business metrics that the company maintains. And so when you have each employee working on these, trying to optimize for the specific metric that they’ve been told will help them get promoted, it sort of creates this mass emergent effect across the company of the company doing everything – pursuing growth at all costs. And so there are very clear incentives then for people who work on misinformation to maybe not do it sometimes, or people who want to genuinely do good on the platform and fix some of these issues, when they’re told by leadership, ‘That’s not really a good project for you to pursue,’ it’s very reasonable that then they would be like, ‘OK, well, I’m not going to keep bashing my head on something that leadership has actively told me not to pursue. I’m going to switch to working on something else so that I can achieve my quarterly goals and get promoted.’ So, yeah, there’s this whole culture of growth. I think it causes a lot of people at the company to just end up working on things that are not actually core to the issues of the platform but on more tangential things that the leadership directs them to do.
Speaker 1 [00:36:46] Yeah, the old adage is that you get what you measure and Facebook measures growth above all else and engagement as a way to get to that growth. And they don’t really seem to measure algorithmic misinformation or polarization. They’re measuring those things to a certain extent. But if their number one priority is going to continue to be growth and then someone is working on, ‘OK, I’m working on a project that’s going to stamp out misinformation,’ but then that project is also reducing growth a little bit or reducing engagement a little bit, then that is not going to be prioritized. They’re gonna say ‘I want to try – you know, that’s really interesting, but maybe don’t work on that.’ Is that sort of what you’re saying?
Speaker 2 [00:37:29] Yeah. Yeah. So to be more concrete about how this happens on a day-to-day level, engineers at Facebook have sort of the ability to create algorithms that they deploy onto the platform for various things, whether that’s cleaning up misinformation or changing the way that content is ranked in your news feed or targeting you with ads. There are lots of engineers – like twenty-five percent of engineers at least – who all have the ability to train these algorithms, deploy them and then tweak and keep optimizing the way that the platform works. And there’s a very rigorous process for evaluating these algorithms and which algorithms actually make it into the live production of the platform. And the primary evaluation is: how does it actually affect the company’s top-line engagement metrics? How does it affect likes, shares, comments and other things? And the way that they do that is they will train an algorithm. They’ll then test it on a subset of users on Facebook and then use that experiment to measure whether or not those particular users then had reduced engagement. And if there’s reduced engagement then, more often than not, the algorithm is completely discarded. And sometimes there will be discussions where, ‘OK, it reduced engagement. But it did really, really well in reducing misinformation. So that tradeoff is a good tradeoff and we’re going to make that tradeoff.’ But when the algorithm does that it’s no longer this automated process of, ‘OK, check, we’re going to deploy it.’ There’s actually a conversation with multiple stakeholders in different parts of the organization that then have to hash out whether or not this is worth it. And then different people will have different opinions. And most of the time the conclusion is ‘it’s not worth it.’ And then the team has to go back to the drawing board and train a new algorithm that will try to achieve all the same things as the first algorithm without actually depressing the engagement.
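A minimal sketch of the launch gate being described, with made-up metric names, numbers and thresholds (this is not Facebook’s actual review tooling): a candidate ranking change is tested on a holdout of users, anything that helps engagement ships automatically, anything that hurts it beyond a small budget is sent back, and only borderline cases trigger the cross-team tradeoff conversation.

```python
# Toy version of an engagement-gated launch review (invented metrics,
# thresholds, and experiment names; not Facebook's actual process).
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    name: str
    engagement_delta: float          # relative change in likes/shares/comments
    misinfo_prevalence_delta: float  # relative change in measured misinfo views

def review(result: ExperimentResult, max_engagement_loss: float = -0.005) -> str:
    """Decide what happens to a candidate algorithm after its holdout test."""
    if result.engagement_delta >= 0:
        return "ship"  # helps (or doesn't hurt) engagement: approved
    if result.engagement_delta < max_engagement_loss:
        # Hurts engagement beyond the tolerated budget: in the dynamic
        # described above, the change is usually discarded, even if it
        # measurably reduced misinformation.
        return "discard (retrain without hurting engagement)"
    # Small engagement loss but a real integrity win: escalate to a
    # cross-team discussion about whether the tradeoff is "worth it."
    return "escalate for tradeoff review"

if __name__ == "__main__":
    candidates = [
        ExperimentResult("rank_tweak_more_video", +0.020, +0.010),
        ExperimentResult("downrank_anti_vax_misinfo", -0.030, -0.400),
        ExperimentResult("mild_misinfo_demotion", -0.003, -0.150),
    ]
    for c in candidates:
        print(f"{c.name:30s} -> {review(c)}")
```

Under a gate like this, the hypothetical ‘downrank_anti_vax_misinfo’ change is discarded despite a large reduction in misinformation views, because its engagement cost exceeds the budget – the incentive structure being described.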
Speaker 1 [00:39:37] I mean, the picture that you’re painting is that the algorithms can’t be the solution to this problem because the problem at root is that the same thing that we are begging Facebook to address is the exact thing that their business model produces. As we’ve said, the chart that Zuckerberg showed everybody shows us that the exact shit that we want to stop is what brings them the most engagement and growth. And so it seems like to an extent it is a zero sum game, that by reducing the stuff that we don’t want to have, the misinformation, the polarization, we’re going to be reducing their engagement. And they have specifically constructed a business model that relies on maximizing engagement. And so to a certain extent are we asking a crack dealer to stop selling crack and saying, ‘Hey, the crack is killing people.’ The crack dealer’s like, ‘Oh, I agree. I agree. I’ve got to get a handle on that.’ And then I’m like, ‘well, I’ll put a task force together and see if I can study it,’ you know? But at the end of the day, you need them to stop selling crack and they’re not going to. I mean, sorry, I don’t want to bring the language of the war on drugs into this. I now feel a little bit conflicted about that. But you see the point I’m making.
Speaker 2 [00:40:53] Yeah, it’s a good analogy. I think what I sort of realized in the process of reporting this particular story is that self-regulation just doesn’t work, because – I think the way that people often cover Facebook is like Mark just gets to make whatever decisions he wants and then the company moves the way that he moves, which is true to a certain extent. But also, Facebook exists within its own system, which is capitalism. And the way that capitalism incentivizes companies to operate is very much to continue growing and to continue pursuing profit. So, if we only have certain incentives that make Facebook do certain things and we don’t have counter-incentives from regulatory bodies to then give Facebook a different signal for what they should be doing, then it’s just going to keep chasing growth and chasing profit. That’s – I mean, yeah, there’s not anything.
Speaker 1 [00:41:52] Yeah.
Speaker 2 [00:41:53] Like, what else would they do?
Speaker 1 [00:41:54] But yeah, we need to do it. They’re not going to do it themselves. We need to, as a society make some rules around what this thing is, this new pernicious thing that they’ve created. But is that not why Facebook is now trying to change the subject? They’re saying, ‘All right, there’s going to be regulation. We see it on the horizon. It’s happened in Europe, around privacy. What if it happens around misinformation, too? So let’s make a big deal about how we’re doing something about it. But, shift the conversation so we’re not actually talking about misinformation. We’re working on AI bias which is a comfortable topic that there’s been a lot written about that conservatives are mad about, too.’ And maybe they can just direct everybody, ‘Oh, look what we’re doing with AI bias.’ That can avoid regulation on the issue that is the real issue. But if we addressed it, it would actually reduce their growth and their profits.
Speaker 2 [00:42:50] Exactly. Yeah. And I think Facebook does this a lot. They kind of redirect the public’s attention and talk about things in a way that makes very simple problems sound very complicated. When I was writing this piece my editor in chief made this really good point. I was like, ‘Oh, my God, this is so convoluted. I’m trying to explain to people what AI bias is but then how it’s different from misinformation and blah, blah, blah.’ And he was like, ‘Actually, it’s quite simple and the only reason why it feels complicated is because Facebook is trying to overcomplicate it.’ Facebook has just had certain problems for years now that people have been criticizing it about and it’s not doing anything about them. That’s very simple.
Speaker 1 [00:43:37] Yeah. If it’s not as difficult as Facebook is making it seem, do you feel that if they really wanted to, they could address misinformation on the platform? Because there is the issue of, if they’re trying to do it with AI in the first place, can’t misinformation peddlers just get around the AI? Learn, ‘Oh, if instead of QAnon I say p-anon, now we’ll get ahead of the algorithm for a little bit’ or whatever it is. Is there a way to moderate their way out of the problem with AI or not, or is there a more fundamental problem at play here?
Speaker 2 [00:44:15] I think, to answer the first question, could Facebook actually fix this problem? Yes, I absolutely think that they could. Does their current approach of using AI to try and moderate away the problem actually work? No, I don’t think it ever will. And that’s just because of the fundamental limitations of AI itself. You would need to have a nuanced understanding of human language in order to effectively moderate misinfo. And if you were to survey AI experts about this, the average amount of time that they believe it’ll take for us to get to AI that actually has nuanced human understanding is upwards of decades. I don’t think you would have time
Speaker 1 [00:45:02] You would need not just to understand how human language works but also say you’re trying to make any AI that’s going to stamp out vaccine misinformation. Well it needs to not only understand human language, it needs to understand how vaccines work so that it can say, ‘Oh, vaccines can’t actually change your DNA because it’s an RNA vaccine and here’s how RNA works and I’ve read all the papers on this and I know that this is not true and that this is the new tactic,’ you know.
Speaker 2 [00:45:29] Yeah. And it needs to understand culture and history because people use cultural and historical references all the time in their language that then insinuate certain things that are not explicitly said. It needs to understand sarcasm, which from an AI’s perspective, it’s like, ‘What do you mean that you’re saying literally the opposite of what you mean?’ How do you actually – it’s just, that’s not possible. But I think the way that Facebook would address this issue – first of all, I sort of increasingly started to believe that it’s just not possible for it to address it at the current scale at which the company exists. But also it’s the business model. It’s the fundamental assumption that they need to keep maximizing engagement that is the root of these problems.
Speaker 1 [00:46:16] Yeah.
Speaker 2 [00:46:17] And if it were to change that assumption and change the way that it recommends content on the platform whether that’s the post in your news feed or the ads that you click or the groups that you’re recommended to join, all of those recommendation systems. If the fundamental objective of those recommendation algorithms was not engagement but something else then they would significantly reduce a lot of the hateful content and misinformation content spread on the platform.
Speaker 1 [00:46:50] Yeah but they’re not about to do that because they’re going to, I mean that’s what they’re focused on. Is there a point at which they could ever not be focused on engagement and growth above all else? I mean, they already have, what, a good third to half of people in the world on Facebook?
Speaker 2 [00:47:09] Yeah, if they stopped focusing on that I think the company would sort of cease to exist. It would just, yeah, I don’t know. Or it would be smaller? I don’t know. It’s like, how would Facebook actually work if it didn’t focus on that? Who knows. But yeah, it’d probably be a lot healthier for everyone.
Speaker 1 [00:47:29] So you feel that what we need is some outside rules of the road, like, regulation of some kind – or that is the way to address the problem to some degree.
Speaker 2 [00:47:41] Yeah, I do think that there needs to be external regulation of this issue. What that regulation might look like is definitely outside of my expertise. But I think I’m optimistic that it seems like there’s now enough political will on both sides of the aisle to actually think about how do we (whether it’s antitrust law, whether it’s rewriting Section 230) actually regulate Facebook in a way that will allow the company to still exist and provide us the services that we enjoy without all of the bad stuff?
Speaker 1 [00:48:21] Yeah. It’s endlessly fascinating to me. I grew up in the early Internet boom, I was on the Internet starting like 1996, and ‘Oh my God, there’s so much possibility, anything can happen on here.’ And I came to realize, ‘Oh, that feeling was just because it was – it’s an entirely new area and there were no laws about anything.’ And now we’ve been doing it for thirty years and we’re starting to realize, ‘Oh, looks like we kind of need some laws,’ just like you do with anything. We invented railroads and after a while we need some laws about the railroads to make sure shit doesn’t go really bad. We’re sort of in the same place again. And to a certain extent it seems like Facebook and these other companies are trying to pretend that we’re not and trying to stave off the inevitable as long as possible. ‘No, no, no, we’ll do it, we’ll fix that, we’ll fix it.’ But unless they actually do (which they seem incapable of), yeah, we’re going to need to have a conversation about it and figure out, OK, we can’t have people trying to undermine our elections. We can’t have a company whose entire business model mainlines the distribution of misinformation about public health and democracy, we can’t have that.
Speaker 2 [00:49:44] Yeah, yeah, I think the point that you made about when the Internet first started, people were like, ‘This seems fun’ – that’s actually so true, because at the time the people who were founding the Internet, their philosophy was that the virtual world existed separate from society and therefore there didn’t need to be rules of the road. It’s a virtual environment, it’s a sandbox. Whatever happens in this universe is not going to affect the physical world. And obviously that’s become increasingly untrue. We’ve realized that that’s just a faulty assumption and that the virtual stuff that happens translates into physical-world things like a genocide or like the Capitol riots. And those are very legitimate reasons now that I think lawmakers are finally like – it’s finally a concrete enough thing that lawmakers are like, ‘Oh, yes, this is territory that we need to be regulating.’
Speaker 1 [00:50:41] Yeah, and we have a culture and a constitution of free speech in America. We need to not be interfering with that, but there needs to be a balance here: making sure that we’re not programmatically causing bad things to happen, that people can say their piece but we’re not pushing harmful misinformation to people. Did you get a sense in your reporting that people at Facebook actually care about this issue? Do you feel Mark Zuckerberg cares about it? I think that’s probably a separate question. Do you feel he cares about it? And do you feel that there are folks working on this problem within Facebook who are like, ‘God damn it, this is a real problem, but my hands are being tied here’?
Speaker 2 [00:51:37] Yes, I think there are a lot of people that really care and whose hands are tied. It’s interesting because I think there are sort of three profiles that I’ve found of the type of person that works at Facebook – which is, I think, an endlessly fascinating question: why do people work at Facebook in the first place? And one of the categories is people who genuinely believe that change can happen more effectively from the inside. And there are a lot of people at Facebook that very much believe that and are working really hard to try and change things. But then many of them ultimately leave because they become cynical and realize that they’re not actually changing things from the inside. With the question of whether Mark cares about this, I don’t think he doesn’t care about this. But the way it’s been described to me is that Mark is just in general very libertarian and is much more nervous about Facebook being ‘an arbiter of truth’ than about the fact that there’s rampant misinformation. I think it’s more terrifying to him to give Facebook the power to arbitrate truth than to just leave it in a bad state. And so it’s not that – I don’t think he actively doesn’t care, it’s just that his value system is sort of different from many other people’s in society.
Speaker 1 [00:53:09] But in my view that’s an abdication, right? That these companies (Facebook more than any other but also Twitter and these other companies) they have a belief that is incorrect; that they are not media companies. They see themselves as platforms where anybody can post anything and like, ‘Oh, no, you can say what you want to say and then people will see it and we’re just the pipes.’ But they’re not, they exert massive influence. In fact, they are the only ones who exert any influence on what people see. I can post whatever I want on Facebook. The only thing that determines who sees it is Facebook’s algorithm and that is not in substance different from NBC in 1970 deciding who sees what on television. The difference between NBC in 1970 and Facebook today is that NBC, the people who ran it, believed that they had influence over what the public saw and they gave a shit about it. And part of the reason they gave a shit was the government was like, ‘You’re going to lose your license to broadcast unless you do this in a responsible manner.’ There are a lot of problems in the way they did that gatekeeping too back then, there are a lot of problems with the media environment then. But that is the analogous position that Facebook is in today. But it’s on a much – like a ten times bigger scale – because they’re global and people are spending a lot more time on it. My view of all these companies got a lot more simple once I realized, ‘Oh, YouTube, Twitter, Facebook. These are media companies but the difference between them is they get all the media for free. People just post it.’ They don’t have to pay anybody, right?
Speaker 2 [00:54:46] Yes.
Speaker 1 [00:54:47] They just get it all for free. But they’re acting like that means they don’t distribute it to the public and therefore aren’t responsible for it. They’re like, ‘Oh, no, the person who posted it did.’ It’s like a fundamental misunderstanding of what the fuck it is they’re doing. So, yes, I’m on a rant here. What sort of reaction did you get to this piece? This was a fair bit of a blockbuster, I feel like, when it came out. Did you get a reaction from Facebook to the piece? I’m curious.
Speaker 2 [00:55:20] I did. So, the CTO of Facebook started responding to me on Twitter.
Speaker 1 [00:55:27] Really?
Speaker 2 [00:55:28] Yeah, and his first response, which I thought was really funny, was ‘I’m afraid that this piece will convince people that AI bias is not an issue and deter them from working on it.’ And there was this other Twitter user that then later commented, “It’s really weird that your piece calls out the fact that Facebook is using AI bias as a fig leaf to cover up the fact that they’re not doing anything else, and then in response to that, the CTO was like, ‘But we’re doing AI bias work!’” And I was like, ‘Yes, correct. Very weird.’ But speaking with some former employees at Facebook, executives only engage on things when they feel genuinely threatened. So it was basically a confirmation to me that, A: I’m on to something, because the CTO actually felt the need to respond. And B: he wasn’t able to say anything that undermined my reporting. And so it kind of just reinforced the fact that it is true.
Speaker 1 [00:56:33] Yeah, that’s a weird trend right now in the covering of these companies. Same thing happened to Amazon where executives start replying to people on Twitter and saying, ‘Well, that’s not true.’ And then it’s quickly shown to be true (the peeing in bottles thing on Amazon).
Speaker 2 [00:56:53] Yeah
Speaker 1 [00:56:55] Someone needs to tell these executives; stay out of your mentions. You don’t need to get into it on Twitter of all places. You guys, I thought they were- Why didn’t they Facebook you about it? Why did they tweet at you about it?
Speaker 2 [00:57:10] Yeah. It’s also interesting, I think – so the CTO also did an interview with Casey Newton afterwards to try and present their narrative in a more formalized, respected, journalistic way, and the narrative that they then painted there (or the CTO painted there) was, ‘Oh, I was so upset at this piece because if you attack any team at Facebook, please don’t make it the responsible AI team.’ And it was a complete mischaracterization of my piece as well, where I was like, I actually did not attack this team at all. I talked about how it was composed of people that are genuinely trying to do the right thing but whose hands are tied. So, yeah, it’s been interesting to just see, in the aftermath, the way that Facebook’s PR machine works, which is sort of part of my story – that they have this very carefully crafted PR machine that tries to mislead the public. And it was just another demonstration of that.
Speaker 1 [00:58:17] They were trying to sell you a specific story of what it is that they were doing. Of ‘We are taking this problem seriously and the problem is AI bias and look at what a great job we’re doing.’ And you saw through that and told an actual story (did your job as a journalist) told an actual story about what’s going on there and they weren’t happy about that, is what it sounds like.
Speaker 2 [00:58:40] They were very unhappy. They were very unhappy and yeah, it’s interesting. I had a lot of other journalists reach out to me afterwards who had also covered Facebook and sort of faced these things. And they were like, ‘Yeah, this is just a pattern.’ Facebook will give you lots of access and then be extremely displeased with you when you don’t actually write their exact narrative down on paper. And I don’t know if that’s because Facebook is aware that it’s doing that and it’s just part of their PR tactics, or if they fundamentally misunderstand what independent journalism means. But, yeah, it’s just the nature of covering that company.
Speaker 1 [00:59:21] It reminds me – it’s very funny that this memory flashed back into my head, but there’s a scene from a Saved by the Bell episode that always stuck with me – this is completely random. But there’s this scene where Jessie Spano is interviewing Principal Belding for the newspaper and he thinks it’s going to be a really nice interview and she goes, like, ‘What happened to the missing petty cash that was siphoned from the school budget?’ And his face gets really sad and he says, ‘I thought this interview was going to be about my pet turtle, Pooky.’ And for some reason that stuck with me. That’s what happened. They were like, ‘We thought it was going to be about AI bias. We didn’t know you were going to talk about the real problem at Facebook. We thought it was going to be a nice interview.’ In terms of how this issue and this specific story that you wrote about Facebook plays into the larger questions among other Internet companies about AI in general, how do you feel about that? Are there larger issues that this points to?
Speaker 2 [01:00:23] Yeah, there’s been this ongoing conversation within the AI community (which is the community that I cover and sort of live and breathe) about the fact that we’re building this very powerful technology and we’re just beginning to see some really dire, unintended consequences of it. And yet this space, and our understanding of this technology, is very dominated by the tech giants, because in order to even build this technology you need a lot of resources (both a ton of cash to actually hire the people who have the expertise to build it, as well as a ton of computational power; massive computers, massive servers that can actually crunch the data to then train these algorithms). And right before my piece published, in December of last year, there was this whole fallout around Google and their AI efforts and their equivalent of the responsible AI team, which is called the ethical AI team. There was basically a lot of reporting that came out that Google actively censors their ethical AI team’s work, and other researchers’ work at the company, when it criticizes the technology that Google is building. So when my piece came out, there was sort of this additional evidence that yet another tech giant is actively trying to distort our understanding of this technology and what it means to build it ethically, what it means to build it responsibly. And even when there are good, well-intentioned people at these organizations who are leading these efforts, they either get fired or they’re completely hamstrung and can’t make the progress that they need to make. So to me, it demonstrates, for the scientific community and for regular people (whose lives algorithms now affect in a lot of ways), that there’s a little bit of this scary thing happening behind the scenes: we don’t actually have full transparency into the way that this technology is going to shape us, and the way that it could harm us, because of how carefully and closely these companies keep the research, and the communication about this research.
Speaker 1 [01:03:00] With AI, the nature of the research, the nature of what it produces, is often algorithms whose results surprise even the people who made them, because of how opaque AI can be. You train an algorithm and then you find out what it does. So there’s that level of opacity, but then there’s the fact that all the places that are working on AI are places like Google, Facebook, presumably Apple, Microsoft, the Department of Defense.
Speaker 2 [01:03:34] Yeah, yeah, yeah.
Speaker 1 [01:03:35] These massive organizations are working on AI for a very specific purpose: to maximize ad revenue, or to kill people better. I’m sure there’s work being done at universities. But, you know, the fact is that Tesla is another example, where they talk a lot about ‘Here’s what the AI does,’ but the way that they present what the AI does is very much at odds with its actual purpose and its actual capabilities. They’ve promoted this idea that fully self-driving cars are right around the corner, and then as soon as you look at what the cars actually do, and what the technology they’re developing actually does, there’s a huge gap there.
Speaker 2 [01:04:20] Yeah
Speaker 1 [01:04:22] They’re promoting a certain idea to the public: ‘Here’s what you should think of when you’re thinking of AI.’ Elon Musk saying, ‘Oh, we should be worried about killer robots and I’ll make sure we don’t have them.’ But the actual development being done on these things is behind the most closed of all closed doors. It’s being done by a couple of massive companies and organizations that have a very specific interest at heart, and it’s not necessarily society’s.
Speaker 2 [01:04:54] Yeah, and it’s not just misinforming the public; it also misleads policymakers who are actually trying to figure out how to regulate this technology, because there are very few people they can go to who are truly independent researchers, not paid or employed by tech companies. Even in academia there is so much influence from these tech giants (Google, Facebook, Apple, Microsoft, IBM). Because this technology requires so much money and so many resources to develop, universities cannot actually fund it themselves, so they have to seek funding from other places, a.k.a. the tech giants. So when policymakers want to get a good understanding of what this technology actually is and what we should be concerned about, so that we can literally codify guardrail laws to prevent those harms, who are they talking to? It’s really hard for them to talk to someone who doesn’t have that conflict of interest.
Speaker 1 [01:05:58] Yeah. Well, what would you like to see happen around these issues, vis-à-vis Facebook or the broader AI culture and community in general? I know you said it’s above your pay grade to come up with what the actual policy would be, the federal policy that we would hope Congress would make (it’s above my pay grade, too), but what would you like to see happen in the next year or two, on a lower level, that would just improve a couple of these problems? Do you have any wishes or hopes for this? This is how I like to end the interview: by coming up with something that can be done.
Speaker 2 [01:06:36] OK, so this is a little bit far-flung from our conversation, but the thing that I would love to happen in the next year is for the Biden administration to put up funding for AI research through the National Science Foundation, through the arm of the government that is focused on basic science research and not defense and not other things. Just put up money that doesn’t have strings attached, that’s really focused on actually understanding the technology and its effects, so that researchers can be independent and independently scrutinize this stuff without working for tech companies. And what I kind of assume will happen, based off of my general reporting, is that our understanding of AI will start to shift pretty dramatically, because we will start to have more people, more papers being produced, more research being done that will actually show what this technology is and what we need to be concerned about. That then provides the scientific foundation for addressing all these problems that we’re talking about, regardless of whether they are or aren’t at tech companies.
Speaker 1 [01:07:54] Yeah, that would be the government taking the role in scientific progress that it has traditionally taken of really studying the issue. At the NSF, politicians make the funding decisions, but it has scientific leadership who could be setting priorities. That would be a huge improvement. Absolutely. Well, my God, thank you so much for coming on the show to talk to us about this, and for doing the independent reporting that pissed Facebook off. If you made the CTO of Facebook a little uncomfortable, I think that’s probably a good day, and we can thank you for doing a service. I think at the very least, make them sweat. You want to make them sweat a little bit.
Speaker 2 [01:08:37] I think so too.
Speaker 1 [01:08:38] So I’m thankful to you for doing that and for coming on the show to talk to us about this, and we’d love to have you back next time you blow the lid off of something.
Speaker 2 [01:08:45] Thank you so much Adam. It’s been great talking to you.
Speaker 1 [01:08:49] Well, thank you once again to Karen Hao for coming on the show. If you enjoyed that interview as much as I did, hey, please leave us a rating or review wherever you subscribe, or go to factuallypod.com/books to check out the books written by our past guests. Purchase one or two. If you do, you’ll be supporting the show and you’ll be supporting your local bookstore. I want to thank our producers Chelsea Jacobson and Sam Roudman, Andrew Carson our engineer, Andrew W.K. for our theme song, and the fine folks at Falcon Northwest for building the incredible custom gaming PC that I’m recording this very episode for you on. You can find me at @AdamConover wherever you get your social media. If you have a suggestion of a topic you’d like to hear on the show, shoot me an email at factually@adamconover.net. I do read your emails, and it is one of the joys of my day. Until next week, we’ll see you on Factually. Thank you so much for listening.