Between Two Codes

Laura Donohue: Social Media, Disinformation, & National Security

Episode Summary

In our first episode we talk with Professor Laura Donohue about the effects of social media on the First Amendment, the national discourse, and national security. We discuss the history of misinformation, its weaponization, and how the modern internet-driven era is different. From fabricated reports of people living on the moon to Russian active measures campaigns, Donohue does a deep dive into the history and the current state of affairs.

Episode Transcription

Laura Donohue (00:01):

But it creates this false reality as though everybody believes that when in fact it is this socially constructed, augmented world.

 

 

Ian Carrico (00:17):

Hello, and welcome to Between Two Codes, a podcast where law students talk to the experts at Georgetown Law about the intersection of law and technology. I'm your host for today's episode, Ian Carrico, and we're glad to have you join us.

 

Between Two Codes is brought to you by the Institute for Technology Law and Policy at Georgetown University Law Center. The Institute is training the next generation of lawyers and lawmakers with deep expertise in technology, law, and policy. You can find the Institute's many events at https://georgetowntech.org. Before we get started, we'd love for you all to subscribe and rate us on your podcast app of choice, and to follow us at Between Two Codes on Twitter and Instagram. We have a lot more great episodes coming down the line, and you will not want to miss them.

 

This week on Between Two Codes we have Professor Laura Donohue, a professor of law at Georgetown Law, director of Georgetown's Center on National Security and the Law, and director of the Center on Privacy and Technology. She has worked extensively on issues of national security, constitutional law, and emerging technologies. Today, she is here to talk to us about social media, misinformation, and constitutional law. And we're going to just dive right in.

 

Ian Carrico (01:27):

 

Professor Donohue, we talk a lot about the current climate on social media, but when we think back to our founding fathers, there was immense partisan rancor throughout the press. What is different about this moment compared to then?

 

Laura Donohue (01:39):

Back then, everybody knew what was happening in everybody else's home.

 

David Brin wrote this great book called The Transparent Society, where he says that previously we knew everything happening in everybody's homes. Then we went through industrialization, people moved to cities, and there was this anonymity. But now we're going back to that pre-industrial society, with the surveillance and the information available about individuals. The only difference is how disproportionate it is. You can't see into the government, but they can see into you. And as the Cambridge Analytica episode showed, you can't see into the companies, but they can see into you too. It's actually the inequality in information availability that is causing much of the social trouble that we're seeing now.

 

It's the lack of transparency around issues like what algorithms are being applied to what you see in your social media feed. What is creating this reality around you? In my work, I'm very concerned that we live in an augmented reality. We just don't know it. We think it's reality, but in fact, many of our interactions are curated for us. But we never actually have access to how that's decided. And it's done by this black box that we call Twitter, and Twitter decides. Because of that, what we see, what we read, or what we think about is actually being shaped by forces that we have no control over. We're not selecting: “Okay, I'm going to read Mark Twain. He's been banned. Awesome. I'm going to read Mark Twain.” It's somebody telling you: Here's a book you need to read. Here's what you need to do. And that's a very different kind of a relationship to the ubiquity of information about individuals.

 

Ian Carrico (03:24):

So, let's dive into that quickly: when you have these algorithms that really tell you what you need to do, how is that being used right now to push specific narratives and specific disinformation campaigns to each person?

 

Laura Donohue (03:40):

The basic concept here is the “like” function. If you “like” something, then algorithms pick this up and they say, “Oh, well, if you like that, then you'll like this other thing.” (Something that has some sort of similarity, or that comes from an individual with high eigenvector centrality, who's in contact with some of the people you have the strongest relationships with online. You tend to like their stuff. Therefore, you're going to like this super influencer, and you're going to like their stuff too.) And they start putting it higher up in your feed, and you start reading it more. Pete Singer, a terrific scholar, has written a wonderful book called LikeWar, in which he says that the problem is that the emotions that travel most effectively online are anger and hatred.
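
To make the feed-ranking mechanism Donohue describes a bit more concrete, here is a minimal, hypothetical sketch that combines eigenvector centrality on a toy interaction graph with a post's similarity to things a user has already liked. The graph, the similarity scores, and the weights are invented purely for illustration; nothing here reflects any platform's actual algorithm.

```python
# Hypothetical illustration only: a toy feed-ranking sketch, not any platform's real algorithm.

def eigenvector_centrality(adj, iters=100, tol=1e-9):
    """Power iteration on an adjacency dict {node: set_of_neighbors}."""
    nodes = list(adj)
    score = {n: 1.0 for n in nodes}
    for _ in range(iters):
        new = {n: sum(score[m] for m in adj[n]) for n in nodes}
        norm = max(sum(v * v for v in new.values()) ** 0.5, 1e-12)
        new = {n: v / norm for n, v in new.items()}
        if max(abs(new[n] - score[n]) for n in nodes) < tol:
            return new
        score = new
    return score

# Toy "who interacts with whom" graph (undirected, symmetric).
graph = {
    "you":        {"friend_a", "friend_b"},
    "friend_a":   {"you", "influencer"},
    "friend_b":   {"you", "influencer"},
    "influencer": {"friend_a", "friend_b"},
}

centrality = eigenvector_centrality(graph)

# Candidate posts: (author, similarity of the post to things "you" already liked).
candidates = [("influencer", 0.6), ("friend_a", 0.4), ("friend_b", 0.9)]

# Rank a post higher when its author is central in your network AND the content
# resembles what you already liked; the 50/50 weighting is arbitrary for the example.
def feed_score(author, similarity):
    return 0.5 * centrality[author] + 0.5 * similarity

for author, sim in sorted(candidates, key=lambda c: feed_score(*c), reverse=True):
    print(f"{author}: {feed_score(author, sim):.3f}")
```

The point of the sketch is simply that content from well-connected accounts, resembling what you have already engaged with, rises to the top of the feed, which is the dynamic described above.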

 

There have been studies done on this showing that these emotions are what travel the most. People tend to repost an untruth. It turns out that untruths are about 70 percent more likely to be retweeted than truth on Twitter. Part of it is probably our human propensity to extremity: a kind of scandal, or “Oh my goodness!”, or being shocked by something, all of that tends to fly online. And in the process, it annoys everybody online. Every time you see it, you get annoyed, and then you pass it on with some emotional content, and it ultimately masquerades as fact. But actually, it's emotionally mediated news traveling online. This results in individuals becoming increasingly alienated, upset, and angry. It also leads to extreme views.

 

While there are so many problems, one especially important problem with social media is that it's an inherently false world. We don't recognize this, but there's a lot in it that leads to falsity. Social media makes you believe that everybody thinks something, when really, only the people curated by the algorithms for you believe that. If you were to take a poll of everyone, of all 2.7 billion Facebook users worldwide, chances are you wouldn't find the same consistency that you find in your feed. But it creates this false reality that everybody believes that, when in fact it's a socially constructed and augmented world.

 

And also, it's not just news that you're receiving. It's people seeking validation. People tend to maybe present themselves and information in a not entirely accurate manner online in order to get more likes and responses. If you have 40 characters to do this, it's not going to be a nuanced discussion. It's only going to be a crude presentation that ultimately plays to everybody's cognitive biases. This social media world is already built to support and promote a false reality. As views are articulated, this false world drives individuals to more and more extreme views, because they now believe that something that may really be an outlying idea is a mainstream idea, since there are more individuals with that outlying idea in their feed. And so on and so forth. Ultimately, we end up with a drive to extremes online that has a lot to do with the structure of the online environment.

 

Ian Carrico (07:20):

Let's talk a little bit about the structure then: what could possibly be done to the structure as it currently stands to change this dynamic, so that hatred, disinformation, and false information are not spread so easily?

 

Laura Donohue (07:41):

Well, that's the $10,000 question. You can think about it like a continuum. What if you have just blatantly false information? For example, all Democrats are lizards. Or that wearing masks makes you infertile: “You know you shouldn't wear a mask because then you won't be able to have babies someday.” We've seen all of these messages around coronavirus. It's just blatantly false information that's going up online. Where do you start with handling this? You have a few choices as a social media company. One: you can do nothing and leave it up there. Or two: you can make a note of the number of people who ask you to take it down because it's false.

 

Laura Donohue (08:30):

You can say “so many people ask this to be taken down because of veracity.” You can flag it by saying it actually is false. This is problematic. Or you can de-platform somebody and say, “you put so much false information up, we have booted you off of our platform.” Or you can send it to some sort of adjudication, like an oversight board. Let's call it, I don't know, Facebook Oversight Board. And let's have this ostensibly impartial third party—which is actually people we've handpicked, but okay—let's have this impartial third party decide. And we're not doing anything—we're just watching. And then at the far extreme, you have laws that come in and say, “you can't put up anything false.” Well, then you are immediately in the First Amendment world.

 

Ian Carrico (09:22):

And it would be one thing if all of this false information, all of the “masks make you infertile,” “Democrats are lizard people,” and so on, were to remain only online in social media. But we see very quickly that this affects people's behavior in the real world. You have people who are afraid of wearing masks. You have people who are afraid of the opposition party, or whatever political piece is involved here. The largest example, of course, being what happened on January 6th of this year, where you had people who believe that the election was stolen come to protest that. Do you want to talk a little bit about how we have recently seen this spill more and more into the real world, and not just the virtual internet world?

 

Laura Donohue (10:11):

One of the issues, at least from a First Amendment perspective with January 6th in mind, is the Brandenburg standard. We used to have a clear and present danger standard, which we got rid of because it was a bad standard. And so in Brandenburg v. Ohio, you have a mid-twentieth-century case establishing that there has to be a risk of imminent lawless violence. If you have a mob assembled and you say, “go attack the Capitol, and let's take this democracy back,” that's not protected speech, because you're trying to incite imminent lawless violence. But what if you're just posting, though publicly, and you're expressing your opinion: “we should take back this election. This election was stolen from us. We should try to take that election back.”

 

Laura Donohue (11:03):

That's a general view that you're posting online as a general statement. That's a little bit different. Now, what if your followers happen to be members of a group that is currently gathered with weapons down around the Capitol complex? What if that's the group? They're not your only followers. In fact, they're not even a significant fraction of your followers. They're just some of your followers. Some of your followers have nothing to do with that group that's gathered. The law was not developed with something like social media in mind. What happens if you put something up on social media and three months later somebody reads it and then engages in violence? Are you responsible for it? At the time you posted it, it wasn't imminent.

 

Laura Donohue (11:56):

Are you responsible for everybody who happens to read something you write and then decides to take up arms? And then what do you do about situations where an environment is created by constant rhetoric, espousing opinions that play into social divisions and social bifurcations, that create two disparate groups and play on those divisions, and that prime the pump for that engagement? There, we don't have Supreme Court precedent reaching it; that's protected speech. If we get out of the disinformation category and into the incitement category, there has to be imminent lawless violence, and it has to be likely to produce illegality, for it to fall within the Brandenburg standard. “I think we should all attack Mars right now”: there's no real threat there.

 

Ian Carrico (12:57):

I think we should attack Mars. It seems we're in this gray, new-world area where the previous Brandenburg test doesn't necessarily capture what people actually do on social media that might lead to imminent lawless action in some way, shape, or form. Is there something that should be rethought about the First Amendment and the case law there, or is this an area where we need to work more with private organizations that see de-platforming as a better solution in a corporate sense?

 

Laura Donohue (13:34):

Interestingly enough, it's in libel law that we're seeing courts wrestle with the treatment of social media in many ways. In Alvarez, the Stolen Valor case, there was legislation that basically said it was illegal to hold yourself out as having received a military honor if you hadn't, and the Supreme Court said that that section was unconstitutional for First Amendment purposes. Now, there was a dissent in that case, from Justice Alito joined by Justices Scalia and Thomas, who said, look, if you say something that is knowingly false, then that shouldn't be constitutionally protected. Now, the Supreme Court, both the majority and the dissent in that case, subscribed to the idea that there is no such thing as a false statement of fact with regard to history, or with regard to art, or with regard to the social sciences. If you say, “40 years ago President Whoever did this,” you can't be attacked for that.

 

Laura Donohue (14:42):

You can't make that kind of speech reachable by the government, because that's an interpretation and understanding of history. But if you're talking about personal experience, the dissent in that case said, look: knowingly false speech. There is a case just this week, for instance, in Hawaii: the creepy doula case, where the doula is a man who was taking compromising photos of women in labor, and people went on Twitter to call him a creepy doula and say that “something's wrong here.” The question in that case is how much latitude you have on Twitter. There was also a case decided in California in December of 2019, Unsworth v. Elon Musk. This is the case involving, do you remember, the children's soccer team that was trapped in the cave?

 

Laura Donohue (15:40):

One of the rescuers was a British caver who rescues people from caves. Apparently, Elon Musk had offered the rescuers the opportunity to use a mini-sub that he was going to build to go into the cave to rescue the children. Unsworth, the British rescuer, afterwards got on CNN and said, “Elon Musk turned up with his mini sub after we already had eight of the children out, eight of the 12; he was totally useless,” and made some pretty disparaging remarks. Elon Musk then started tweeting that this caver, the guy who worked on the rescue of the children in the cave, was never there. That he's a pedophile. That he married a twelve-year-old. He started making all these allegations against him. The question is: are those protected statements on Twitter or not?

 

Laura Donohue (16:33):

How far do we go in protecting speech on a social media platform? And in that case, they said, that's protected speech. It was heated rhetoric because the two of them had locked horns and they were engaging in that kind of discussion. 

 

Now in New York, there's another case; it was a pump-and-dump case. There was an investigative journalist who had published an article for Harper's Magazine, and then he tweeted online about an individual and accused him of illegality. He had done this investigative journalism piece and then tweeted afterwards about it. And in the Southern District of New York, the court said, well, no, that's not protected speech, because you're an investigative journalist, you had all of this background, you had this report already done.

 

Laura Donohue (17:22):

One could assume that you meant it as personal knowledge, and that you would have known whether it was true or false based on your report; therefore people would have believed it; therefore it comes within libel. The problem is that these cases are all over the map. There's not a body of law emerging, because so many of them are fact-dependent. But what you're seeing is a real questioning of how to think about and handle false information in an environment that is all about emotional content. How do you litigate what's true or false, or how do you prohibit based on whether something is true or false? This is something that the courts have always wanted to stay out of, because it's so hard to know, and really you don't want to put your legal system in the middle. And that's why Section 230 of the Communications Decency Act kind of kicks that question off, because then it doesn't involve the government in this position of actually interfering in the “truthiness,” so to speak (that's an awful word), but in the truthfulness of any statement that's put online.

 

Ian Carrico (18:25):

We started this conversation talking about how in 1776 the government and private individuals could kind of see each other and know what was going on. We have this new world, but is all of this, the disinformation, the saying of libelous things and publicizing them… Surely this isn't entirely new. These are issues that we've seen again and again throughout our history, whether it be with Russia or early on with the various publications that I'm sure our founding fathers put out against each other. What makes today so much different?

 

Laura Donohue (19:07):

Yes. That's actually a really good question, and there's a lot of disagreement about this. Is this actually new or not? Should we be concerned about this or not? If you think about it: Octavian put images on coins, little slogans about Antony that made him out to be this alcoholic womanizer, corrupted by his lust for Cleopatra, her puppet, and he wins! He becomes the first Roman emperor. He becomes Augustus! He wins on misinformation, on blatantly false information. That's not new. Technology as an enabler, that's not new either. Think about the printing press, what was possible before and after: 50 monks sitting in an abbey are not going to be nearly as effective as Luther going to a printing press, printing out his theses, and distributing them throughout Europe.

 

Laura Donohue (20:10):

You can overthrow a church by doing that. You can completely change society. That's a very different kind of world. Or the Great Moon Hoax. This one was great. John Locke, the famous philosopher, one of his descendants actually came up with this moon hoax. He couldn't get a job; he was British. He ended up working in New York for one of the New York “penny papers.” This is Richard Adams Locke. Richard Adams Locke decides he's going to publish this story about how Sir John Herschel, the famous scientist, has discovered life on the moon. And he actually publishes pictures from Herschel's telescope showing aliens on the moon. It was incredibly successful.

 

Laura Donohue (20:49):

Everybody in New York believes it, including the New York Times; they write that it's highly probable, because he's citing Herschel and everybody knows Herschel. John Herschel is this famous scientist at the time. Everybody falls for it. It's also very typical of wartime. We've seen so much disinformation in wartime: the Nazi propaganda, all of these campaigns. The USSR had dezinformatsiya, their disinformation campaigns, where they engaged in these measures. They had Operation Infektion, where they tried to convince people that the AIDS epidemic was actually a biological weapons program in the United States. They used a disinformation campaign to heighten racial animosity. This might sound familiar. They actually played into the civil rights movement and tried to exacerbate tensions for individuals. They designed these active measures campaigns, and their goal was to look for existing divisions in society and truthful news and weave those together, so that forged documents and false information came together with truth in order to have this effect.

 

Laura Donohue (21:54):

The idea was to basically get their target audience to change their beliefs or views or minds and steadily become more and more isolated from others in society. To bifurcate society—that was the goal. That's what they were trying to do. And so, when we look at this, this isn't new. What is new is that it's faster and easier to accomplish. It can be applied at scale. You can do it not just to one person, but across an entire society. You can take Facebook and you can micro-target. This is what Cambridge Analytica showed us: what we learned from that is that you can actually identify different psychological profiles and use those profiles to target individuals in different ways to respond to information.

 

Laura Donohue (22:49):

For individuals who are most fearful about their children's security, you play into that fear with stories about children. For individuals who are neurotic, you play into their neuroticism. You can also hide and mask attribution. And it appears to be much more effective than it has been before. And you don't have to leave home; you can do this from the other side of the globe. And that seems to be new.

 

Ian Carrico (23:21):

What does this all mean, then, from a national security perspective, which is one of your focuses? If a disinformation campaign can be run by either a lab in Russia or a 16-year-old in Texas, how should the government respond to these active issues without stomping on the First Amendment at the same time?

 

Laura Donohue (23:49):

The national security issues presented here are really kind of interesting and unique in some ways. The first point is that these companies are privately owned. They're not actually controlled by the government, and that's something different. Traditionally, national security is when a government attacks another government, or a non-state actor attacks a government. Here, the government is not in this picture. These are privately owned companies; the government sometimes just has straight-out contracts with them. They are picking up metadata, using that information, and advertising on these networks, and the government isn't engaged in that. And what that means is, at one level, their customer base might not even be American. Facebook has 2.7 billion users: only 190 million of those are in the United States. That's about 7%. Think about that: about 7% of Facebook users.

 

Laura Donohue (24:44):

And most of them, by the way, are probably over age 50. That's not a stat; that's an assumption. But our children say: that's what my parents do, that's what my grandparents do. Kids are not on Facebook. Kids are doing Instagram and Snapchat and TikTok and all these other ones. And TikTok is a great example: only 10% of TikTok users are Americans. What that means is that these companies' interests don't align with US national security. Their interests align with their customer base and with their bottom line. And what works? Sensation. Sensationalist information. At some level there's very little motivation here: not only is the government not involved, but there's very little motivation for these companies to play ball.

 

Laura Donohue (25:29):

Why would they? If they lose 7% of their market but still have 93% of it globally, how much is that really going to hurt Facebook? And what about companies that aren't even US-owned, these international companies? They're also incredibly powerful. Facebook's net worth is $527 billion. That's kind of a moment to pause. It means that if we were to look at every country in the world and its GDP, Facebook would be number 26 in the world in terms of size. That's ahead of countries like Argentina, Iran, the UAE, Ireland, Hong Kong, Singapore... all of them would rank lower, comparing their GDP to Facebook's net worth. That's powerful. And when you can de-platform the President of the United States, that is power.

 

Laura Donohue (26:23):

This is a huge issue. My colleague, Professor Julie Cohen, has written and thought and worked on this concept of sovereignty, whether this is a new kind of sovereignty that we're seeing, which is just a really fascinating idea. And certainly now Facebook has its own oversight board. They have a litigation system. They have all this money; they can control who speaks and who doesn't speak. That's a very different circumstance in terms of the national security risks. You have this privately owned actor with global reach and different interests that is super rich and very, very powerful. And they're being driven by something different: the bottom line. That's the first thing I would say. I would say there are really three things here. The second thing, and we talked about this earlier, is this kind of false reality.

 

Laura Donohue (27:16):

There are these structures of social media that promote this false world, an augmented reality that blends with our real world. And we haven't talked yet about what's coming down the pike, but think about the actual move to augmented reality and virtual reality, and the use of biometric readings and facial recognition. Not facial recognition in a biometric sense, but in the sense of being able to read emotion off of individuals and then tag and analyze those emotions. There are patents; I've been doing this research project as part of the tech incubator that we're launching, and with a research assistant, a terrific RA who has been just wonderful, we've been looking at the patents and the patent applications that are coming out: the idea of being able to read emotion, or to record experience and live another's experience, and the use of all sorts of disability-focused innovations transferred over to shared experience in the social media world.

 

Laura Donohue (28:20):

Think about the idea of attaching social media to the Internet of Things, so that suddenly your friend can stop by your house and have a bite to eat. You just unlock the door for them. They can turn on the fire, kick their feet up. They can be there when you're not there, maybe eventually be there virtually when you're not there. Or you can gather in a room in that way. It's a whole different world that's coming down the pike. And when we think about this, this is a second reality. Right now it's a false reality vis-a-vis the 3D world, but that's going to become blended into one reality. And that's a very different national security threat than we've seen before. And then the last one is that there is a lot about social media that makes it vulnerable to bad actors.

 

Laura Donohue (29:07):

Brian Jenkins, in 1974, is kind of famous in the terrorism literature for saying that “terrorism is theater.” Terrorists don't want a lot of people dead; they want a lot of people watching. And you have this world where one person can go viral overnight, sometimes unwittingly. Like the guy whose mom put something up about him during the #MeToo movement, “My son doesn't go on dates” or something. He's like, “Mom, I do. I like to go on dates. I like people. I support the #MeToo movement.” He went viral, poor guy. He woke up and his feed was blowing up. He had to go offline for a while. The point is that, sometimes unwittingly, sometimes wittingly, you can suddenly have this influence and this platform.

 

Laura Donohue (29:55):

What happens is you see things like the New Zealand mosque shooting. In New Zealand, the shooter live-streamed it on Facebook, announced it on Reddit, tweeted about it, and everybody could see what was going on. This one person on an island out in the Pacific who goes in and does this thing suddenly becomes global, and everybody knows instantaneously what is happening. That's very different, because it allows bad actors to undermine the government, to manipulate society, to influence electorates in the middle of elections, to undermine the financial markets.

 

There's been very little attention paid to the financial markets and how this plays out. That's exactly what countries have done in the past, but just not with nearly the same types of tools that can be used in this way. And that's a very different situation than I think we've had before. It creates a somewhat unique national security context, and we need to think long and hard about the best ways to respond, especially in light of our constitutional restrictions.

 

Ian Carrico (31:12):

It seems that, going back to what you said earlier, what's different now is the speed and the scale, and that we're accelerating toward everything except a resolution. We have a bunch of open questions, but nothing truly satisfying right now that says, here's what we can look to, or here's where we go next. Do you see anything in the future that may lead to better answers? Or is this something where we just need more people to be focused on it, to have these conversations, and to really question how we should be looking at reality and at social media in the future?

 

Laura Donohue (31:50):

Yeah, that's a great question. I mean, both, right? Yes, we have to focus on those, and in fact, we are. This is happening at the National Security Center: we're launching this tech incubator, and for our first project we got a generous grant from the school. We have a public interest network of universities that's funding this, as well as some alums who've been very, very generous. We're getting together a really different group of people. For years, I've taught the National Security Law Simulation at Georgetown, and I've become increasingly concerned about the role of private and non-state actors. And we've gotten the simulation to a level where we've got a pretty good idea of how to run it in terms of the National Security Council in the US, as well as the Five Eyes countries.

 

Laura Donohue (32:47):

Now I'm really turning, in my work and at the center, to look at this role of private actors. It's venture capitalists in Silicon Valley, and influencers, and social media companies, and social groups who are somehow marginalized in society. These are the people on the front line now. It's going to be people who focus on media studies and who think about the ethical issues. We're working with the Ethics Lab as well. We're working with MIT's Gaming Lab to actually game out and look at where social media is and where it's heading. And I think it's going to take a lot of difficult, cross-disciplinary, international, and focused discussions and hard work to think about all the risks that are presented. As we start going through how to address these risks, both the intended and unintended consequences, it may be that the traditional approach, which is regulation and legislation, is actually not the right way to go.

 

Laura Donohue (33:55):

Maybe it turns out we want third-party data brokers. Maybe it turns out education is actually more important. Maybe we need to focus more on the transparency of the algorithms that are being used. Who knows what the answers are, but it's pretty clear that our traditional tools and ways of thinking about national security, even our institutional arrangements: it's not clear, for instance, is this the FTC, is this the FCC, is this the FEC? If we're talking about elections, is this the Federal Election Commission? Who should be looking at this and thinking about this? We have institutions that were built with old technologies assumed, and now we're confronting threats that cut across these institutions and present new and unique threats to the United States. And so I think it needs a lot more concerted discussion, study, and research, and certainly writing and moving the ball forward. That's what we're focused on at the center.

 

Ian Carrico (34:53):

I think that's a perfect place to end. Thank you so much, Professor, for coming and talking with us today, and good luck with everything that the Center is doing. We look forward to seeing what comes out of it.

 

Laura Donohue (35:04):

Thank you so much for having me. This was a real delight. Thanks for inviting me.

 

Ian Carrico (35:18):

Thank you all for joining us on our first episode of Between Two Codes. Don't forget to subscribe and rate us to be notified of our next episodes. Find us on Twitter and Instagram. You can also follow Professor Donohue's work at the Center on National Security and the Law's website at Georgetown Law. Thank you, and we'll see you again in a few weeks.

 

Credits:

Producer: Ian Carrico
Audio Engineer: Luke Evans
Researcher: Abby Johnson
Transcription: Liza Clark & Panya Gupta
Social Media: Ada Locke