Some Information about Misinformation: A Conversation with Samuel Woolley


Information literacy is an essential ingredient in a healthy democracy. Samuel Woolley will arrive on Town Hall’s stage on January 9 to discuss his new book, The Reality Game, which shows how the breakneck rate of technological change is making information literacy nearly impossible. Woolley argues for a new culture of invention, one built around accountability and, especially, transparency.

He recently sat down with Town Hall’s Jonathan Shipley to discuss bots, bias, and Facebook.

JS: What initially got you interested in misinformation?

SW: I first got interested in digital misinformation during the Arab Spring and Occupy Wall Street, offline protests that made serious use of social media tools like Twitter, Facebook, and YouTube to organize and communicate. During these protests I noticed that the internet wasn’t being used only to advance democratic ideals or to aid the people fighting back against authoritarian regimes. In fact, it looked like the regimes and their supporters were using these same social media platforms in attempts to artificially amplify their own talking points. They were building armies of fake accounts, known as sock puppets and, when automated, political bots, to spin things massively in their favor. A small group of researchers, including my collaborator Philip Howard and me, quickly discovered that these coordinated campaigns, which we began calling “computational propaganda,” were also being used to attack and defame opposition leaders. Bot armies were simultaneously co-opting the hashtags the activists were using to coordinate and filling them with misinfo, spam, and noise, making the platforms less viable as tools for communication. After running early analyses of these circumstances, we widened our net to ask whether similar tactics were being used around the globe. The punchline is well known to most people now, but suffice it to say that we found similar tactics at work in almost every online conversation we examined surrounding elections and other important events, from Australia to Venezuela.

JS: What can we, John and Jane Q. Public, do to combat it? How can we tell fact from fiction these days?

SW: I think that the public should have hope for several reasons, and also that there are several things we can do to combat misinfo, disinfo, and computational propaganda. First, it’s important to note that the very fact that we are having serious public discussions about the problems associated with misinfo and “fake news” (though I prefer not to use this particular term) is a win for truth. Those who work to spread fiction, for political purposes or otherwise, have a much harder time spreading junk news and other informational garbage when people are savvy to the problem—as some have said, “sunlight is the best disinfectant” when it comes to such issues. When my teammates and I first started studying and reporting on these problems in 2013, it was very difficult to get anyone, let alone tech firms, to pay attention. Now, stories about misinformation are everywhere you look. 

Social media firms are also responding, some more effectively than others. 

There are also tools people can use to track social media bot accounts and false narratives. Botometer lets people plug in Twitter account handles and, using numerous parameters, learn whether a suspicious account is actually automated. BotCheck.me, from RoBhat Labs in Berkeley, has similar uses. The team at RoBhat also has tools like NewsBotAI, which assesses bias in news articles; Surfsafe.me, which assesses author credibility; and FactCheck.me, which works to cluster automated behavior and improve response times to misinfo attacks. On top of this, teams at the Center for Media Engagement at UT Austin, First Draft, Data & Society, the SMaPP Lab at NYU, the Digital Intelligence Lab at IFTF, the German Marshall Fund, the Digital Forensic Research Lab at the Atlantic Council, and others are constantly releasing top-notch research and deploying exciting new tools to combat misinformation and bolster solid reporting.
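Botometer also exposes its scoring through a public API with an official Python client, so this kind of check can be scripted rather than run one handle at a time. Here is a minimal sketch, assuming you have your own Twitter app credentials and a RapidAPI key; the handle and keys below are placeholders, and the client interface may differ across versions:

```python
# A minimal sketch of querying Botometer through its official Python client
# (pip install botometer). Everything here is a placeholder: you would need
# your own Twitter app credentials and a RapidAPI key, and the exact client
# interface may have changed since this interview was published.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

# Score a single, hypothetical account; the response includes a "cap"
# (complete automation probability) field, where higher values suggest
# the account is more likely automated.
result = bom.check_account("@example_handle")
print(result["cap"])
```

Researchers typically run checks like this over whole lists of accounts caught amplifying a hashtag, rather than investigating a single suspicious handle.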

JS: Are our social media channels too far gone? Twitter, I know, recently banned political ads. Will that prove at all effective? Why/why not?

SW: The largest social media companies (Facebook, YouTube, Instagram, Twitter, etc.) face the serious challenge of catching up to a problem that has existed on their platforms for a decade. For the longest time, they did little more than delete the automated accounts they chanced upon. They did not, by their own admission, do nearly enough about the issue of political manipulation on their platforms. They set their sights on growing as fast as possible without thought to the informational repercussions that came with this massive scaling. It’s hard for me to see this unfettered growth, alongside the disregard for how they were damaging democratic communication, as anything other than greed and negligence. Finally, though, the executives at these companies are beginning to take responsibility, and they are deploying serious resources toward fighting back against misinformation and other forms of manipulation online.

Really, though, it is the researchers and engineers at these companies who I have the most faith in. They are the ones with the serious know-how, and they’ve shown they want to do something. It was these employees that spoke back against Zuckerberg’s recent move to allow to allow politicians to spread disinformation in Facebook ads in the name of “free speech.” It was them who fought back against (and eventually sunk) predatory payday loan ads and Google’s Project Maven AI drone project with the pentagon. Recent moves by Twitter to ban all political ads, or by Facebook’s near opposite move to allow some forms of political disinformation in ads, feel a little too cut and dry for my taste. These companies are being heavy-handed, likely for the sake of optics and marketing, rather than taking a nuanced approach to the problems. I mean, what exactly constitutes a “political” ad? And how can they allow the most influential among us to spread totally fake narratives? It seems like the companies are trying, and even trying hard, but that they’ve still got a lot of work to do. This makes me wonder, what new social media platforms will arise? How will new channels be built, from day one, in efforts to prevent the flow of misinfo?

JS: Facebook—friend or foe?

SW: Facebook, as I’ve mentioned, has a lot of problems and shoulders a serious share of the blame for the current fractured state of the global information ecosystem. Zuckerberg, Sandberg, and other executives at Facebook have become well known among the tech crowd, from external researchers like me to current and former Facebook employees, for tightly controlling their firm. They would do well to open up the process of fixing the problems they’ve helped to create by allowing more democratic input from their own employees and outside experts. They should listen, and listen well, to their research team, which is full of capable, well-trained social and computer scientists.

Google also bears a huge share of the blame, but has gotten much less attention, mostly by remaining mum and toeing the bogus line that they are “just a search company.” As if the world’s largest search firm hasn’t had a hand in allowing the information we see and consume on its engine to be manipulated and disinformative at several junctures throughout its brief history. They also own YouTube, which researchers like Alice Marwick and Becca Lewis have shown to be rife with white-supremacist, racist, and other seriously problematic content. Google needs to step up in a very big way. Twitter, because of its smaller size, is more of a bit player in this drama, though it gets a lot of attention because journalists and policy wonks hang out on the platform. In recent months, Twitter has arguably been doing more than its larger rivals to fight back, with its political ads ban and other moves. What we chiefly need, though, is more collaboration between the firms. Right now they aren’t taking these issues on as a team. They are still trying to hide their cards from the companies they see as their opponents in the market when they should really be focusing on their opponents in the fight for the truth.

JS: Fake news stories. Twitter bots. Deepfake videos. What’s next on the misinformation front?

SW: I think the next frontiers for misinformation lie in innovations in artificial intelligence, machine learning, and extended reality media. I’m also concerned about the realization of individualized political ad targeting and things like politically motivated geofencing. To date, the vast majority of social media bots we’ve seen have been clunky and brutish, usually just massively amplifying likes or re-posts on behalf of one political idea or person, or in opposition to another.

They’re the cheapest tool that has gotten the job done for those hoping to manipulate public opinion. With social media firms stepping up their responses to misinformation, and with innovations and price drops in AI tools, it’s likely we will begin seeing more convincingly human AI accounts. Whether these accounts will actually be able to convince people, rather than polarize and disgust them the way their clunky automated brethren have, remains to be seen. But we should be planning for AI to be deployed for manipulative information operations. Also worth thinking about: will AR and VR tools be used to spread propaganda? If so, how? In my new book I list examples of ways this is already happening in places like China and beyond. We’ve got to get ahead of such uses of our emergent technology before they get out of hand.

JS: Is technology moving at too fast a rate for us to keep up with it in regard to misinformation?

SW: Yes and no. Yes, technology is growing too fast, and we could really benefit from a “slow” technology movement like the one discussed by Janell Burley Hofmann and others. We need a new direction in tech that focuses on thoughtful, ethically made tools that are built with human rights in mind rather than growth and profit. But no, too, because I’m a firm believer that politics, scandal, and points of concern move like a pendulum. History shows us we tend to swing from one extreme to another, politically, culturally, economically, socially. We are lucky when we exist in times of relative balance. The way technology has allowed disinformation to scale through automation, and the way features like anonymity hamper our ability to catch the “bad guys”: these things are scary, but they aren’t insurmountable. Technology is not a runaway train. We aren’t dealing with HAL or Skynet here; we still have control, and there are still many, many things we can do. We can, for instance, build tools with the best features of humanity in mind. We can design for benevolence, equity, and fairness.

JS: What do you suggest the government (local/state/federal) do to stem this tide?

SW: Generate sensible policy! I say “sensible” because many of the attempts I’ve seen, from Europe to Brazil to the US, lack technological viability and tend toward heavy-handedness. We need governments and policymakers to consult very closely with public-interest technologists and social scientists who study technology so that they create laws and regulations that actually combat, rather than complicate, the problems at hand. I’m proud of politicians and political entities, like Mark Warner and the City of Seattle, that have worked to actually combat misinformation online. My other caution, though, is that we need systematic regulation of this problem. Fragmented laws, for instance amalgamations of divergent regulation at the local, state, and federal levels, could hurt us in getting things done a lot more than they could help.


Learn more when Samuel Woolley talks misinformation on 1/9. Tickets are on sale now.
