Can Geeks Defeat Lies? Thoughts on a Fresh New Approach to Dealing With Online Errors, Misrepresentations, and Quackery

This afternoon, I’ll be at MIT for this conference, sponsored by the Berkman Center for Internet and Society at Harvard and the MIT Center for Civic Media and entitled “Truthiness in Digital Media: A symposium that seeks to address propaganda and misinformation in the new media ecosystem.” Yesterday was the scholarly and intellectual part of the conference, where a variety of presenters (including yours truly) discussed the problem of online misinformation on topics ranging from climate change to healthcare—and learned about some whizzbang potential solutions that some tech folks have already come up with. And now today is the “hack day” where, as MIT’s Ethan Zuckerman put it, the programmers and designers will try to “tackle tractable problems with small experiments.”

In his talk yesterday, Zuckerman offered a helpful—if, frankly, somewhat jarring—analogy for thinking about political and scientific misinformation, one that has been used before in this context: You can think of the dissemination of misinformation as akin to someone being shot. Once the bullet has been fired and the victim hit, you can try to run to the rescue and stanch the bleeding—by correcting the “facts,” usually several days later. But psychology tells us that approach has limited use—and to continue the analogy, it might be a lot better to try to secure a flak jacket for future victims.

Or, better still, stop people from shooting. (I’m paraphrasing Zuckerman here; I did not take exact notes.)

From an MIT engineer’s perspective, Zuckerman noted, the key question is: Where is the “tractable problem” in this, uh, shootout, and what kind of “small experiments” might help us to address it? Do we reach the victim sooner? Is a flak jacket feasible? And so on.

The experimenters have already begun attacking this design problem: I was fascinated yesterday by a number of canny widgets and technologies that folks have come up with to try to defeat all manner of truthiness.

I must admit, though, that I’m still not sure their approaches can ultimately “scale” to the kind of mega-conundrum we’re dealing with—a problem that ultimately may or may not be tractable. Still, my hat is off to these folks, and the enthusiasm I detected yesterday was impressive.

Some examples:

* Gilad Lotan, VP of R&D for Social Flow, has crunched the data on falsehoods and, um, truthoods that trend on Twitter. He’s studied which lies persist, which die quickly, which never catch fire—and why. To stop falsehoods in their tracks, he advocates a “hybrid” approach to monitoring Twitter lies—combining the efforts of man and machine. “We can use algorithmic methods to quickly identify and track emerging events,” he writes. “Model specific keywords that tend to show up around breaking news events (think ‘bomb’, ‘death’) and identify deviations from the norm. At the same time, it’s important to have humans constantly verifying information sources, part based on intuition, and part by activating their networks.”

* Computer scientist Panagiotis Metaxas, of Wellesley College, has figured out a way to detect “Twitter bombs.” For instance, during the 2010 Senate race in Massachusetts between Scott Brown and Martha Coakley, Metaxas and his colleague Eni Mustafaraj found that a conservative group had “apparently set up nine accounts that sent 929 tweets over the course of about two hours…. Those messages would have reached about 60,000 people.” Alas, the Twitter bomb was only detected after the election, once Metaxas and Mustafaraj crunched the data on 185,000 tweets.

* Tim Hwang, of the Pacific Social Architecting Corporation, introduced us to bot-ology: how people are creating programs that manipulate Twitter and even try to infiltrate social networks and movements. Hwang talked about, essentially, designing countermeasures: bots that can “out” other bots—and even serve virtuous purposes. “There’s a lot of potential for a lot of evil here,” he told The Atlantic. “But there’s also a lot of potential for a lot of good.”

* Paul Resnick, of the University of Michigan, discussed the beta-mode tool Fact Spreaders, an app that automatically finds tweets that contain falsehoods and connects users to the relevant fact-check rebuttals—so they can rapidly tweet them at the misinformers (and misinformed). It seems to me that if something like this catches on widely, it could be powerful indeed.
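To make the detection ideas above concrete, here is a minimal Python sketch of the two heuristics the researchers describe: Lotan’s “identify deviations from the norm” for breaking-news keywords, and the Metaxas/Mustafaraj “Twitter bomb” signature (a flood of tweets from a handful of accounts). The function names, thresholds, and data format are my own illustrative assumptions, not the researchers’ actual code.

```python
from collections import Counter
from statistics import mean, stdev

def keyword_spike(history, current_count, threshold=3.0):
    """Flag a keyword whose count in the current time window deviates
    from its historical per-window mean by more than `threshold`
    standard deviations (the 'deviation from the norm' idea)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current_count > mu
    return (current_count - mu) / sigma > threshold

def looks_like_twitter_bomb(tweets, max_accounts=10, min_tweets=500):
    """Crude 'Twitter bomb' signature: a large volume of tweets in one
    window coming from a very small set of accounts."""
    accounts = Counter(t["user"] for t in tweets)
    return len(accounts) <= max_accounts and sum(accounts.values()) >= min_tweets

# Shape of the 2010 case: nine accounts sending 929 tweets in one burst
burst = [{"user": f"acct{i % 9}"} for i in range(929)]
print(looks_like_twitter_bomb(burst))  # → True
```

Of course, real systems would need per-keyword baselines, sliding windows, and the human verification layer Lotan insists on; the point is only that both signals are cheap to compute in real time.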

This is just a tiny sampling of the truth gadgets that people are coming up with. Ethan Siegel, a science blogger who was not at the conference (but should have been), is now working for Trap!t, an aggregator that is “trained” to find reliable news and content, and screen out bad information.

So…okay. I am very impressed by all of this wizardry, and am glad to share news of these efforts here. But now let’s ask the key question: Can it scale? Can it really make a difference?

Look: If Google were to suddenly do something about factually misleading sites showing up when you, say, search for “morning after pill” (see the fourth hit), there’s no doubt it would make a big difference. But as Eszter Hargittai of Northwestern put it in her talk yesterday (which highlighted the “morning after pill” example), Google doesn’t seem to be taking on this role. And none of us have anything remotely like the sway of Google.

Short of that, what can these kinds of efforts accomplish?

I heard a lot of impressive stuff yesterday. But what I didn’t hear—not yet, anyway—was an idea that seems capable of getting past the vast and potentially “intractable” problem of information-stream fragmentation along ideological lines. The problem, I think, is captured powerfully in this image from a recent New America Foundation report on “The Fact-Checking Universe”; the image itself was originally created by a firm called Morningside Analytics.

What the image shows is an “attentive cluster” analysis of blogs that are interested in the topic of fact-checking—i.e., reality. Blogs that link to similar sites are grouped together in bubbles—or closer to each other—and the whole group of bubbles is organized on a left-to-right political dimension.

The image shows that although both profess to care about “facts,” progressive and conservative blogs tend to link to radically different things—i.e., to construct different realities. And that’s not all. “A striking feature of the map,” write the New America folks, “is that the mainstream progressive cluster is woven into [a] wider interest structure [of blogs that are interested in economics, law, taxes, policy, and so on], while political discourse on the right is both denser and more isolated.” In other words, conservatives interested in fact-checking are linking to their own “truths,” their own alternative “fact-checking” sites like NewsBusters.org.
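The “link to similar sites, end up in the same bubble” idea behind such maps can be sketched very crudely: compute the overlap between blogs’ outbound-link sets and group blogs whose overlap crosses a threshold. This is my own toy simplification (Jaccard similarity plus greedy single-link grouping); Morningside Analytics’ actual methodology is not described in the report excerpted here.

```python
def jaccard(a, b):
    """Overlap of two blogs' outbound-link sets, from 0.0 to 1.0."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_links(blogs, threshold=0.5):
    """Greedy single-link clustering: a blog joins the first existing
    cluster containing any member whose outlink overlap with it meets
    the threshold; otherwise it starts a new 'bubble' of its own."""
    clusters = []
    for name, links in blogs.items():
        for cluster in clusters:
            if any(jaccard(links, blogs[m]) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical example: two blogs sharing fact-check outlinks cluster
# together; a blog linking elsewhere lands in its own bubble.
blogs = {
    "blogA": {"factcheck.org", "politifact.com"},
    "blogB": {"factcheck.org", "politifact.com", "snopes.com"},
    "blogC": {"newsbusters.org"},
}
print(cluster_by_links(blogs))  # → [['blogA', 'blogB'], ['blogC']]
```

Even this toy version reproduces the report’s qualitative point: clusters emerge purely from who links to whom, with no political labels supplied in advance.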

What I haven’t yet heard are ideas that seem capable of breaking into hermetically sealed misinformation environments, where an endless cycle of falsehoods churns and churns—where global warming is still a hoax, and President Obama is still a Muslim, born in Kenya, and the health care bill still creates “death panels.”

Nor, for that matter, have I yet heard of a tech innovation that seems fully attuned to the psychological research that I discussed yesterday, along with Brendan Nyhan of Dartmouth. For a primer, see here for my Mother Jones piece on motivated reasoning, and here for my Salon.com piece on the “smart idiot” effect—both are previews of my new book The Republican Brain. And see Nyhan’s research, which I report on in some detail in the book.

What all of this research shows—very dismayingly—is that many people do not really want the truth. They sometimes double down on wrong beliefs after being corrected, and become more wrong and harder to sway as they become more knowledgeable about a subject, or more highly educated.

Facts alone—or the rapid-fire tweeting of fact-checks—will not suffice to change minds like these. Ultimately, the psychology research says that you move people not so much through factual rebuttals as through emotional appeals that resonate with their core values. These, in turn, shape how people receive facts—how they weave them into a narrative that imparts a sense of identity, belonging, and security.

Stephen Colbert himself, when he coined the word “truthiness,” seemed to understand this, talking about the emotional appeal of falsehoods:

Truthiness is ‘What I say is right, and [nothing] anyone else says could possibly be true.’ It’s not only that I *feel* it to be true, but that *I* feel it to be true. There’s not only an emotional quality, but there’s a selfish quality.

As I said in my talk yesterday, there is now a “Science of Truthiness”—that was very nearly the title of my next book, though Republican Brain is better—and it pretty much confirms exactly what Colbert said.

So unless you get the psychological and emotional piece of the truthiness puzzle right, it seems to me, you’re not really going to be able to change the minds of human beings, no matter how cool your technology.

Therefore—and ignoring for a moment whether I am sticking with “tractable” problems or not—I think these tech forays into combating misinformation are currently falling short in three areas:

1. Speed. This is the one the programmers and designers seem most aware of. You have to be right there in real time correcting falsehoods, before they get loose into the information ecosystem—before the victim is shot. This is extremely difficult to pull off—and while I suspect progress will be made, it will be hard to really keep up with all the misinformation being spewed in real time. We might find that the best that’s possible is a stalemate in the misinformation arms race.

2. Selective Exposure. You’ve got to find ways to break into networks where you aren’t really wanted—like the alternative “fact” universe that conservatives have created for themselves. This is going to mean appealing to the values of a conservative—perhaps even talking like one. But… that sounds very bot-like, does it not? Unless, that is, moderate conservative and moderate religious messengers can be mobilized to make inroads into this community—again, operating at rapid-fire speed.

3. We Can’t Handle the Truth. Most important, human nature itself stands in the way of these efforts. I’m still waiting for the killer app that really seems to reflect a deep understanding of how we human beings are, er, wired. We cling to beliefs, and if our core beliefs are refuted, we don’t just give them up—we double down. We come up with new reasons for why they are true.

Please understand: I have no intention of raining on this parade. I’m actually feeling more optimism than I’ve felt in a long time. It’s infectious and inspiring to see brilliant people trying to take on and address discrete chunks of the misinformation problem—a problem that has consumed me for over a decade—and to do so by bringing new ideas to bear. To do so scientifically.

Still, to really get somewhere, we’ve really got to wrap our heads around 1, 2, and 3 above. That’s what I’m going to tell them at the “hack day” today—and the great thing is that, unlike some of the people we’re trying to reach, I know this crowd is very open to new ideas, and new information.

So here’s to finding out what actually works in our quest to make the world less “truthy”—one app at a time.
