News Column

Social Media Make Rumor Mill Faster, Not Smarter


Admit it. Sometimes you're part of the social-media problem, spreading news and views you find online without knowing if the information is good.

Rumors and misinformation, granted, are ancient parts of discourse. But this is an election year, and social media platforms have turned the rumor mill into a supercharged rumor turbine, something that can be electronically manipulated and monitored. And that changes the political game.

Researchers will be closely watching how it plays out, especially through Twitter.

That's because they can. The open platform is enabling scientists to build computer models that help them see how misinformation travels.

"What makes social media different is that we have much easier ways of tracking how rumors spread," said Jonah Berger at the Wharton School of the University of Pennsylvania, who studies "social epidemics."

What worries many experts -- even some ardent defenders of free speech -- is that bad information that moves fast enough and far enough, through the power of Twitter, YouTube and Facebook, has the potential to warp the democratic process.

A well-crafted lie that goes viral the week before an election could affect outcomes.

That's the dark side of social media -- "there's more libel, more defamation, more urban myths and harmful information getting out," said David L. Hudson Jr., of the First Amendment Center, a think tank that advocates the tenets of our first freedoms. "I don't like to sound like a censor, I'm for free speech. But I am concerned about this open spreading of rumors ... and the rushing to judgment.

"We're approaching a sobering realization that this new, revolutionary media does come with some dangers."

Political campaigns this year will pour record sums -- perhaps 10 percent of their resources -- into establishing a presence in social media, which strategists view as both an opportunity and a potential curse. Some experts envision races hinging on the campaign errors, misstatements and smear tactics that rivals engineer to go viral.

When Twitter or YouTube pushes the propaganda, "it all becomes public ... which I think is a good thing," said Jeff Roe of the political consulting group Axiom Strategies, headquartered in Kansas City.

Using social media is free, making it a no-brainer communication tool -- not only for groups that seek to propagate their version of a story, but also for the tens of millions of Americans on the receiving end. But Roe doesn't see it as a great bargain:

"Statements made in error that go viral can be very expensive to a campaign" when it needs to fight back.

The technologies of new media turn everyone who uses them into a news source, blasting out information, with attached links, in one click.

"There's a certain ego that goes with being the first to hear something and share it, whether it's true or not," said Eric Melin of Spiral16, an Overland Park consulting firm using 3-D imagery to chart the circuitous paths of attack tweets, damaging rumors and viral tales that spring from social media.

It may be a truth, a half-truth or the early stage of a hoax -- the finger found in Wendy's chili went viral in Facebook's early days before police exposed it as a scam.

This urge, this snap reflex to share a rumor in an instant, has a name: FOMO -- fear of missing out, or being the last in your network to know.

Berger of the Wharton School has found that news on the Internet is most apt to go viral when it touches extreme emotions -- like laughter or anger. Both are kryptonite to businesses and organizations, including political campaigns, that are trying to project honest, everyday values.

In politics, "grass roots" is everything. But social media platforms have given rise to a new strategy to watch out for: the "Astroturf campaign."

It's designed to look like the online conversations of regular people when it's really spawned by insiders shooting automated messages they hope will catch fire.

Among those watching for this will be Indiana University computer scientist Filippo Menczer, whose research team first tracked Astroturf campaigns in the 2010 elections.

"Everyone's doing it -- fake tweets and fake accounts" in an effort to attract real-life Twitter followers into the discussion, he said.

And the wide-open nature of social media makes manipulation all the more tempting. Interactive service providers such as YouTube, Tumblr, Facebook, Pinterest and Twitter are effectively immune from lawsuits over user-posted content, thanks to a 1996 federal law, the Communications Decency Act.

"This is the wild west," Menczer said, "where there's no control whatsoever of social-media content."

Friend to friend

It's hard to knock what social media have achieved so far.

They've been credited with empowering the previously powerless, liberating peoples from oppressive regimes, exposing bad behavior among public officials.

(Some of that behavior was related to social media, such as the sharing of sexually explicit photos that drove U.S. Rep. Anthony Weiner, a New York Democrat, out of office.)

The instantaneous, friend-to-friend-to-friend magic of the platforms, however, also fueled swine-flu scares in 2009, when Kansas City-area schools had to respond to false rumors of outbreaks.

Even if the technology allows information -- and misinformation -- to spread in a flash, it allows countless users to fact-check and verify just as quickly, said Kevin Bankston of the Center for Democracy and Technology, a nonprofit that promotes free, unfettered expression on the Web.

"It's always been that a lie will make itself halfway around the world before the truth can get its boots on," Bankston said. "Today, the social media turbocharges that process...

"Still, this access we all have to knowledge and instantaneous sources of information is a good thing for humanity."

The old-fashioned forms of media put out bad information, too. It was The New York Times, after all, that erroneously declared U.S. Rep. Gabrielle Giffords dead from a shooting in Arizona -- an embarrassment the newspaper attributed to a reporter bypassing editorial checkpoints to rush copy to the Web.

But only the Wild West of social media could deliver the following fake report on the @foxnewspolitics Twitter page.

@BarackObama has just passed. The President is dead. A sad 4th of July, indeed.

A hacker had infiltrated the Fox News account, which had 36,000 followers, and began posting several reports of Obama having been assassinated in Iowa.

The fraudulent posts first appeared in the hours after midnight last Independence Day, and though FoxNews.com quickly spotted the hoax, the news network had to wait hours for Twitter to respond to Fox's request to reclaim the account.

Delays at Twitter kept the bogus news displayed past dawn.

Earlier this month, the FBI and New York Police Department opened an investigation into a potential terror threat after several digitally enhanced images of the New York skyline appeared on an Islamic terrorist group's online forum. One graphic carried the caption, "Al Qaeda coming soon again in New York."

Terrorist organizations commonly weave empty threats into social media. The "coming soon" graphic is likely another one, said Steve Stalinsky of the Washington-based Middle East Media Research Institute, which monitors the Web activity of terrorist groups. But could a flurry of idle threats lead to a "cry wolf" complacency that puts America at greater risk of a real attack?

"The Taliban has several Twitter accounts and they're very social-media savvy," Stalinsky said. "YouTube is totally infested with Jihadi propaganda ... Why is this allowed to happen?"

Most social-media platforms will flag or remove hate speech and deceptive spam when such material is brought to the service provider's attention. Twitter announced early this year that it will restrict offensive content "in countries that have different ideas about the contours of freedom of expression."

The company cited the examples of France and Germany, which ban pro-Nazi content.

Gone viral

Recent cases of social-media causes gone viral underscore the benefits of the public platforms as well as the drawbacks.

Last month, the hottest video in the history of YouTube turned out to be artful spin, the story of an East African conflict almost two decades older than YouTube itself.

The "Kony 2012" mini-documentary nonetheless seemed fresh, credible and urgent to Twitter and Facebook users, who shot out links to the half-hour video, from friend to friend, until it drew more than 25 million views.

The clip elicited public horror and a supportive U.S. Senate resolution for the "invisible children" of Uganda, youngsters abducted and enslaved as soldiers by rebel leader Joseph Kony.

Foreign-policy experts eventually pointed out that Kony hadn't been stirring much trouble and hadn't even been seen in Uganda for several years. Donating money to help the country capture him, as the viral video implored, might not be such a wise thing, traditional news sources reported.

An online petition campaign launched by a Texas mother set off alarms over a ground-beef additive dubbed "pink slime." The cheap, finely textured filler has been served up on school lunch trays, diner counters and kitchen tables for decades, and it's treated with ammonium hydroxide to kill bacteria.

The federal government and some food-safety groups say pink slime is safe. But the public outcry was virulent enough to shut down some meat factories and drive grocers to clear their shelves of ammonia-treated beef.

Many school districts, bowing to online petitions, pledged from then on to serve only the more expensive, slime-less beef.

As with "Kony 2012," the pink slime controversy raised awareness and triggered citizen action in ways once unimaginable. But food without the additive will require more cattle, and industry groups say the public will pay more to stock school cafeterias.

David B. Schmidt, president of the International Food Information Council, issued an online statement:

"Something is seriously out of kilter in our communications environment when safe food products and proven technologies can be torpedoed by sensationalist, misleading, yet entertaining social media campaigns. We should all take several steps back and remember the critical thinking skills we were taught in school."

Defenders of unregulated social media, and there are plenty, counter: We were also taught democracy in schools. If not for throngs of Facebook friends and everyday tweeters, U.S. Rep. Chellie Pingree, a Maine Democrat, might never have introduced a labeling bill to at least let consumers know when they're buying pink slime.

Change.org -- the petition site that gave rise to the pink-slime crusade (and also sharpened national attention on the killing of Florida teen Trayvon Martin) -- removes discriminatory causes and postings that call for violence. Website spokeswoman Megan Lubin said those cases were rare: "Most everyone is responsible when using" the open platform.

"It was the first time in history that more than 1 million comments were generated on a food petition at the FDA," said Sue McGovern, spokeswoman for the Just Label It Campaign. "The exact number was 1,149,967 ... It's those mammoth, historical numbers that Washington, D.C., is taking a look at" in the viral age.

Tracking tweets

Some contend the best way to thwart the dangers of social media is to fight fire with fire -- better technology.

The U.S. government is pushing to detect online persuasion campaigns and to develop "counter-messaging" software against "adversaries (who) may exploit social media and related technologies for disinformation," according to a Pentagon statement to The Wall Street Journal.

"It's an arms race," said disinformation sleuth Menczer of Indiana's Center for Complex Networks and Systems Research, recipient of a $2 million Defense Department grant. "We may develop better detection tools only to see political and commercial interests invest in beating these tools."

The center he directs has a website, Truthy.indiana.edu, that monitors the Twittersphere to detect how political groups take advantage of it.

The Truthy project spotted suspicious patterns in the 2010 elections. Several Twitter accounts created simultaneously -- along with Web links launched the same day -- gave the illusion of real people having conversations. In fact, they were dummy accounts automatically tweeting and re-tweeting each other.

Followers of those accounts would get the sham tweets and be directed to Web sites resembling news organizations, Menczer said. Some of the reports would accuse a campaign's opponent of backing legislation such as health reform and cap-and-trade proposals for personal gain.
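For readers curious what that pattern looks like to a computer, here is a minimal sketch in Python. It is hypothetical, not Truthy's actual code: the account names, timestamps and the 10-minute threshold are invented for illustration. It flags groups of accounts created within minutes of one another whose retweets mostly stay inside the group -- the two tells Menczer's team described.

    # A hypothetical sketch, not Truthy's actual code: account names,
    # timestamps and the 10-minute window are invented for illustration.
    from datetime import datetime, timedelta

    # Hypothetical records: (account, account_created_at, who_it_retweeted)
    records = [
        ("acct_a", datetime(2010, 9, 1, 12, 0), "acct_b"),
        ("acct_b", datetime(2010, 9, 1, 12, 2), "acct_a"),
        ("acct_c", datetime(2010, 9, 1, 12, 3), "acct_a"),
        ("human1", datetime(2009, 4, 20, 8, 15), "acct_a"),
    ]

    def creation_clusters(records, window=timedelta(minutes=10)):
        # Group accounts whose creation times fall within one short window.
        created = {acct: when for acct, when, _ in records}
        accounts = sorted(created, key=created.get)
        clusters, current = [], [accounts[0]]
        for prev, nxt in zip(accounts, accounts[1:]):
            if created[nxt] - created[prev] <= window:
                current.append(nxt)
            else:
                if len(current) > 1:
                    clusters.append(current)
                current = [nxt]
        if len(current) > 1:
            clusters.append(current)
        return clusters

    def inside_ratio(cluster, records):
        # Fraction of the cluster's retweets aimed at other cluster members.
        members = set(cluster)
        hits = [t in members for a, _, t in records if a in members]
        return sum(hits) / len(hits) if hits else 0.0

    for cluster in creation_clusters(records):
        print(cluster, "mutual-retweet ratio:", inside_ratio(cluster, records))

A ratio near 1.0 for a cluster of same-day accounts is the kind of signal that merits a human look; real followers retweet a much wider circle of people.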

Once the strategy goes viral and a topic, or "meme," is followed -- with Menczer's computers tracing common hash tags, URLs and repeated phrases -- digital images of the activity do resemble a biological virus.

But tracking this tangle of tweets, links and retweets back to the original source can be difficult, giving political campaigns deniability if confronted about the schemes.
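The grouping step itself is the easy part; attribution is where the difficulty lies. Below is a minimal sketch of that first step, again hypothetical rather than Menczer's code: it sorts tweets into "memes" by the hashtags and URLs they share, then ranks the memes by volume. The tweet texts are invented.

    # A minimal, hypothetical sketch -- not the Truthy system's code.
    # It groups tweets into "memes" by shared hashtags and URLs, then
    # ranks memes by how many posts repeat them; the tweets are invented.
    import re
    from collections import defaultdict

    HASHTAG = re.compile(r"#\w+")
    URL = re.compile(r"https?://\S+")

    stream = [
        "Candidate X profits from the bill! #scandal http://news.example/1",
        "RT shocking if true: #scandal http://news.example/1",
        "Lovely weather in Boston today #weather",
    ]

    memes = defaultdict(list)
    for text in stream:
        # A tweet joins one meme per hashtag or URL it carries.
        for marker in HASHTAG.findall(text) + URL.findall(text):
            memes[marker].append(text)

    # High-volume memes are candidates for the harder question the
    # article raises: who is actually behind the accounts pushing them?
    for marker, posts in sorted(memes.items(), key=lambda kv: -len(kv[1])):
        print(marker, len(posts), "posts")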

In the 2010 Massachusetts special election for U.S. Senate, Wellesley College scientists P. Takis Metaxas and Eni Mustafaraj detected a pattern of "Twitter-bombs" against Martha Coakley, the Democratic candidate.

During the week leading up to the vote, the researchers noted a spike in Web searches that directed users to a disproportionate flurry of tweets smearing Coakley. The social-media traffic built enough for Google to tag the race a "trending topic," and Republican Scott Brown scored a surprise victory.

The race in Massachusetts "was the first election in which social media absolutely changed the conversation," said Mustafaraj, who noted the anti-Coakley tweets carried morsels of truth.

"In order for these things to spread, it can't be a complete falsehood," Mustafaraj said. "You hope that other media will pick up on the story."

In time, other research shows, a social-media falsehood finds ways to die. Tracking the tweets from the zone of an earthquake that devastated Chile in 2010, computer analyst Barbara Poblete discovered that accurate reports from victims traveled faster and farther than did the false rumors.

Melin, of Overland Park's Spiral16, notices the same: Bad information has ways of correcting itself, a phenomenon that social media defenders attribute to the collective wisdom of crowds.

"It does seem to actually work in the end," he said. "Believe it or not."
