News Column

Only You Can Prevent Social Media Wildfires

January 9, 2013

LEE HOWELL

Social media can rapidly spread dubious and dangerous information. We need to build some firewalls to contain the risk.

In 1938, thousands of Americans famously mistook a radio adaptation of the H.G. Wells novel "War of the Worlds" for a genuine news broadcast. Police stations were flooded with calls from citizens who believed the United States had been invaded by Martians.

It is difficult to imagine a radio play causing such a misunderstanding today, when people can quickly check the latest headlines on their smartphones. But the Internet, like radio in 1938, is a relatively young medium. Is it conceivable that a misleading post on social media could spark a comparable panic?

We can think of this possibility as a "digital wildfire." In a hyperconnected world, information can travel with unprecedented speed and reach. The benefits of new communication technologies are many and obvious; the potential pitfalls are less so, and they inform one of the "risk cases" explored in the World Economic Forum's Global Risks 2013 report, which draws on expert input about which risks could manifest over the next 10 years.

Social media can rapidly spread information that is either intentionally or unintentionally misleading or provocative. In the summer of 2012, for example, a Twitter user impersonating the Russian interior minister, Vladimir Kolokoltsev, tweeted that President Bashar al-Assad of Syria had been "killed or injured"; crude oil prices rose by over a dollar before traders realized that Assad was alive and well. In September 2012, dozens of people were killed in protests over an anti-Islamic film that had been uploaded to YouTube.

In October 2012, during Hurricane Sandy, news organizations such as CNN were fooled by an anonymous Twitter user who tweeted that the New York Stock Exchange trading floor was under three feet of water. In this case, the false rumor was quickly put right by other Twitter users, demonstrating that social media can often self-correct. Nonetheless, it is possible to imagine two kinds of scenario in which a digital wildfire could cause havoc.

Firstly, in fast-changing situations -- such as when a natural disaster is unfolding or social tensions are running high -- damage could be done before a correction can catch up. The real-world equivalent is shouting "Fire!" in a crowded theater; even if the lack of fire quickly becomes apparent, people may already have been crushed in a scramble for the exit.

Secondly, we can imagine situations in which false information feeds into an existing world view, making it harder for corrections to penetrate. The November 2012 clashes in Gaza, in which both Israel and Hamas used Twitter extensively, show the growing importance of social media in conflict situations. It is possible to imagine an explosive situation being created as competing false rumors propagate in self-reinforcing loops among like-minded individuals.

What could be done to protect against the risk of digital wildfires? Many jurisdictions already have laws that limit freedom of speech in the real world for reasons such as incitement of violence or panic, and are grappling with how to apply those laws to online activities.

But the task of establishing legal restrictions on online speech is complicated by the fact that digital social norms are not yet entirely well established. New communication technologies, by nature, are not easily confined within national borders, and it would be difficult to limit online anonymity without compromising the usefulness of the Internet as a tool for whistle-blowers and political dissidents in repressive regimes.

Furthermore, any new regulations, however sensible they might seem, could also have unintended consequences. Last December, controversy flared at the World Conference on International Telecommunications in Dubai when the United States, among others, refused to sign a treaty seeking to establish certain online rules, fearing it could open the door to more government censorship of the Internet.

If we are to avoid creating new regulations, new norms will need to emerge. Users of social media typically have less to lose than traditional media outlets from spreading information that has not been properly fact-checked, and are typically less aware of laws related to issues such as libel and defamation. Nonetheless, as has already been seen, social pressure can exert a powerful influence.

Consumers of social media will also need to become more literate in assessing the reliability and bias of sources. Technical solutions -- such as programs and browser extensions that aim to help people assess the credibility of online information and sources -- could help. It is possible to imagine that automated flags for disputed information could ultimately become as ubiquitous among Internet users as programs that protect against spam and malware. What's more, if governments, public bodies and other institutions put out better-quality, better-audited, verifiable information in the first place, the risk of wildfires would be dampened.

The example of how radio broadcasting evolved after the "War of the Worlds" incident is instructive. Broadcasters learned to be more cautious and responsible, and listeners learned to be more savvy and skeptical. A similar shift could help to douse the digital wildfires of the future.




