
Can Digital Disinformation Be Disarmed?

With its growing impact on real-world events, online disinformation is not a problem that any society can afford to ignore. But because the problem touches on such a wide range of policy issues, economic sectors, and fundamental democratic principles, it will not be solved with just one policy or approach.

The storming of the US Capitol on January 6 imbued longstanding concerns about digital disinformation with a new sense of urgency, because it showed just how easily online engagement can lead to offline action. It didn’t take long to reach this point. Digital disinformation first entered public consciousness with the Brexit campaign in the United Kingdom and Donald Trump’s election as US president. And yet, as the Capitol insurrection showed, the overall disinformation landscape of the last few years differs notably from that in 2016.

While foreign adversaries, bots, and fake accounts have dominated the dialogue about disinformation since 2016, over the last year, domestic influencers – real people with authenticated identities – have taken over. The narratives that were promoted also differed, with the coronavirus pandemic taking center stage. As a result, the scope of disinformation went global.

In the United States, false pandemic narratives vied with stories that sought to undermine confidence in the presidential election. Some sought to organize and incite violence.

But while the actors, techniques, and narratives have evolved, the fundamental problem remains the same. The online information environment we inhabit is dominated by for-profit social-media companies that rely on heavy user engagement to sell advertising space. This, coupled with people’s psychological predisposition to engage more with news that affirms their preexisting beliefs and identities, results in an information ecosystem where falsehoods travel six times faster than facts, on average.

And social-media usage continues to grow. By 2018, leading platforms such as Twitter and Facebook had surpassed print newspapers as a more frequent news source for Americans. In 2020, social media surpassed TV (cable, network, and local) as the primary source of political news in the US. Among US adults under the age of 30, 48% already get most of their political news from social media. As these younger cohorts age, those numbers will only grow.

SMOTHERED TRUTH

President Joe Biden’s arrival in the White House brings new opportunities for reform. Yet, while hundreds of reforms of social media have been proposed, there is little consensus on how best to improve today’s distorted information environment.

Current proposals can be broadly grouped into three categories based on their focus within the information ecosystem. Some measures are “upstream,” supporting the production of high-quality information, including both research and journalism. Others are “midstream,” working to change the behaviors of the dominant social-media platforms. And the third set of proposals looks “downstream” to foster healthier forms of audience engagement.

The challenges facing upstream and downstream efforts are well known to those in the field. Broadly speaking, improving the quality of information (upstream) is probably necessary, but certainly insufficient. There is no shortage of high-quality, evidence-based information on climate change and vaccines, for example, and yet false beliefs about these and other important topics continue to gain traction with the public.

Similarly, audience-facing (downstream) efforts like fact-checking and news-literacy programs are generally hard to scale up, and can fall prey to motivated reasoning and confirmation bias: people’s propensity to seek out information that reflects and reinforces their worldviews undermines many such interventions.

As for interventions to address the role of social-media platforms, few studies have assessed the many proposed solutions comprehensively. These “midstream” policy proposals can be broken down further into six categories, beginning with those that target the problem of disinformation most directly (content moderation and network curation), and progressing toward less closely related interventions (such as antitrust action), which are more likely to produce unintended consequences.

In any case, no single intervention can solve a problem of this size. Some combination of solutions will likely be required, and the devil will no doubt be in the details.

MODERATE AND CURATE

From the outset, the vast majority of platform-focused proposals have aimed to improve content moderation and network curation by social-media companies. This includes the architecture governing how users engage with information, the algorithms determining what content users see, and the types of groups or networks people are encouraged to join.

While regulation on all of these fronts would seem to offer the most direct approach to solving the problem, this strategy quickly runs into challenges. First, it is surprisingly difficult to define what counts as problematic content. For an article to qualify as “disinformation,” must it be entirely false? Or is it enough to show that an article includes some misleading facts alongside accurate information, or predominantly factual information that has been taken out of context or framed in an inflammatory light? Should these determinations apply to the article, the author, or the entire website?

Network curation, whereby algorithms help to recruit communities of likeminded users that can then become fertile ground for disinformation campaigns, is no less problematic. A 2018 internal report circulated at Facebook, revealed by the Wall Street Journal, found that 64% of people who joined an extremist group on Facebook’s platform did so only because the company’s algorithm recommended the group to them.

These problems persist. The “Stop The Steal” Facebook page – a hub of US election-related disinformation and a platform for organizing the Capitol insurrection – racked up more than 320,000 followers in less than 24 hours following the presidential election, making it one of the fastest-growing Facebook groups of all time.

Clearly, social-media companies cannot be trusted to regulate themselves. A growing chorus of voices is therefore calling for platforms to face greater legal liability for the problematic content they host. This would, it is hoped, strengthen incentives for improved content moderation.

This is the path forged by Germany with its Network Enforcement Law (NetzDG), which allows for fines of as much as €50 million for failing to remove “obviously illegal” content. In the US, however, this particular intervention would be possible only after amending Section 230 of the Communications Decency Act of 1996, which effectively indemnifies online service providers against liability for material posted by third parties. Although a key goal of this provision was to promote innovation and protect free expression online, it has been widely criticized for shielding technology companies from accountability for the harmful content appearing on their sites.

But modifying Section 230 could encourage technology companies to overcorrect by censoring any and all content that could pose legal risks to them. A great deal of legal, technical, and theoretical research is still needed to determine how best to modify Section 230, in order to balance liability with free speech.

POWER TO THE PEOPLE?

Some hope to avoid the content-moderation problem altogether through two audience-facing reforms: enhanced user controls and platform-deployed digital-literacy training.

Already, social-media platforms have been experimenting with enhanced user settings such as ad blockers, political ad filters, parental controls, URL filtering, and filters limiting who can respond to a post (a feature introduced recently by Twitter). Likewise, the European Commission’s new Digital Services Act, its most significant digital regulatory reform in two decades, seeks to provide “more user autonomy, choice, and control.” In the same vein, Mike Masnick of Techdirt, Francis Fukuyama of Stanford University, and others have proposed related “middleware” solutions. As Masnick notes, “Ideally, Facebook (and others) should open up so that third party tools [middleware] can provide their own experiences – then each person could choose the service or filtering setup that they want.”

Similarly, Stanford’s Daphne Keller points out that users might “choose a G-rated version of Twitter from Disney or a racial justice-oriented lens on YouTube from a Black Lives Matter-affiliated group.” But such solutions would require much greater technical capacity than currently exists, as well as significant investments of time for each competing middleware provider to analyze and label content at the required scale. And the platforms would have to volunteer, or be legally compelled, to cooperate.
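
To make the “middleware” idea concrete, here is a minimal sketch in Python. It assumes a hypothetical feed interface and third-party content labels – none of these names correspond to any platform’s actual API – and treats each middleware as nothing more than a filter or ranker that a user can compose over the same raw feed.

```python
# A minimal sketch of the "middleware" idea. All names (FeedItem, Middleware,
# g_rated_filter, etc.) are illustrative, not any platform's real interface.

from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class FeedItem:
    author: str
    text: str
    labels: set  # e.g. {"graphic", "unverified-claim"}, supplied by a third-party labeler

# A middleware is just a function that re-filters or re-ranks a feed.
Middleware = Callable[[Iterable[FeedItem]], List[FeedItem]]

def g_rated_filter(feed: Iterable[FeedItem]) -> List[FeedItem]:
    """Drop items a labeler has flagged as graphic or hateful."""
    blocked = {"graphic", "hate"}
    return [item for item in feed if not (item.labels & blocked)]

def fact_check_ranker(feed: Iterable[FeedItem]) -> List[FeedItem]:
    """Push items flagged as unverified claims to the bottom of the feed."""
    return sorted(feed, key=lambda item: "unverified-claim" in item.labels)

def apply_middleware(feed: Iterable[FeedItem], chain: List[Middleware]) -> List[FeedItem]:
    """Compose whichever filtering 'experience' the user has chosen."""
    items = list(feed)
    for mw in chain:
        items = mw(items)
    return items

# Example: one user opts into a G-rated, fact-check-aware view of the same raw feed.
raw_feed = [
    FeedItem("acct_a", "Breaking: miracle cure!", {"unverified-claim"}),
    FeedItem("acct_b", "City council meets tonight.", set()),
]
print(apply_middleware(raw_feed, [g_rated_filter, fact_check_ranker]))
```

The appeal of this design is that the raw feed and the rules for filtering it are decoupled, so editorial judgments migrate from a single company to a competitive layer of third-party providers that users choose among.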

Moreover, whether users would take advantage of such options remains unclear, given that most people simply accept a platform’s default settings. And if platform regulation seems challenging now, researching, monitoring, and regulating thousands of mini-Facebooks would be harder still.

The second key reform – platform-deployed digital-literacy training – is based on the argument that internet users need new tools to navigate the modern information ecosystem. More than 20 groups – including the News Literacy Project, First Draft News, and the global development and education organization IREX – have already launched programs.

More recently, inoculation, or “pre-bunking,” has emerged as an alternative to fact-checking and debunking (which occur after the fact and have proven largely ineffective). Such efforts seek to expose people to weakened doses of misinformation preemptively and in a controlled manner, so that they learn to spot the real thing. For example, in collaboration with Nudge Lebanon and the Qatar Behavioral Insights Unit, researchers have experimented with exposing audiences to weakened versions of traditional extremist online-recruitment techniques. These methods appear to help cultivate “mental antibodies” against fake news.

But these efforts run into problems of scale and selection. People are busy. Even if they can find the time for special training, those who would enroll in such programs are not necessarily the people most vulnerable to misinformation. They are the choir, not the congregation.

If training in news literacy could be provided directly through the platforms, however, the scale of the solution would be far better tailored to the scale of the problem. Here, some early experiments appear promising. Twitter’s recent addition of prompts encouraging users to “read the article before you tweet it” led to a 33% increase in users opening articles before sharing them. Similarly, studies from MIT show that “accuracy nudges” prompting users to think about a story’s veracity before they share it can help reduce the spread of misinformation. And Facebook announced an initial investment of $2 million in 2019 to support media literacy projects, and followed up with additional support for such projects in 2020.

With continuing advocacy, pressure on the major platforms, and research into the efficacy of different methods, further progress could be made.

THE LIMITS OF PRIVACY PROTECTION

Data privacy bears directly on the problem of disinformation, too, and few regulatory issues have been discussed more widely in recent years. User data (revealing someone’s age, gender, location, political leanings, voting behavior, and so forth) play a key role in the spread of disinformation, by enabling effective audience targeting. With tools like Facebook’s “custom” or “look-alike” audiences, advertisers can direct disinformation specifically to all those who might share some similarities with, say, anti-vaxxers or others prone to belief in conspiracy theories.
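
To illustrate the general principle behind look-alike targeting – not Facebook’s actual system, whose features and thresholds are proprietary – the Python sketch below summarizes a seed audience’s attribute profile and then selects everyone whose profile is sufficiently similar. All attributes, values, and thresholds here are invented for illustration.

```python
# Illustrative sketch of "look-alike" audience expansion in principle:
# find users whose attribute vectors most resemble a seed audience.
# Not any platform's actual algorithm; data and threshold are invented.

from math import sqrt

def cosine(u, v):
    """Similarity between two user-attribute vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical feature vectors: age bracket, location cluster, interest flags, etc.
seed_audience = [          # e.g. users who engaged with anti-vaccine pages
    [1, 0, 1, 1],
    [1, 0, 1, 0],
]
candidates = {
    "user_123": [1, 0, 1, 1],
    "user_456": [0, 1, 0, 0],
    "user_789": [1, 0, 0, 1],
}

# Summarize the seed audience, then target everyone above a similarity threshold.
centroid = [sum(col) / len(seed_audience) for col in zip(*seed_audience)]
lookalikes = [uid for uid, vec in candidates.items() if cosine(vec, centroid) > 0.8]
print(lookalikes)  # the users whose profiles most resemble the seed audience
```

The point is that even coarse behavioral data makes it cheap to find – and flood – precisely the audiences most receptive to a given falsehood, which is why the data itself has become a regulatory target.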

As Facebook’s former chief security officer, Alex Stamos, argues, simply limiting third-party use of the data that enables personalized advertising can defuse disinformation by depriving it of a target. And without a target, there is a higher chance that this content will simply become digital noise. In recent years, privacy legislation such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act, though not directly intended to address disinformation, have helped to limit how corporations can use personal data.

But other experts, including Fukuyama, warn that privacy is no panacea. For starters, enhanced privacy would not reduce the unprecedented concentration of editorial power that the leading platforms now command. Fukuyama likens this power to “a loaded weapon sitting on a table”: it is tolerable, to some, under the current (presumably liberal) platform leadership, but it is neither sustainable nor ultimately compatible with liberal democracy.

Moreover, Fukuyama points out that the proverbial horse has already bolted from the barn. By preventing new entrants from accumulating massive stores of user data, stricter privacy laws might lock in the market power of first movers like Google and Facebook, which presumably would retain access to the data they have already accumulated.

SUBSTITUTE STRUCTURES?

It is now well understood that social-media companies position advertisers as “the customer” and users as “the product” whose attention is being sold. But this business model is not inevitable. One alternative is the subscription model, whereby users would pay to access the services of a platform like Facebook, obviating the need for the most “engaging” content to be elevated over everything else.

This model, however, could prove problematic globally, particularly for users in less-developed countries, who rely heavily on “free” platforms. Indeed, in some countries, Facebook is effectively synonymous with the internet. Moreover, the demonstrated unwillingness of many readers to pay for newspaper subscriptions even in rich countries further dampens this model’s prospects.

Others have called for a “digital public square”: a government-funded, not-for-profit alternative to the current platforms. This could be financed, for example, by levying taxes on social-media companies to support “digital public infrastructure,” similar to public broadcasting in many countries. “The digital advertising industry is currently a $333 billion global market,” notes Ethan Zuckerman of the University of Massachusetts Amherst. “If we posit a 1% levy on highly surveillant advertising … we can easily posit a $1-2 billion annual fund to support public service digital media.”

But, again, such models face complications when it comes to funding and usage. Many countries, including the US, have comparatively small budgets – and little political appetite – for funding public broadcasting, and in the US, especially, that is unlikely to change. Similarly, the assumption that “if we build it, they will come” is unlikely to hold true in this domain. Audiences with an appetite for confirmatory, polarizing content will seek it out wherever they can find it. The Public Broadcasting Service (PBS) in the US has faced dwindling audiences, and even countries with substantial public-media budgets, such as the UK (think of the BBC), face problems of disinformation.

THE TROUBLE WITH ANTITRUST

Beyond data privacy, another widely discussed proposal would “break up the platforms.” Given the near-monopoly role that companies like Facebook, Google, and Twitter play in today’s information ecosystem, antitrust action has come to be seen as not just an economic remedy, but also a political one. If only users had more alternatives to choose from, according to this argument, no single platform could pose a heightened threat to public discourse.

But those calling for antitrust enforcement against the major platforms may be holding a hammer and seeing every problem as a nail. Antitrust law is a tool that regulators already know. And given the platforms’ dominance, it may even fit from an economic standpoint. But whether it is the best instrument for addressing the problem of disinformation is another matter.

Antitrust enforcement faces two fundamental challenges when applied to disinformation. First, in the US, where the major platforms are based, antitrust law would have to be dramatically overhauled. Unlike in the UK and elsewhere, US antitrust law is currently grounded in the principle of financial “consumer harm.”

But with social media, the harms to users are not monetary; the services that platforms are “selling” are free, because the “users” are the product. US antitrust law thus would need to be reframed to account for different types of harm. And regulators would need to resolve thorny questions about how to define social-media “markets,” and determine the dimensions along which platforms should be “broken up.”

Even if it were feasible, it is unclear whether antitrust action would have the desired effect. As with the proposed “middleware” solution, creating a multitude of platforms may make disinformation more difficult to monitor and regulate. Although introducing more competition might solve the problem of a few platforms exposing huge audiences to extreme, unscientific, or polarizing content, the proliferation of smaller platforms could silo users with fringe beliefs into more airtight echo chambers (which is likely the lesser of two evils, but nonetheless problematic).

Finally, it is not clear that breaking up the major platforms is even possible over the long term. Owing to “network effects,” wherein the value of a network increases with the size of its user base, another platform might quickly gain a monopoly position as everyone flocks to where all their friends are. A baby Facebook might simply grow up to be as large or larger than its parent.
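
A stylized illustration of that dynamic, in Python – a toy model with invented parameters, not an empirical claim – shows how a modest head start can compound when each new user simply joins the platform where more of their friends already are.

```python
# Toy model of network effects: new users join wherever more of their
# randomly sampled friends already are, so a small early lead tends to snowball.

import random

random.seed(0)
platform_sizes = {"A": 55, "B": 45}  # platform A starts with a modest lead

for _ in range(10_000):
    # Each new user samples five "friends" in proportion to current platform sizes...
    friends = random.choices(
        list(platform_sizes), weights=list(platform_sizes.values()), k=5
    )
    # ...and joins wherever most of those friends already are.
    choice = max(platform_sizes, key=friends.count)
    platform_sizes[choice] += 1

print(platform_sizes)  # typically, the early leader captures the overwhelming majority
```

Under this toy dynamic, splitting a dominant platform in two merely restarts the race back toward concentration.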

SUPPORTING INFRASTRUCTURE

Regardless of which policy proposals (or combinations of proposals) are pursued, a range of supporting interventions will be essential to help address digital disinformation. The first is research. We need more longitudinal studies into the problem, as well as improved definitions and frameworks to guide decision-making. Since 2016, many important new organizations have been created to deepen our understanding and identify potential solutions. But much remains to be done. As two leading scholars in the field explain in the 2020 book Social Media and Democracy:

“Responding to an environment of panic surrounding social media’s effect on democracy, regulators and other political actors are rushing to fill the policy void with proposals based on anecdote and folk wisdom emerging from whatever is the most recent scandal. The need for real time production of rigorous, policy-relevant scientific research on the effects of new technology on political communication has never been more urgent.”

While tens of billions of dollars are invested annually to research issues like climate change, governments have provided almost no support for research into digital disinformation. We desperately need more studies to determine what impact disinformation is having on readers’ beliefs and worldviews, and to assess the costs, benefits, and potential unintended consequences of proposed interventions.

Second, we need more transparency. Research is severely hampered by a lack of access to data from social-media platforms. While companies currently offer their own transparency reports, and some provide direct access to data, these avenues are highly limited. Transparency reports generally reveal aggregated statistics about what advertisers purchase, or what governments demand, and provide some limited insight into what users share. But they say almost nothing about how the platforms are behaving as independent influencers, determining what content users are exposed to and which groups they are encouraged to join. Comprehensive, privacy-protected data access for credible scholars is essential.

Third, ethics training for technologists is critical. As the downsides of new technologies have become increasingly apparent – with Google changing its 2001 motto, “Don’t Be Evil,” to “Do the Right Thing” in 2015, and Stanford reconsidering its notorious “Persuasive Technology Lab” – leading universities have become more attuned to the problem. For example, Stanford, Harvard, and MIT – which helped to produce some of Silicon Valley’s leading technologists – are developing new courses (“Ethics of Technological Disruption”) and ethics centers (such as Stanford’s Ethics, Society, and Technology Hub).

Finally, because the problem of disinformation spans multiple online platforms, and affects both the public and private sectors, more cross-sector coordinating infrastructure will be essential. Yet, to date, there is surprisingly little formal infrastructure to support learning and collaboration in the field. Related efforts like the Global Internet Forum to Counter Terrorism and the Global Network Initiative, which focuses on privacy and free expression, have been successful on some fronts, but none is focused specifically on improving the quality of the online information ecosystem. More recently, however, the Cyberspace Solarium Commission and US Senator Mark Warner’s office have introduced such proposals.

WHERE TO START

There are no silver bullets. But a combination of the interventions described here can help. As in any field, progress will require either voluntary change on the part of the problematic actors (social-media platforms) or government mandates (through regulation or litigation).

The leading platforms have long pledged that they can independently address disinformation through improved natural language processing, machine learning, and artificial intelligence. A technological solution is indeed appealing. Platforms can respond much faster than governments, and their decision-making is inherently better informed by real-time changes in the tech world.

But “self-regulation” is problematic for three reasons. First, so long as platforms’ revenue models depend on high user engagement, emotionally triggering and inflammatory content will likely continue to dominate. Second, beyond these economic disincentives to change, platforms have long espoused a philosophical commitment to free speech – and, notwithstanding the decision to deplatform Trump after the Capitol insurrection, they have been reluctant to serve as “arbiters of truth.”

Lastly, even if platforms are willing to moderate content, they may lack the technical or operational capabilities to do so. One recent study found that among posts containing potentially dangerous disinformation about COVID-19 that had already been debunked, 59% remained up on Twitter, 27% on YouTube, and 24% on Facebook.

Clearly, the optimal path runs through government oversight, which has the virtue of being both democratically informed and enforceable. But implementing optimal regulation in the current environment will not be easy, particularly in the US, where political polarization and legislative gridlock have made it hard to make progress on this or any other issue of significant public concern. And even if the US government is now in a more functional state, with Democrats controlling the executive and legislative branches, government incentives to regulate tech companies are complicated by the fact that the Big Five – Facebook, Amazon, Apple, Microsoft, and Google – have a combined market capitalization equivalent to 20% of US GDP. Some of these tech giants recently joined the ranks of the top ten lobbyists in the US.

Thus, any proposed regulations that appear to threaten these companies’ core business models will face fierce and well-financed resistance. And even if the US government (or, more likely, the EU) proves able and willing to act, it will have to weigh many difficult trade-offs: privacy versus transparency; free speech versus accuracy; diversity versus epistemic agreement.

To spur government to act, despite these challenges, public pressure will be essential. There have already been some boycotts by users (#DeleteFacebook) and customers, such as corporate ad buyers signing on to efforts like #StopHateForProfit and Sleeping Giants. To bring about real change, this public energy and attention will need to be increased and sustained over time.

SOCIAL MEDIA AND SOCIAL FRAGMENTATION

As the public square has moved online, societies have begun to fragment along racial, religious, partisan, and economic lines. Social-media platforms, rather than credentialed journalists, now hold significant power not only to communicate with the public but also to highlight key issues and to unite likeminded strangers, enmeshing these new groups in their own distinct (sometimes inaccurate) information systems.

This trend can be either constructive or destructive. The power to inform, unite, and organize almost instantly at a global scale is unprecedented. Though it offers many benefits, the new public square has been designed to maximize revenue growth, and thus to promote engagement by steering users toward novel, emotionally stimulating content, no matter how false, manipulative, or incendiary.

Among the possible solutions outlined here, four interventions appear most promising: improved transparency and data access; research on both the effects of disinformation and potential solutions; digital literacy and tech ethics training; and the development of formal and informal coordinating infrastructure. Unless significant work is done along these lines to improve the current information environment, it is hard to imagine a future in which evidence-based discourse anchors public consciousness.
