The COVID-19 Infodemic: When One Epidemic Hides Another

By: Stéphane Duguin, CEO of the CyberPeace Institute

On February 2nd, 2020, as the world was facing one of the greatest health crises of the 21st century, the World Health Organization announced a new pandemic: the “Infodemic”. In just a few short weeks, this epidemic of false and erroneous information became the vector of large-scale cyberattacks.

As we have pointed out in our previous publications, this phenomenon did not originate with COVID-19. Previous epidemics such as Ebola, SARS, and Zika, as well as events that provoked global media coverage, such as the terrorist attack on the French weekly Charlie Hebdo, also led to disinformation campaigns designed to facilitate cyberattacks. The mechanics of such campaigns have not changed: they are based on the exploitation of vulnerabilities, fears, anxieties, and racist and xenophobic behavior. But in the case of the COVID-19 infodemic, the convergence of otherwise unrelated factors has led to an unprecedented acceleration of such tactics: increased Internet penetration; the rise of semi-private chat groups (e.g., WhatsApp, Telegram); confinement measures leading to increased consumption of digital information; distrust of the scientific community; and insidious racism giving rise to propaganda.

Accelerating cyberattacks through disinformation requires manipulation of potential victims.

In the era of instantaneous global communication, where “Truth” and “Falsehood” are nothing more than fluctuating values in an unregulated market of information, this manipulation becomes mainstream. Can you prevent COVID-19 by wearing a face mask or by injecting detergent into your veins? The winning concept is the one that goes viral the fastest, becoming effective bait to lure potential victims into downloading or clicking.

This situation makes an infodemic a unique threat. In contrast to a physical illness, where the modes of contamination are known and understood, the vectors of infection of an infodemic are in constant mutation. Some attacks emerge from political propaganda, some are crafted to manipulate and trick, others are deliberately absurd, shocking, gory, abject, sexist, and/or racist, with the sole intention of going viral, before being consigned to the trash heap of the Internet. The forms of an infodemic may vary, but at their core they all share the same DNA: to alter people’s perception of reality, control their beliefs, change how things are presented and manipulate behavior.

Beyond facilitating cyberattacks, this type of infodemic represents a unique menace to all concepts of peace, not only in cyberspace but also in the public sphere.

It locks populations within echo chambers of misinformation, accelerates sectarianism and impacts some of the fundamentals of modern society — the sense of community, the confidence in science, the sharing of common values. The scale of this menace requires a collective response. It is critical that all stakeholders in a position to implement concrete responses — that is, governments, the media, and the tech industry — be held to account.

From the beginning of this infodemic, the major players of the Internet have been under the spotlight as their platforms have been used to accelerate the spread of disinformation. Following years of stonewalling on the need for, and legality of, systems to moderate content on their platforms, the largest social media companies have rapidly put procedures in place for moderating content.

Despite this quick reaction, the response shows systemic limitations, such as the imprecision of automated systems for reviewing content (i.e., artificial intelligence) and the heavy reliance on a human workforce. This reality accentuates the asymmetry of capacity between major players and smaller platforms, the latter being unable to hire sufficient workers to cope with the challenge at hand. Beyond its obvious cost, this reliance on manual review of content creates a new risk beyond the infodemic itself: exposing ever more staff to traumatizing and manipulative content potentially brings vicarious trauma at scale. In this context, any response to the challenge of this infodemic must be human-centric.

For example, human-based moderation must be backed by technology and by psychological support for those who are exposed on a daily basis to the worst of the Internet. In addition, solutions that implement technological responses across platforms should be prioritized.

Any technical procedures that support moderation must be widely accessible and shared, not only with the entire industry but also with civil society, which should have the capacity to audit such technology for its potential impact on fundamental human rights.

As disinformation and cyberattacks pass from platform to platform within seconds, it is essential for the digital industry to meet this decentralized threat with a communal response. On top of moderation, the digital industry needs to re-evaluate its practices in terms of platform architecture. The technologies we use to access information are not neutral. Many interfaces are designed not only to facilitate access to content, but also to retain the user on the platform for as long as possible. Attention is a primary resource, and the actors of the industry compete for it. This competition should not rely on means that are detrimental to the user, for example by rendering the user more vulnerable to disinformation. Best practices for responsible design exist, for example practices that reduce the financial incentives to produce “fake news”; the digital industry needs to adopt them.

Beyond the digital industry, the response of the media (in the broadest sense of the term) is just as important. It is true that “fake news” and “bullshit news” spread very rapidly through private or semi-private networks such as Facebook groups, WhatsApp chats or Telegram channels, where they escape moderation. But even within this decentralized system, actors at the top of the information chain, such as the media and influencers, still have a key role to play in fact-checking. Any action taken will be worthless if these actors serve as a haven for disinformation. In the context of an infodemic, the media and influencers have an even greater responsibility to verify the quality and veracity of the content they publish. The good news is that these communities can draw on existing, actionable resources, such as fact-checking initiatives. All in all, the media industry must promote quality professional journalism with the skills and knowledge to decipher disinformation.

But beyond the digital industry and the media, it is also important to remember our collective responsibility. If an infodemic thrives, and cyberattacks accelerate with it, it is because we consume and spread disinformation and misinformation. This simple fact is perhaps at the core of the challenge. It is surely also the most promising of the solutions.

In fact, if a critical attitude and a profound respect for the fundamental rights of others are key in the physical world, why would it be any different in cyberspace? Responsible behavior in cyberspace is a collective effort in which everyone participates. Just as simple actions keep COVID-19 at bay, simple gestures exist to preserve cybersecurity. The CyberPeace Institute suggests a number of such gestures for countering disinformation and cyberattacks; they are just a few among many. The Internet abounds with resources available to anyone who wishes to defend themselves against the Infodemic. It also empowers those who refuse to be part of the Infodemic’s long chain of rumors and disinformation. As a matter of fact, sharing “fake news” or “bullshit news” is neither innocent nor without danger: it contributes to cyberattacks. In that regard, cyberspace provides many opportunities for those who wish to be involved in the creation of a global response to this digital plague.

The actions of industry, media and civil society are critical. They also require leadership from States. We commend the ambition of nation-states to facilitate a collective response to the Infodemic, as expressed in the Cross-Regional Statement on “Infodemic” in the Context of COVID-19, and we call on them to propose tangible actions to walk the talk. States must invest in structures to facilitate, promote and protect professional and independent journalism and fact-checking.

States must ensure access to a cyberspace that is non-discriminatory and that allows for the free exercise of human rights. They must also affirm that the use of the Internet against the people of the world is unacceptable, whether it be to attack systems or to manipulate minds.

The response to this infodemic will not happen in a silo: it has to be collective and not focused solely on technology. Cyberspace is a unique global ecosystem where the mind thrives and where communities create and exchange knowledge. In this ecosystem, the viral spread of an infodemic is only one symptom of profound transformations, and the response must be inclusive of all sectors of society.

About the Author:

Stéphane Duguin is the Chief Executive Officer of the CyberPeace Institute. He coordinates a collective response to decrease the frequency, impact, and scale of cyberattacks and cyber operations by sophisticated actors. Prior to this position, Stéphane Duguin was a senior manager and innovation coordinator at Europol. He led key operational projects to counter both cybercrime and online terrorism, such as the setup of the European Cybercrime Centre (EC3), the Europol Innovation Lab, and the European Internet Referral Unit (EU IRU). A leader in digital transformation, his work focuses on the implementation of innovative and human-centric responses to large-scale attacks in cyberspace.

The CyberPeace Institute on #Infodemic:

Stay tuned and join the CyberPeace Institute in its efforts to #StopInfodemic!

 


The CyberPeace Institute is an independent, non-profit organization with the mission to enhance the stability of cyberspace. It does so by supporting vulnerable communities, analysing attacks collaboratively, and advancing responsible behaviour in cyberspace.

© Copyright: The CyberPeace Institute
