ENISA’s Threat Landscape Report 2022 – Part 9 – Disinformation & Misinformation

9. Disinformation & Misinformation

Digital platforms have become the primary source of news and media: social media platforms, news outlets, and search engines now dominate how information reaches a significant share of the public. These sites, however, operate by attracting viewers and generating traffic, so the information that generates the most views is often promoted even when it has not been verified. Current events, such as the ongoing conflict between Russia and Ukraine, have produced numerous stories that capture significant attention. It is important to distinguish misinformation, which is inaccurate information shared unintentionally, from disinformation, which is false or misleading information deliberately shared in order to deceive.

Advancements in cloud computing, AI tools, and algorithms have made it easier for malicious actors to create and spread disinformation. Both technical and social factors support these activities, giving state and non-state actors powerful tools and channels for disseminating false information. The same platforms let malicious actors experiment, monitor, iterate, and optimize the impact of disinformation campaigns. Such campaigns often serve as a precursor to other types of attacks, such as phishing, social engineering, or malware infection.

State and non-state actors are inundating people with disinformation and misinformation, along with related cyber operations, in an effort to create uncertainty, apathy towards the truth, exhaustion from trying to verify information, and fear. It is increasingly evident that disinformation and misinformation pose significant threats to democracy, open discourse, and a free and modern society. Policymakers should therefore treat disinformation as a core issue and take its security and privacy implications into account. The massive surge of disinformation attacks that preceded and accompanied the Russia-Ukraine conflict has brought the problem to the attention of the research community, governments, and the general public.

According to the World Economic Forum’s 2022 Global Risks Report, the combination of deepening digitalization and rising cyber threats can have far-reaching intangible consequences. The emergence of deepfakes and “disinformation-for-hire” is likely to exacerbate mistrust between governments, businesses, and societies. Deepfakes, for instance, could be exploited to influence political outcomes or sway elections. Disinformation also undermines public trust in digital systems and weakens cooperation between states, as cybersecurity is increasingly seen as a source of divergence rather than collaboration. The Global Risks Perception Survey (GRPS) identifies cross-border cyberattacks and misinformation as areas where risk-mitigation efforts have either just begun or not yet started.

According to the EU Project CONCORDIA, propaganda, misinformation, disinformation campaigns, and deepfakes are prevalent and designed to deceive users. These campaigns have a direct impact on people’s daily lives and society.

Russia-Ukraine war

Disinformation has been a tool of information warfare since the Cold War, and it experienced a resurgence in the United States after the 2016 election, during which Russia was accused of interfering with the electoral process. The conflict between Russia and Ukraine further highlighted the significance of disinformation in cyberwarfare, with disinformation campaigns playing a central role. Before the physical conflict began, Russia ran mass disinformation campaigns to prepare for its invasion of Ukraine; unsubstantiated claims, such as an alleged Ukrainian plan to attack Donbas, were used as justification for military action. Maria Avdeeva, the Ukrainian founder and research director of the European Experts Association, explains that the approach was to overwhelm people with the quantity rather than the quality of information, a method of disinformation she likens to a DDoS attack on physical machines.

All actors involved in the conflict, including Russia, Ukraine, and other countries, used disinformation campaigns to promote their agendas. Russian disinformation focused on the motivations for the invasion and on Ukraine’s alleged aggression, while Ukrainian disinformation focused on motivating its troops and highlighting Russian military losses. AI-enabled disinformation in the form of deepfakes also played a significant role in the conflict, with fake videos circulating in which leaders such as Russia’s Vladimir Putin and Ukraine’s Volodymyr Zelenskyy appeared to voice support for the opposing side.

As a result of the growing threat of disinformation, major companies such as Meta, YouTube, and Twitter announced new measures to combat it in response to requests from the Ukrainian government, world leaders, and the public.

AI-enabled disinformation and deepfakes

AI has become increasingly prominent in the creation and dissemination of false information, and it makes the potential supply of disinformation effectively infinite. Bots that mimic human personas can easily overwhelm the “notice-and-comment” rulemaking process and community interactions by flooding government agencies with fake comments.
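As a rough illustration of how such comment floods can be spotted, the sketch below (not from the ENISA report; the sample comments and the 0.6 similarity threshold are invented for illustration) flags near-duplicate comment pairs using TF-IDF vectors and cosine similarity, since bot-generated floods often reuse lightly edited template text:

```python
# Illustrative sketch (not from the ENISA report): flag near-duplicate
# comments, a common signature of bot-driven "notice-and-comment" floods.
# The sample comments and the 0.6 similarity threshold are assumptions.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I strongly oppose this regulation because it hurts small businesses.",
    "I strongly oppose this rule because it hurts small businesses.",
    "This regulation will protect consumers and should be adopted.",
    "I strongly oppose this regulation, it hurts small business owners.",
]

# Represent each comment as a TF-IDF vector over word unigrams and bigrams.
vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(comments)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.6  # assumed cut-off; would need tuning on real data
for i, j in combinations(range(len(comments)), 2):
    if similarity[i, j] >= THRESHOLD:
        print(f"Possible duplicate pair ({similarity[i, j]:.2f}): #{i} and #{j}")
```

On real comment corpora the threshold would need tuning, and coordinated campaigns may paraphrase far more aggressively than this toy data suggests.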

In 2021, AI-powered social media played a significant role in the spread of disinformation, causing social chaos. Deepfake technology is evolving rapidly: supporting tools make deepfakes easier to create, and social media makes them easier to spread. Deepfakes, which cannot yet be fully countered, are making misinformation and disinformation campaigns increasingly credible.

Manipulation of content by political leaders has been a longstanding practice, but deepfakes have taken it to a new level, providing malicious actors with smart and user-friendly tools for generating fake content, such as audio, video, images, and text, that is nearly indistinguishable from the real thing. The power of AI allows malicious actors (both state and non-state) to build targeted attacks that mix individual profiling with personalized disinformation.
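To make this low barrier to entry concrete, here is a minimal, hypothetical sketch using the open-source Hugging Face transformers library with the small GPT-2 model; the prompt and parameters are invented, and no specific campaign in the report is claimed to have used this tooling:

```python
# Illustrative sketch: generating synthetic text with an off-the-shelf
# open-source model. The prompt is invented; this is only meant to show
# how little expertise fabricated content requires today.
from transformers import pipeline, set_seed

set_seed(42)  # make the sketch reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: officials confirmed today that"
outputs = generator(
    prompt,
    max_length=60,          # total length including the prompt
    num_return_sequences=3, # produce several variants at once
    do_sample=True,         # sampling is required for multiple variants
)

for i, out in enumerate(outputs, 1):
    print(f"--- Variant {i} ---")
    print(out["generated_text"])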

The widespread use of deepfakes and AI-based disinformation has eroded trust in information, media, and journalism. This scenario can produce the “liar’s dividend”, where the goal of a disinformation attack is not to spread fake news but to make the truth deniable. Disinformation has been used to target communities at large, stoking ideological conflicts, disrupting elections, and hampering efforts to limit the spread of pandemics. It is also frequently used to harm individuals: Microsoft reports that over 96% of deepfake videos involve pornography, while other attacks target people’s reputations. Such attacks can cause lasting damage even after the disinformation has been debunked.

Disinformation-as-a-Service (aka disinformation-for-hire)

Large-scale professional disinformation campaigns are commonly produced by governments, political parties, and public relations firms. In recent years, however, a growing number of third-party organizations have begun offering disinformation services, carrying out targeted attacks on behalf of clients. These services are available in numerous countries, and an increasing number of non-state and private commercial organizations are using them.

The trend towards disinformation-for-hire is growing, making disinformation campaigns easier to implement and manage. When coupled with deepfakes, these services are likely to exacerbate mistrust within society. According to the Center for International Media Assistance (CIMA), disinformation-for-hire has become a booming industry, with private marketing, communications, and public relations firms being paid to spread false information and manipulate online content to sow discord. CIMA estimates that at least $60 million has been spent on such propaganda services since 2009.

Various

  • Disinformation attacks on elections continue to be a critical concern, and Microsoft reports shutting down numerous websites targeting elected officials, candidates, activists, press, and democracy-promoting organizations.
  • In PwC’s survey of 3,602 respondents, 19% expected a significant increase, and a further 33% an increase, in reportable incidents from these threat vectors and actors in 2022 compared to 2021.
  • Modern computing infrastructure, social media, data generation tools, and AI-enabled disinformation campaigns are increasingly sophisticated and target the fundamentals of democracies.
  • Microsoft warns that threat actors are increasingly combining cybersecurity and disinformation attacks to achieve their objectives.
  • An analysis of 200,000 posts on Telegram, a largely unmoderated platform, revealed that posts linking to misleading information sources were shared more widely than those linking to professional news content, although the activity was concentrated in only a few channels.
  • The Australian Code of Practice on Disinformation and Misinformation has been adopted by Twitter, Google, Facebook, Microsoft, Redbubble, TikTok, Adobe, and Apple.
  • TikTok’s design, which favors fast, non-validated video posting, has made the platform a disinformation vector.
  • In 2021, non-expert attackers used Generative Adversarial Networks (GANs) to support disinformation campaigns and spoof social media profiles (a minimal sketch of the GAN mechanism follows this list).
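
For readers unfamiliar with the mechanism behind that last point, the sketch below shows the core GAN training loop in PyTorch on toy 2-D data: a generator learns to map random noise to samples that a discriminator cannot distinguish from “real” ones. All dimensions, hyperparameters, and the synthetic “real” distribution are assumptions made for illustration; actual profile-spoofing GANs operate on images at a far larger scale.

```python
# Illustrative sketch of the GAN mechanism on toy 2-D data (assumed setup;
# not code from any actual disinformation campaign). A generator maps random
# noise to samples; a discriminator learns to separate them from real data.
import torch
import torch.nn as nn

torch.manual_seed(0)
NOISE_DIM, DATA_DIM = 8, 2  # assumed sizes for the toy example

generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, DATA_DIM) + 3.0        # toy "real" distribution
    fake = generator(torch.randn(64, NOISE_DIM))  # generator samples

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into predicting 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

The same adversarial loop, scaled up to convolutional networks and face datasets, is what makes GAN-generated profile photos realistic enough to pass casual inspection.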


About this article
This article is based on ENISA’s Threat Landscape Report 2022; the full report is available from ENISA.
